Friday, June 22. 2018
The empty brain | #neurosciences #nometaphor
Via Aeon ----- Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
Img. by Jan Stepnov (Twenty20).
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight.

Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it.
If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer.
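The encoding described above is easy to verify. Here is a minimal Python sketch; it assumes the common ASCII/UTF-8 convention, which the essay does not name but which matches the byte values on most machines:

```python
# Each character of "dog" is stored as one byte: a pattern of 8 bits.
word = "dog"
for byte in word.encode("ascii"):
    print(f"{chr(byte)!r} -> byte {byte} -> bits {byte:08b}")
# 'd' -> byte 100 -> bits 01100100
# 'o' -> byte 111 -> bits 01101111
# 'g' -> byte 103 -> bits 01100111
```

Side by side, those three bit patterns are the word dog, exactly as the essay says: symbols all the way down.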
Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. Humans, on the other hand, do not – never did, never will.

Given this reality, why do so many scientists talk about our mental life as if we were computers? In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature.
In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
"The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain" Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics. This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain. Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. 
Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.

But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.

Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion.

Reasonable premise #1: all computers are capable of behaving intelligently.

Reasonable premise #2: all computers are information processors.
Faulty conclusion: all entities that are capable of behaving intelligently are information processors.

Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.

If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):
And here is the drawing she subsequently made with a dollar bill present:
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.

What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing? Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
"The idea that memories are stored in individual neurons is preposterous: how and where is the memory stored in the cell?" A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers. The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell? So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent. The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before. Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. 
Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.

From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.

As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.

We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.

Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.
When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.

A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.

A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science.
The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball. That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
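The ‘linear optical trajectory’ strategy can be illustrated with a toy simulation. The sketch below is my own illustration, not McBeath's actual model, and every parameter value is an arbitrary assumption: a 2-D fly ball, and a fielder who never predicts the landing point but simply stands wherever the tangent of his viewing angle to the ball grows at a constant rate.

```python
def run_to_catch(vx=10.0, vy=20.0, g=9.81, rate=1.0, dt=0.001):
    """Toy 2-D fly ball launched from the origin. The fielder never
    computes the landing point; at each instant he just positions
    himself so that the tangent of his elevation angle to the ball
    equals rate * t (a constant-rate 'linear optical trajectory')."""
    flight_time = 2 * vy / g          # time until the ball lands
    t = dt
    fielder_x = 0.0
    while t < flight_time:
        ball_x = vx * t
        ball_y = vy * t - 0.5 * g * t * t
        # keep tan(elevation) = rate * t  =>  stand at this offset
        fielder_x = ball_x + ball_y / (rate * t)
        t += dt
    landing_x = vx * flight_time
    return fielder_x, landing_x
```

For any positive rate the fielder converges on the landing point, which is the heuristic's appeal: no initial conditions, no internal model of the trajectory, just one visual quantity held to a simple rule.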
"We will never have to worry about a human mind going amok in cyberspace, and we will never achieve immortality through downloading." Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor. One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity. Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing. 
Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).

This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.

Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it.
This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)

Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013.
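To get a feel for the scale of the ‘snapshot’ problem described above, here is a rough back-of-envelope estimate. The neuron, synapse and protein counts come from the paragraph itself; the bytes-per-item figures are illustrative assumptions of mine:

```python
# Rough storage estimate for a single static "snapshot" of a brain,
# using the figures cited above plus illustrative assumptions.
NEURONS = 86_000_000_000           # ~86 billion neurons
SYNAPSES = 100_000_000_000_000     # ~100 trillion interconnections
PROTEINS_PER_SYNAPSE = 1000        # >1,000 proteins at each connection point

BYTES_PER_STRENGTH = 4             # assume a 32-bit float per synaptic strength
BYTES_PER_PROTEIN_STATE = 1        # assume a single byte per protein state

strengths = SYNAPSES * BYTES_PER_STRENGTH
protein_states = SYNAPSES * PROTEINS_PER_SYNAPSE * BYTES_PER_PROTEIN_STATE

total_terabytes = (strengths + protein_states) / 1e12
print(round(total_terabytes))      # roughly 100,000 TB for one frozen instant
```

Even under these crude assumptions, a single frozen instant runs to roughly a hundred thousand terabytes, before any of the moment-to-moment dynamics the author says actually matter, and before the life history that would be needed to interpret it.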
Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.

We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.
Posted by Patrick Keller
in Culture & society, Science & technology
at
14:32
Defined tags for this entry: cognition, computing, culture & society, intelligence, neurosciences, research, science & technology, thinking
Thursday, April 12. 2018
Vlatko Vedral - Decoding Reality | #quantum #information #thermodynamics
More about Quantum Information by Vlatko Vedral and his book Decoding Reality.
Via Legalise Freedom ----- Listen to the discussion online HERE (Youtube, 1h02). ... Vlatko Vedral on Decoding Reality -- The Universe as Quantum Information. What is the nature of reality? Why is there something rather than nothing? These are the deepest questions that human beings have asked, that thinkers East and West have pondered over millennia. For a physicist, all the world is information. The Universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the blueprints for our basic forms to future generations using a digital code called DNA. Decoding Reality asks some of the deepest questions about the Universe and considers the implications of interpreting it in terms of information. It explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. It describes the bizarre effects of quantum behaviour such as 'entanglement', which Einstein called 'spooky action at a distance' and explores cutting edge work on harnessing quantum effects in hyperfast quantum computers, and how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach up into our reality. The book concludes by considering the answer to the ultimate question: where did all of the information in the Universe come from? The answers considered are exhilarating and challenge our concept of the nature of matter, of time, of free will, and of reality itself.
Posted by Patrick Keller
in Culture & society, Science & technology
at
08:19
Defined tags for this entry: culture & society, research, science & technology, scientists, theory, thinkers, thinking
Tuesday, April 10. 2018
I’m building a machine that breaks the rules of reality | #information #thermodynamics
Note: the title and the beginning of the article are very promising, or teasing, so to say... But unfortunately it is not freely accessible without a subscription to the New Scientist. Yet as it promises an interesting read, I archive it on | rblg for the record and future readings. In the meantime, here's also an interesting interview (2010) with physicist Vlatko Vedral for The Guardian, from the time he published his book Decoding Reality, about information.
Via The Guardian -----
... And an extract from the article on the New Scientist:
I’m building a machine that breaks the rules of reality

We thought only fools messed with the cast-iron laws of thermodynamics – but quantum trickery is rewriting the rulebook, says physicist Vlatko Vedral.
Martin Leon Barreto
By Vlatko Vedral

A FEW years ago, I had an idea that may sound a little crazy: I thought I could see a way to build an engine that works harder than the laws of physics allow. You would be within your rights to baulk at this proposition. After all, the efficiency of engines is governed by thermodynamics, the most solid pillar of physics. This is one set of natural laws you don’t mess with.

Yet if I leave my office at the University of Oxford and stroll down the corridor, I can now see an engine that pays no heed to these laws. It is a machine of considerable power and intricacy, with green lasers and ions instead of oil and pistons. There is a long road ahead, but I believe contraptions like this one will shape the future of technology. Better, more efficient computers would be just the start. The engine is also a harbinger of a new era in science. To build it, we have had to uncover a field called quantum thermodynamics, one set to retune our ideas about why life, the universe – everything, in fact – are the way they are.

Thermodynamics is the theory that describes the interplay between temperature, heat, energy and work. As such, it touches on pretty much everything, from your brain to your muscles, car engines to kitchen blenders, stars to quasars. It provides a base from which we can work out what sorts of things do and don’t happen in the universe. If you eat a burger, you must burn off the calories – or …
Related Links:
Posted by Patrick Keller
in Science & technology
at
16:19
Defined tags for this entry: information, research, science & technology, scientists, theory, thinking
Monday, February 05. 2018
Environmental Devices · Projets & expérimentations (1997-2017) | #fabric|ch #exhibition
Note: 2017 was very busy (the reason why I wasn't able to post much on | rblg...), and the start of 2018 happens to be the same. Fortunately and unfortunately! I hope things will calm down a bit next spring, but in the meantime, we're setting up an exhibition with fabric | ch. A selection of works retracing 20 years of activities, which will also serve as the basis for a photo shoot for a forthcoming book. The event will take place in a disused factory (yet a historical monument from the 2nd industrial era) near Lausanne. If you are around, do not hesitate to knock at the door!
By fabric | ch ----- Environmental Devices · Projets & expérimentations (1997-2017)
Image: Daniela & Tonatiuh.
During a few days, in the context of the preparation of a book, a selection of works retracing 20 years of activities of fabric | ch will be on display in a disused factory close to Lausanne.
----- For a few days, and in the context of the creation of a monographic book, a selection of works retracing 20 years of activities of fabric | ch will be on display.
Posted by Patrick Keller
in fabric | ch, Architecture, Interaction design
at
18:59
Defined tags for this entry: architects, architecture, art, artificial reality, atmosphere, data, devices, experimentation, fabric | ch, interaction design, networks, projects, thinking
Thursday, September 21. 2017
Timothy Morton, “the philosopher prophet of the Anthropocene” | #hyperobjects #climate
Note: Timothy Morton introducing his concept of "hyperobjects" and "object-oriented philosophy".
Via e-flux via The Guardian (June 17) ----- Image of Timothy Morton.
The Guardian has a longread on the US-based British philosopher Timothy Morton, whose work combines object-oriented ontology and ecological concerns. The author of the piece, Alex Blasdel, discusses how Morton's ideas have spread far and wide—from the Serpentine Gallery to Newsweek magazine—and how his seemingly bleak outlook has a silver lining. Here's an excerpt:
Posted by Patrick Keller
in Culture & society, Sustainability, Territory
at
09:08
Defined tags for this entry: atmosphere, climate, culture & society, ecology, interferences, sustainability, systems, territory, thinkers, thinking
Saturday, July 15. 2017All 234 fabric | rblg updated tags! | #fabric|ch #Summer #farniente #reading
By fabric | ch -----
As we still lack a decent search engine on this blog and don't use a "tag cloud"... this post may help you navigate the updated content on | rblg (as of 07.2017) through all its tags!
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG: (shown just below if you're browsing the blog's html pages, or here for rss readers)
Posted by Patrick Keller
in fabric | ch
at
08:30
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, citizen, city, climate, clips, code, cognition, collaboration, commodification, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, experience, experimentation, fabric | ch, farming, fashion, fiction, films, food, form, franchised, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, kinetic, knowledge, landscape, language, law, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, monography, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, resources, responsive, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, 
services, sharing, shopping, signage, smart, social, society, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, textile, theory, thinkers, thinking, time, tools, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, voice, vr, war, weather, web, wireless, writing
Monday, February 20. 2017The Ulm Model: a school and its pursuit of a critical design practice | #design #teaching
Via It's Nice That ----- Words by Billie Muraben, photography by Connor Campbell
“My feeling is that the Bauhaus being conveniently located before the Second World War makes it safely historical,” says Dr. Peter Kapos. “Its objects have an antique character that is about as threatening as Arts and Crafts, whereas the problem with the Ulm School is that it’s too relevant. The questions raised about industrial design [still apply], and its project failed – its social project being particularly disappointing – which leaves awkward questions about where we are in the present.” Kapos discovered the Hochschule für Gestaltung Ulm, or Ulm School, through his research into the German manufacturing company Braun, the representation of which is a specialism of his archive, das programm. The industrial design school had developed out of a community college founded by educationalist Inge Scholl and graphic designer Otl Aicher in 1946. It was established, as Kapos writes in the book accompanying the Raven Row exhibition, The Ulm Model, “with the express purpose of curbing what nationalistic and militaristic tendencies still remained [in post-war Germany], and making a progressive contribution to the reconstruction of German social life.” The Ulm School closed in 1968, having undergone various forms of pedagogy and leadership, crises in structure and personality. Neither the faculty nor the student body found a resolution to the problems inherent to industrial design’s claim to social legitimacy – “how the designer could be thoroughly integrated within the production process at an operational level and at the same time adopt a critically reflective position on the social process of production.” But while the Ulm School and the Ulm Model collapsed, it remains an important resource: “it’s useful, even if the project can’t be restarted, because it was never going to succeed, the attempt is something worth recovering. Particularly today, under very difficult conditions.”
Foundation Course exercise
Foundation Course exercise
Foundation Course exercise
Max Bill, a graduate of the Bauhaus and then president of the Swiss Werkbund, arrived at Ulm in 1950, having been recruited partly in the hope that his international profile would attract badly needed funding. He tightened the previously broad curriculum, established by Marxist writer Hans Werner Richter, around design, mirroring the practices of his alma mater. Bill’s rectorship ran from 1955-58, during which “there was no tension between the way he designed and the requirements of the market”. The principle of the designer as artist, a popular notion of the Bauhaus, curbed the “alienating nature of industrial production”. Due perhaps in part to the trauma of WW2, people hadn’t been ready to allow technology into the home that declared itself as technology. “The result of that was record players and radios smuggled into the home, hidden in what looked like other pieces of furniture, with walnut veneers and golden tassels.” Bill’s way of thinking didn’t necessarily reflect the aesthetic, but it wasn’t at all challenging politically. “So in some ways that’s really straight-forward and unproblematic – and he’s a fantastic designer, an extraordinary architect, an amazing graphic designer, and a great artist – but he wasn’t radical enough. What he was trying to do with industrial design wasn’t taking up the challenge.”
Foundation Course exercise
In 1958 Bill stepped down having failed to “grasp the reality of industrial production simply at a technical and operational level… [or] recognise its emancipatory potential.” The industrial process had grown in complexity, and the prospect of rebuilding socially was too vast for single individuals to manage. It was no longer possible for the artist-designer to sit outside of the production process, because the new requirements were so complex. “You had to be absolutely within the process, and there had to be a team of disciplinary specialists — not only of material, but circulation and consumption, which was also partly sociological. It was a different way of thinking about form and its relation to product.” After Bill’s departure, Tomás Maldonado, an instructor at the school, “set out the implications for a design education adequate to the realities of professional practice.” Changes were made to the curriculum to reflect a critically reflective design practice, which he referred to as ‘scientific operationalism’, and subjects such as ‘the instruction of colour’ were dropped. Between 1960 and 1962, the Ulm Model was introduced: “a novel form of design pedagogy that combined formal, theoretical and practical instruction with work in so-called ‘Development Groups’ for industrial clients under the direction of lecturers.” And it was during this period that the issue of industrial design’s problematic relationship to industry came to a head.
In 1959, a year prior to the Ulm Model’s formal introduction, Herbert Lindinger, a student from a Development Group working with Braun, designed an audio system. A set of transistor equipment, it made no apologies for its technology, and looked like a piece of engineering. His audio system became the model for Braun’s 1960s audio programme, “but Lindinger didn’t receive any credit for it, and Braun’s most successful designs from the period derived from an implementation of his project. It’s sad for him but it’s also sad for Ulm design because this had been a collective project.” The history of the Braun audio programme was written as being defined by Dieter Rams, “a single individual — he’s an important designer, and a very good manager of people, he kept the language consistent — but Braun design of the 60s is not a manifestation of his genius, or his vision.” And the project became an indication of why the Ulm project would ultimately fail, “when recalling it, you end up with a singular genius expressing the marvel of their mind, rather than something that was actually a collective project to achieve something social.” An advantage of Bill’s teaching model had been the space outside of the industrial process, “which is the space that offers the possibility of criticality. Not that he exercised it. But by relinquishing that space, [the Ulm School] ended up so integrated in the process that they couldn’t criticise it.” They realised the contradiction between Ulm design and consumer capitalism, which had been developing along the same timeline. “Those at the school became dissatisfied with the idea of design furnishing market positions, constantly producing cycles of consumptive acts, and they struggled to resolve it.” The school’s project had been to make the world rational and complete, industrially-based and free. 
“Instead they were producing something prison-like, individuals were becoming increasingly separate from each other and unable to see over their horizon.” In the Ulm Journal, the school’s sporadic, tactically published magazine that covered happenings at, and the evolving thinking and pedagogical approach of Ulm, Marxist thinking had become an increasingly important reference. “It was key to their understanding the context they were acting in, and if that thinking had been developed it would have led to an interesting and different kind of design, which they never got round to filling in. But they created a space for it.”
Foundation Course exercise
Foundation Course exercise (detail)
“[A Marxian approach] would inevitably lead you out of design in some way. And the Ulm Model, the title of the Raven Row exhibition, is slightly ironic because it isn’t really a model for anything, and I think they understood that towards the end. They started to consider critical design as something that had to not resemble design in its recognised form. It would be nominally designed, the categories by which it was generally intelligible would need to be dismantled.” The school’s funding was equally problematic: while its independence from the state helped validate its social purpose, the private foundation that provided its income was funded by industry commissions and indirect government funding from the regional legislature. “Although they were only partially dependent on government money, they accrued so much debt that in the end they were entirely dependent on it. The school was becoming increasingly radical politically, and the more radical it became, the more its own relation to capitalism became problematic. Their industry commissions tied them to the market, the Ulm Model didn’t work out, and their numbers didn’t add up.” The Ulm School closed in 1968, when state funding was entirely withdrawn, and its functionalist ideals were in crisis. Abraham Moles, an instructor at the school, had previously asserted the inconsistency arising from the practice of functionalism under the conditions of ‘the affluent society’, “which for the sake of ever expanding production requires that needs remain unsatisfied.” And although he had encouraged the school to anticipate and respond to the problem, so as to be the “subject instead of the object of a crisis”, he hadn’t offered concrete ideas on how that might be achieved.
But correcting the course of capitalist infrastructure isn’t something the Ulm School could have been expected to achieve, “and although the project was ill-construed, it is productive as a resource for thinking about what a critical design practice could be in relation to capitalism.” What’s interesting about the Ulm Model today is their consideration of the purpose of education, and their questioning of whether it should merely reflect the current state of things – “preparing a workforce for essentially increasing the GDP; and establishing the efficiency of contributing sectors in a kind of diabolical utilitarianism.”
Ulm Journal of the Hochschule für Gestaltung
Foundation Course exercise (detail)
Foundation Course exercise
Foundation Course exercise (detail)
Foundation Course exercise
Foundation Course exercise (detail)
Related Links:
Wednesday, November 30. 2016"Bot Like Me" at Centre Culturel Suisse Paris | #conference #talk #music
Note: I'll be pleased to be in Paris next Friday and Saturday (02-03.12) at the Centre Culturel Suisse and in the company of an excellent line up (!Mediengruppe Bitnik, Nicolas Nova, Yves Citton, Tobias Revell & Nathalie Kane, Rybn, Joël Vacheron and many others) for the conference and event "Bot Like Me" curated by Sophie Lamparter and Luc Meier. I'll present with Nicolas Nova the almost final state of our joint research project Inhabiting & Interfacing the Cloud(s).
-----
From Friday 2 to Saturday 3 December 2016
Bot Like Me
Talks in English. On the occasion of the !MedienGruppe Bitnik exhibition, and with the complicity of the Zurich artist duo, Sophie Lamparter (associate director of swissnex San Francisco) and Luc Meier (head of content at EPFL ArtLab, Lausanne) have put together a two-day event for the CCS: lectures, round tables and concerts bringing together scientists, artists, writers, journalists and musicians to examine the troubled dynamics of the human-machine relationship. Conceived as a loosely configurable platform for exchange, these evenings will question our complex relations, at once familiar and uneasy, with the bots multiplying in our hyper-connected environments.
Friday 2 December / from 7:30pm
lecture, 7:30-9pm: Bot Like Me kick-off
with Rolf Pfeifer (AI Lab, University of Zurich / Osaka University), Carmen Weisskopf and Domagoj Smoljo (!Mediengruppe Bitnik). Moderation: Luc Meier and Sophie Lamparter
live musical performance, 9:30pm: Not Waving
Saturday 3 December / from 2:30pm
round tables
-2:30-4pm: Data Manifestos
with Hannes Grassegger (author of Das Kapital bin ich), Hannes Gassert (Open Knowledge Network) and the RYBN collective. Moderation: Sophie Lamparter and Luc Meier
-4:30-6pm: Cloud Labor, Petty Bot Jobs
with Nicolas Nova (HEAD-Genève, Near Future Laboratory), Yves Citton (Université de Grenoble) and Patrick Keller (ECAL, fabric | ch). Moderation: Marie Lechner
-6:30-8pm: Botocene & Algoghosts
with Tobias Revell and Natalie Kane (Haunted Machines), Gwenola Wagon and Jeff Guess (artists). Moderation: Joël Vacheron and Nicolas Nova
concert, 9pm: live performance by Low Jack and carte blanche to the Antinote label
Free admission except concerts (€12) / Reservations: online ticketing / 01 42 71 44 50 / reservation@ccsparis.com
----- Post note: following the conference, the Centre Culturel Suisse in Paris put video documentation of the full event on its YouTube channel. In particular, below is the part where Nicolas Nova and I talk together about our research project Inhabiting and Interfacing the Cloud(s).
Posted by Patrick Keller
in fabric | ch, Art, Culture & society, Interaction design
at
23:09
Defined tags for this entry: art, artificial reality, artists, conferences, culture & society, data, designers, fabric | ch, interaction design, networks, thinkers, thinking
Wednesday, October 19. 2016Le médium spirite ou la magie d’un corps hypermédiatique à l’ère de la modernité | #spirit #media #technology
Note: following the previous post, which mentioned the idea of spiritism in relation to personal data, or forgotten personal data, but also in relation to "beliefs" linked to contemporary technologies, here is an interesting symposium (Machines, magie, médias) and a post on France Culture. The linked talk, by researcher Mireille Berton (University of Lausanne, Department of Film History and Aesthetics), is in French.
Via France Culture -----
Cerisy: Machines, magie, médias (20 to 28 August 2016)
Magicians, from Robert-Houdin and Georges Méliès to Harry Houdini and Howard Thurston, followed by Abdul Alafrez, David Copperfield, Jim Steinmeyer, Marco Tempest and many others, have questioned the processes by which illusion is produced, keeping pace with innovations in optics, acoustics, electricity and, more recently, computing and digital technology. Yet any technology that plays with our senses, as long as it does not reveal all its secrets, as long as the techniques it conceals are not mastered, as long as it has not been taken up and formalised by a medium, remains at a stage that can be defined as a magical moment. Machines and magic indeed share secrecy, metamorphosis, the double, participation, mediation. This stance rests on the hypothesis advanced by Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic" (1984, p. 36). The very emergence of media can be analysed in terms of an incarnation of magical thinking, the "pattern-model" (Edgar Morin, 1956) of the primary form of individual understanding (Marcel Mauss, 1950). De facto, from the phantasmagorias of the eighteenth century to the most current digital arts, by way of theatre, the magic lantern, photography, the Théâtrophone, the phonograph, radio, television and cinema, the history of spectacular machinery intersects with that of magic and with the experiments of its practitioners, always on the lookout for any novelty allowing magical effects to be renewed through the mechanisation of performance. It is through the study of the techniques of illusion specific to each medium, whose recurring principles have been brought to light by intermedial studies and media archaeology, that the encounter with the magical art imposed itself. This symposium proposes to analyse their technological cycle: the magical moment (belief and wonder), the magical mode (rhetoric), and secularisation (the trivialisation of the magical dimension).
This cycle is analysed transversally in order to highlight its intermedial dimensions. The papers are grouped into seven sections: The magical art; Magic and aesthetics of astonishment; Magic, television and video; The marvels of science; Magic of the image, the image and magic; Magic of sound, sound and magic; From the tableau vivant to digital mimicry. The first brings historians and practitioners of magic into dialogue and presents a survey of the archives on the subject. The following six sections examine the correlations magic/media and media/magic.
Mireille Berton holds a doctorate in Letters and is a senior lecturer and researcher in the Department of Film History and Aesthetics at the University of Lausanne (UNIL). Her work focuses mainly on the relations between cinema and the sciences of the psyche (psychology, psychoanalysis, psychiatry, parapsychology), with a particular interest in an approach combining cultural history, media epistemology and gender studies. In addition to numerous studies, she has published a book based on her doctoral thesis, Le Corps nerveux des spectateurs. Cinéma et sciences du psychisme autour de 1900 (L'Âge d'Homme, 2015), and co-edited with Anne-Katrin Weber a collective volume devoted to the history of televisual apparatuses as seen through discourses, practices, objects and representations (La Télévision du Téléphonoscope à YouTube. Pour une archéologie de l'audiovision, Antipodes, 2009). She is currently working on a manuscript on representations of the spirit medium in contemporary films and television series (forthcoming from Georg in 2017).
Abstract of the talk: The presentation revisits a question often addressed in the history of science and occultism, namely the role played by instruments of measurement and capture in the apprehension of paranormal phenomena. An analysis of spiritualist sources published during the first decades of the twentieth century brings to light the tensions provoked by the optical and electrical devices that came to challenge the all-powerful body of the spirit medium on its own territory. The encounter between occultism and modernity then gave birth to the (discursive and fantasmatic) figure of the "hypermediatic" medium, one surpassing all the possibilities offered by scientific discoveries.
Related Links:
Posted by Patrick Keller
in Culture & society, Science & technology
at
11:14
Defined tags for this entry: artificial reality, culture & society, display, history, illusion, interface, perception, science & technology, thinking
Tuesday, October 04. 2016L’Anthropocène et l’esthétique du sublime | #stupéfaction #bourgeoisie
Note: I recently evoked this idea of the sublime during a workshop at ECAL with Random International as guests. The aim then was to intervene within a research project in which we sought to develop "counter-proposals" to the current expression of some of our contemporary infrastructures, "soft" and "hard": cloud computing and data centers in particular (the project in question, still ongoing, has its process documented on a blog: Inhabiting & Interfacing the Cloud(s)). It is a project conducted in collaboration with Nicolas Nova of HEAD - Genève. All of this developed around the sense of a technology that, by once again placing its users "at a distance", contributes to the development of "beliefs" (a "magical" dimension) and, in certain cases, to the resurgence of the feeling of the "sublime", this time tied no longer to "terrifying" natural powers but to technologies developed by humans. I had not made the connection with the very topical theme of the Anthropocene, which we had nevertheless already commented on and pointed to on this blog. That is now done, with much nuance, by Jean-Baptiste Fressoz, who does not fail to underline that "(...) this aesthetic operation, otherwise very successful, is not without its problems, for what is rendered sublime is obviously not humanity but, in fact, capitalism". ... One may also recall that as early as 1990, Michel Serres wrote in his book Le Contrat Naturel:
A text that we also quoted with fabric | ch in one of our first projects, Réalité Recombinée, in 1998.
Via Mouvements (via Nicolas Nova) ----- By Jean-Baptiste Fressoz
Olafur Eliasson at Tate Modern.
For Jean-Baptiste Fressoz, the force of the idea of the Anthropocene is not conceptual, scientific or heuristic: it is above all aesthetic. In this article, the author revisits, in order to point out its limits, the reactivated mechanisms of this quintessentially Western and bourgeois aesthetic [note: the sublime], vilified by various critical currents. He stresses that before fully embracing the Anthropocene, one should remember that the sublime is only one of the categories of aesthetics, which includes others (the tragic, the beautiful...) resting on other feelings (harmony, pain, love...), perhaps better able to nourish an aesthetics of care, of the small, of the local, which ecological action so badly needs.
However astounding, spectacular or grandiloquent it may be, the concept of the Anthropocene does not designate a scientific discovery [1]. It does not represent a major or recent advance in the earth-system sciences. The name given to a new geological epoch on the initiative of the chemist Paul Crutzen, the Anthropocene is a simple stratigraphic proposal still under debate within the community of geologists. Following the Holocene (12,000 years since the last glaciation), the Anthropocene is marked by the predominance of human beings over the earth system. Several starting dates and corresponding stratigraphic markers are currently being debated: 1610 (the low point of atmospheric CO2 caused by the disappearance of 90% of the Amerindian population), 1830 (CO2 levels leave the range of Holocene variability), and 1945 (the date of the first explosion of the atomic bomb). The force of the idea of the Anthropocene is not conceptual, scientific or heuristic: it is above all aesthetic. The concept of the Anthropocene is a brilliant way of renaming certain findings of the earth-system sciences. It underlines that the geochemical processes humanity has set in motion have such inertia that the earth is leaving the climatic equilibrium that prevailed during the Holocene. The Anthropocene designates a point of no return, a geological bifurcation in the history of planet Earth. If we do not know exactly what the Anthropocene holds for us (earth-system simulations are uncertain), we can no longer doubt that something of importance on the scale of geological time has recently taken place on Earth. What is interesting, but also deeply problematic for political ecology, about the concept of the Anthropocene is that it reactivates the mechanisms of the aesthetics of the sublime, that quintessentially Western and bourgeois aesthetic, vilified by Marxist, feminist and subalternist critics as well as by the postmoderns.
The discourse of the Anthropocene indeed corresponds rather faithfully to the canons of the sublime as defined by Edmund Burke in 1757. According to this conservative English philosopher, best known for his absolute rejection of 1789, the experience of the sublime is associated with sensations of astonishment and terror; the sublime rests on the feeling of our own insignificance before a distant, vast nature suddenly manifesting its omnipotence. Let us now listen to the scientists promoting the Anthropocene:
"Humankind, our own species, has become so large and so active that it now rivals some of the great forces of Nature in its impact on the functioning of the earth system [...]. Humankind has become a global geological force [2]."
The thesis of the Anthropocene rests first of all on the phenomenal quantities of matter mobilised and emitted by humanity over the course of the nineteenth and twentieth centuries. The aesthetics of the gigatonne of CO2 and of exponential growth refers back to what Burke had noted: "greatness of dimension is a powerful cause of the sublime [3]", and, he adds, the sublime requires "the solid, and even massive [4]". More precisely, the Anthropocene transfers the sublime from vast nature to "the human species". While playing on the sublime, it reverses its classical polarities: the sacred terror of nature is transferred to a humanity become geological colossus. Yet this aesthetic operation, otherwise very successful, is not without its problems, for what is rendered sublime is obviously not humanity but, in fact, capitalism. The Anthropocene is certainly not the doing of a "human species", of an undifferentiated "anthropos"; it is not even a matter of demography: between 1800 and 2000 the human population was multiplied by seven, energy consumption by 50 and capital, if we take Thomas Piketty's figures, by 134 [5]. What tipped the planet into the Anthropocene is above all a vast profit-oriented technostructure, a "second nature" made of roads, plantations, railways, mines, pipelines, drilling, power stations, futures markets, container ships, financial centres and banks, and many other things besides, which structure the flows of matter and energy on a global scale according to a structurally inegalitarian logic. In short, the change of geological regime is of course the work of "the age of capital [6]" far more than of "the age of the human being" with which the dominant narratives keep regaling us [7].
The first problem with the sublime of the Anthropocene is that it renames, aestheticises and above all naturalises capitalism, whose force is henceforth measured by the yardstick of the manifestations of first nature (volcanoes, plate tectonics or variations in planetary orbits), which two centuries of the aesthetics of the sublime had taught us to fear but also to revere. To the sublime of quantity, the Anthropocene adds the geological sublime of ages and aeons, from which it draws its most striking effects. The thesis of the Anthropocene tells us, in substance, that the traces of our industrial age will remain for millions of years in the geological archives of the planet. Opening a new epoch cut to the measure of the human being means that it is only on the scale of geological time that one can identify events acting on the planet with as much force as ourselves: the level of carbon dioxide in 2015 is unprecedented in three million years, the current rate of species extinction in 65 million years, the acidity of the oceans in 300 million years, and so on. What we are living through is not a mere "environmental crisis" but a geological revolution of human origin. Far from constituting an external, impassive and gigantic course, the time of the Earth has become commensurable with the time of human action. In two centuries at most, humanity has altered the dynamics of the earth system for eternity, or nearly so. "Everything that proceeds by transition excites no terror [8]", wrote Burke. The discourse of the Anthropocene cultivates this aesthetics of suddenness, of bifurcation and of the event.
The sublime of the Anthropocene resides precisely in this extraordinary encounter: two centuries of human activity, a minute duration, almost nil with regard to the history of the earth, will have sufficed to provoke an alteration comparable to the great upheaval of the end of the Mesozoic 65 million years ago. The third source of the Anthropocenic sublime is the sublime of the sovereign violence of nature, that of earthquakes, storms and hurricanes. The promoters of the Anthropocene readily mobilise the romantic sublime of ruins, vanished civilisations and collapses: "The drivers of the Anthropocene may well threaten the viability of contemporary civilisation and perhaps even the existence of homo sapiens [9]". The artistic and media success of the concept rests on the "painful enjoyment", on the "negative pleasure" of which Burke speaks:
"We take pleasure in seeing things which, far from causing them, we would sincerely wish to prevent... I do not think there exists anyone wicked enough to desire that [London] be overthrown by an earthquake... But suppose this fatal accident to have happened, what crowds would flock from everywhere to contemplate its ruins [10]."
William Kentridge
L’Anthropocène s’appuie sur une culture de l’effondrement propre aux nations occidentales, qui, depuis deux siècles, admirent leur puissance en fantasmant les ruines de leur futur. L’Anthropocène joue des mêmes ressorts psychologiques que le plaisir pervers des décombres déjà décrit par Burke et qui nourrit la vogue actuelle du tourisme des catastrophes de Tchernobyl à ground zero. La violence de l’Anthropocène est aussi celle de la science hautaine et froide qui nomme les époques et définit notre condition historique. Violence, tout d’abord, de son diagnostic irrévocable : « toi qui entre dans l’Anthropocène abandonne tout espoir » semblent nous dire les savant·e·s. Violence ensuite de la naturalisation, de la « mise en espèce » des sociétés humaines : les statistiques globales de consommation et d’émissions compactent les mille manières d’habiter la terre en quelques courbes, effaçant par la même l’immense variation des responsabilités entre les peuples et les classes sociales. Violence enfin du regard géologique tourné vers nous-mêmes, jaugeant toute l’histoire (empires, guerres, techniques, hégémonies, génocides, luttes, etc.) à l’aune des traces sédimentaires laissées dans la roche. Le géologue de l’Anthropocène est plus effroyable encore que l’ange de l’histoire de Walter Benjamin qui, là même où nous voyions auparavant progrès, ne voyait que catastrophe et désastre : lui n’y voit que fossiles et sédiments. Que le sublime soit l’esthétique cardinale de l’Anthropocène n’est absolument pas fortuit : sublime et géologie se sont épaulés tout au long de leur histoire. En 1674, Nicolas Boileau traduit en français le traité de Longinus sur le sublime (1er siècle après J.-C.) introduisant ainsi cette notion dans l’Europe lettrée. Mais c’est seulement au milieu du XVIIIe siècle, après que la passion des montagnes et l’intérêt pour la géologie se sont cristallisés dans les classes supérieures, que la « grande nature » devient un objet de sublime [11]. 
Setting out on their "grand tour" on the road to Italy, wealthy young English travellers encountered the Alpine chain, its vertiginous peaks, terrifying glaciers and immense panoramas. In the accounts of grand tours, the experience of dread before nature represented the price to pay for tasting the beauty of Italy's cultural treasures. The sublime here played a role of social distinction: being able to take pleasure in contemplating glaciers or barren rocks allowed English tourists to set themselves apart from the guides and mountain peasants, who saw in them only dangers and wasteland. But it was of course the Lisbon earthquake of 1755 that gave the real starting signal for reflections on the sublime: Burke, who published his treatise the following year, refers to the aesthetic passion for rubble and ruins that then seized the whole of Europe. That same year, Immanuel Kant also published a short work on the Lisbon earthquake and, in his later essay on the sublime, defined it as a "negative pleasure" that can proceed in two ways: the mathematical sublime, felt before the immensity of nature (the starry sky, the ocean, etc.), and the "dynamic sublime", produced by the violence of nature (tornadoes, volcanoes, earthquakes). The sublime of the Anthropocene, with its staging of a humanity become a telluric force, marks the historic meeting of the natural sublime of the eighteenth century with the technological sublime of the nineteenth and twentieth. With the industrialisation of the West, the power of second nature became the object of intense aesthetic celebration. The sublime, transferred to technology, played a central role in spreading the religion of progress: railway stations, factories and skyscrapers were its standing harangues [12].
From that era onward, the idea of a world pervaded by technology, of a fusion of first and second natures, was the object of reflection and praise. People marvelled at the engineering works that materialised the majestic union of the natural and human sublimes: viaducts spanning valleys, tunnels piercing mountains, canals linking oceans, and so on. The idea of a globe remodelled for human needs and fertilised by technology has been a classic trope of positivism since at least Saint-Simon, who wrote as early as 1820: "the object of industry is the exploitation of the globe, that is, the appropriation of its products to the needs of man, and since, in accomplishing this task, it modifies the globe, transforms it, gradually changes the conditions of its existence, it follows that through industry man participates, outside himself as it were, in the successive manifestations of the divinity, and thus continues the work of creation. From this point of view, Industry becomes worship [13]." More precisely, the Anthropocene belongs to a version of the technological sublime reconfigured by the Cold War. It extends the spatial vision of the planet produced by the American military-industrial system: a de-terrestrialised vision of the Earth seized from space as a system that could be understood in its entirety, a "spaceship earth" whose trajectory could be mastered thanks to the new earth-system sciences [14]. The risk is that the aesthetics of the Anthropocene feeds the hubris of brutal geoengineering rather than the patient, at once modest and ambitious, work of involution and social adaptation. As a reminder, geoengineering designates a set of techniques aimed at artificially modifying the reflective power of the Earth's atmosphere in order to counteract global warming.
This may consist, for example, of injecting sulphur dioxide into the upper atmosphere to reflect part of the solar radiation back into space. The failure of governments to reach a binding and ambitious international agreement has helped push geoengineering forward as a "plan B". These potentially very risky techniques could thus suddenly impose themselves in the event of a "climate emergency". For its promoters, the Anthropocene is a revelation, an awakening, a paradigm shift suddenly disorienting vulgar representations of the world.
"In the past, humankind has had to face profound challenges to its belief systems brought about by science. One of the most important examples is the theory of evolution… The concept of the Anthropocene might provoke a hostile reaction similar to the one Darwin produced [15]."
Here we find the Romantic trope of the scientist paying with his person in the struggle against a hostile crowd. By thus cutting itself off from the past and from common environmental decency, by rejecting as outdated the environmental knowledge that preceded it and the social struggles that knowledge nourished, the Anthropocene depoliticises the long history of the planet's destruction. Before, we were ignorant of the global consequences of human action; now we know and, of course, now we can act. The claim to the novelty of knowledge about the Earth is also a claim by scientists to act upon it. It is no accident that the inventor of the word Anthropocene, the Nobel laureate in chemistry Paul Crutzen, is also one of the advocates of geoengineering techniques. The unconscious Anthropocene born of the industrial revolution would at last give way to the "good Anthropocene" enlightened by earth-system science. Like every form of scientism, the aesthetics of the Anthropocene anaesthetises politics: the "experts", the authorities, will "do something". Experiences of the sublime must always be placed in their particular historical and political context. They refer to emotions dependent on the cultural, natural or technological conditions of each era, and it is those conditions that provide the keys to their political understanding. From the end of the eighteenth century to the end of the nineteenth, the sublime of a violent and abstract nature allowed the urban bourgeois classes to taste nature's violence while remaining relatively protected from its manifestations, and to relativise the very real dangers of a technological and urban way of life. The art of the sublime also fed the fantasy of an immense and inexhaustible nature at the very moment imperialism was exploiting its last corners.
In a culture that took the project of technical mastery of nature seriously, the aesthetics of the sublime also provided a slightly guilty pleasure. Finally, according to the Marxist critic Terry Eagleton, the sublime answered the aesthetic imperatives of nascent capitalism: against the emollient aesthetics of the beautiful, which risked turning the bourgeois subject into a decadent sensualist, the sublime re-energised the capitalist subject as exploiter or provider of work. At the end of the eighteenth century the beautiful became the harmonious, the unproductive, the soft and the feminine; the sublime became effort, danger, suffering, the elevated, the majestic and the masculine. At bottom, Eagleton tells us, the sublime contained the threat that beauty posed to productivity [16]. In the early 2000s, the sublime of the Anthropocene likewise serves an ideological function. As the intellectual classes convert to ecological concern, as they reject the modernist ideals of mastering nature as has-been, as they proclaim "the end of grand narratives", the end of progress, of class struggle, and so on, the Anthropocene provides the guilty thrill of a new sublime narrative. Against a background of agnosticism about the future, the Anthropocene seems to give a grandiose new horizon to the whole of humanity: collectively taking charge of the destiny of a planet. In the dull ideological context of political ecology, sustainable development and precaution, thinking the movement of a humanity become telluric force appears far more exciting than thinking the involution of an economic system. At bottom, the sublime of the Anthropocene replays almost exactly the final scene of Stanley Kubrick's masterpiece, 2001: A Space Odyssey: the star child contemplating the Earth perfectly figures the advent of a conscious geological agent, a reflexive planetary body.
And that is precisely why the Anthropocene makes theorists, philosophers and budding artists quiver: it seems to designate an interesting metaphysical event. For contemporary political ecology, however, the sublime aesthetics of the Anthropocene poses a problem. By staging the hybridisation of first and second natures, it re-energises the technological agency of the Cold Warriors (geoengineering); by disconnecting the individual and local scale from what supposedly really matters (humanity as telluric force, geological time), it produces stupefaction and cynicism (no future); finally, the Anthropocene, like any other sublime, is subject to the law of diminishing returns: once the audience is primed and conditioned, its effect dulls. In this sense, labelling a work of art "Anthropocene art" would be absolutely fatal to its aesthetic efficacy. The risk is that the ecology of the sublime would then be driven into permanent escalation, much like the race to the avant-garde in contemporary art. Before embracing the Anthropocene wholeheartedly, we should remember that the sublime is only one category of aesthetics, which includes many others (the tragic, the beautiful, the picturesque…) resting on other feelings (harmony, ataraxia, sadness, pain, love), which are perhaps better able to nourish the aesthetics of care, of the small, the local, the controlled, the old and the involuted that ecological action so badly needs.
[1] This article takes up, in modified form, a text previously published in the catalogue of the exhibition Sublime. Les tremblements du monde, Metz, Centre Pompidou-Metz, 2016. [2] W. Steffen, J. Grinevald, P. Crutzen, J. McNeill, "The Anthropocene: conceptual and historical perspectives", Philosophical Transactions of the Royal Society A, 369, 2011, p. 842–867. [3] E. Burke, Recherche philosophique sur l'origine de nos idées du sublime et du beau, Paris, Pichon, 1803 (1757), p. 129. [4] Ibid., p. 225. [5] T. Piketty, Le capital au XXIe siècle, Paris, Seuil, 2013. [6] E. Hobsbawm, The Age of Capital: 1848–1875, London, Weidenfeld & Nicolson, 1975. [7] See the chapter "Capitalocène" in the new edition of C. Bonneuil, J.-B. Fressoz, L'événement Anthropocène. La terre, l'histoire et nous, Paris, Seuil, 2016. [8] E. Burke, op. cit., p. 151. [9] W. Steffen et al., art. cit. [10] E. Burke, op. cit., p. 85. [11] M. Hope Nicolson, Mountain Gloom and Mountain Glory: The Development of the Aesthetics of the Infinite, Ithaca, Cornell University Press, 1959. [12] D. Nye, American Technological Sublime, Cambridge (MA), MIT Press, 1994. [13] Saint-Simon, Doctrine de Saint-Simon, t. 2, Paris, Aux Bureaux de l'Organisateur, 1830, p. 219. [14] C. Bonneuil, J.-B. Fressoz, op. cit.; S. Grevsmühl, La Terre vue d'en haut. L'invention de l'environnement global, Paris, Seuil, 2014. [15] W. Steffen et al., art. cit. [16] T. Eagleton, The Ideology of the Aesthetic, Oxford, Basil Blackwell, 1990.
Posted by Patrick Keller in Art, Culture & society, Science & technology, Territory at 09:11
Defined tags for this entry: art, artificial reality, atmosphere, climate, conditioning, culture & society, ecology, economy, engineering, environment, geography, science & technology, technology, territory, thinking