Monday, March 13. 2023
Atomized (re-)Staging short video documentation | #fabricch #digital #hybrid #exhibition
Note: a brief video documentation of one of fabric | ch's latest projects – Atomized (re-)Staging – that was exhibited at ZKM as part of Matter. Non-Matter. Anti-Matter. The exhibition was curated by Lívia Nolasco-Rózsás and Felix Koberstein and took place in the context of the European research project Beyond Matter.
-----
Posted by Patrick Keller
in fabric | ch, Architecture, Art, Culture & society, Interaction design
at 14:14
Defined tags for this entry: architecture, art, artificial reality, artists, automation, culture & society, curators, data, design (environments), digital, display, fabric | ch, history, hybrid, immaterial, interaction design, interferences, media, museum, real time, responsive, vr
Wednesday, November 30. 2022
Past exhibitions as Digital Experiences @ZKM, fabric | ch | #digital #exhibition #experimentation
Note: The exhibition related to the European research project Beyond Matter – Past Exhibitions as Digital Experiences – will open next week at ZKM, with the digital versions (or should I rather say "versioning"?) of two past and renowned exhibitions: Iconoclash, at ZKM in 2002 (with Bruno Latour among the multiple curators), and Les Immatériaux, at Beaubourg in 1985 (in this case with Jean-François Lyotard, not so long after the release of The Postmodern Condition). An unusual combination from two different times and perspectives. The title of the exhibition will be Matter. Non-Matter. Anti-Matter, with an amazing contemporary and historic lineup of works and artists, as well as documentation material from both past shows. Working with digitized variants of iconic artworks from these past exhibitions (digitization work under the supervision of Matthias Heckel), fabric | ch has been invited by Lívia Nolasco-Rózsás, ZKM curator and head of the research project, to present its own digital take, in the form of a combination of these two historic shows, using the digital models produced by their research team and made available.
The result, a new fabric | ch project entitled Atomized (re-)Staging, will be presented at the ZKM in Karlsruhe from this Saturday on (03.12.2022 - 23.04.2023).
Via ZKM -----
Opening: Matter. Non-Matter. Anti-Matter.
© ZKM | Center for Art and Media Karlsruhe, Visual: AKU Collective / Mirjam Reili
Past exhibitions as Digital Experiences
Fri, December 02, 2022, 7 pm CET, Opening --- Free entry
---
When past exhibitions come to life digitally, the past becomes a virtual experience. What this novel experience can look like in concrete terms is shown by the exhibition »Matter. Non-Matter. Anti-Matter«. As part of the project »Beyond Matter. Cultural Heritage on the Verge of Virtual Reality«, the ZKM | Karlsruhe and the Centre Pompidou, Paris, use the case studies »Les Immatériaux« (Centre Pompidou, 1985) and »Iconoclash« (ZKM | Karlsruhe, 2002) to investigate the possibility of reviving exhibitions through experiential methods of digital and spatial modeling. The digital model as an interactive presentation of exhibition concepts is a novel approach to exploring exhibition history, curatorial methods, and representation and mediation. The goal is not to create »digital twins«, that is, virtual copies of past assemblages of artifacts and their surrounding architecture, but to provide an independent sensory experience. On view will be digital models of past exhibitions, artworks and artifacts from those exhibitions, and accompanying contemporary commentary integrated via augmented reality.
The exhibition will be accompanied by numerous events, such as specialist workshops, webinars, online and offline guided tours, and a conference on virtualizing exhibition histories.
---
Program
7 – 7:30 p.m., Media Theater, Moderation: Lívia Nolasco-Rózsás, curator
7:30 – 8:15 p.m., Media Theater
8:15 – 8:30 p.m., Improvisation on the piano
8:30 p.m.
---
The exhibition will be open from 8 to 10 p.m. The mint Café is also looking forward to your visit.
---
Matter. Non-Matter. Anti-Matter.
Past Exhibitions as Digital Experiences
Sat, December 03, 2022 – Sun, April 23, 2023
The EU project »Beyond Matter: Cultural Heritage on the Verge of Virtual Reality« researches ways to reexperience past exhibitions using digital and spatial modeling methods. The exhibition »Matter. Non-Matter. Anti-Matter.« presents the current state of the research project at ZKM | Karlsruhe. At the core of the event is the digital revival of the iconic exhibitions »Les Immatériaux« of the Centre Pompidou Paris in 1985 and »Iconoclash: Beyond the Image Wars in Science, Religion, and Art« of the ZKM | Karlsruhe in 2002.
Based on the case studies of »Les Immatériaux« (Centre Pompidou, 1985) and »Iconoclash: Beyond the Image Wars in Science, Religion, and Art« (ZKM, 2002), ZKM | Karlsruhe and the Centre Pompidou Paris investigate possibilities of reviving exhibitions through experiential methods of digital and spatial modeling. Central to this is also the question of the particular materiality of the digital.
At the heart of the Paris exhibition »Les Immatériaux« in the mid-1980s was the question of what impact new technologies and materials could have on artistic practice. When philosopher Jean-François Lyotard joined as cocurator, the project's focus eventually shifted to exploring the changes in the postmodern world that were driven by a flood of new technologies. The exhibition »Iconoclash« at ZKM | Karlsruhe focused on the theme of representation and its multiple forms of expression, as well as the social turbulence it generates. As emphasized by curators Bruno Latour and Peter Weibel, the exhibition was not intended to be iconoclastic in its approach, but rather to present a synopsis of scholarly exhibits, documents, and artworks about iconoclasms – a thought experiment that took the form of an exhibition – a so-called »thought exhibition.«
»Matter. Non-Matter. Anti-Matter.« now presents in the 21st century the digital models of the two projects on the Immaterial Display, hardware that has been specially developed for exploring virtual exhibitions. On view are artworks and artifacts from the past exhibitions, as well as contemporary reflections and artworks created or expanded for this exhibition. These include works by Jeremy Bailey, damjanski, fabric|ch, Geraldine Juárez, Carolyn Kirschner, and Anne Le Troter that echo the 3D models of the two landmark exhibitions. They bear witness to the current digitization trend in the production, collection, and presentation of art. Case studies and examples of the application of digital curatorial reconstruction techniques that were created as part of the Beyond Matter project complement the presentation.
The exhibition »Matter. Non-Matter. Anti-Matter.« is accompanied by an extensive program of events: A webinar series aimed at museum professionals and cultural practitioners will present examples of work in digital or hybrid museums; two workshops, coorganized with Andreas Broeckmann from Leuphana University Lüneburg, will focus on interdisciplinary curating and methods for researching historical exhibitions; workshops on »Performance-Oriented Design Methods for Audience Studies and Exhibition Evaluation« (PORe) will be held by Lily Díaz-Kommonen and Cvijeta Miljak from Aalto University.
After the exhibition ends at the ZKM, a new edition of »Matter. Non-Matter. Anti-Matter.« will be on view at the Centre Pompidou in Paris from May to July 2023.
Posted by Patrick Keller
in fabric | ch, Architecture, Art, Culture & society, Interaction design, Science & technology
at 13:49
Defined tags for this entry: 3d, architects, architecture, art, artificial reality, artists, culture & society, digital, exhibitions, experimentation, fabric | ch, history, immaterial, interaction design, material, real time, science & technology, tele-
Tuesday, September 21. 2021
"Essais climatiques" by P. Rahm | #2ndaugmentation #essay #AR #VR
Note: it is with great pleasure and interest that I recently read one of Philippe Rahm's latest publications, "Essais climatiques" (published in French by Editions B2), which is in fact a collection of articles published over the past 10+ years in various magazines, journals and exhibition catalogs. It is certainly less developed than the even more recent book, "L'histoire naturelle de l'architecture" (ed. Pavillon de l'Arsenal, 2020), but nonetheless an excellent and brief introduction to his thinking and work. Philippe Rahm's call for the "return" of an "objective architecture", climatic and free of narrative issues, is of great interest, especially at a time when we need to reduce our CO2 emissions and will need to reach demanding objectives of energy sobriety. The historical reading of the postmodern era (in architecture), in relation to oil, vaccines and antibiotics, is also really valuable in this context, when we are all looking to move beyond this moment in cultural history. I also had the good surprise, and joy, of seeing the text "L'âge de la deuxième augmentation" finally published! It was written by Philippe back in 2009, probably, about the works of fabric | ch at the time, when we were preparing a publication that in the end never came out... This text will also be part of a monographic publication that is expected to be finalized and self-published in 2022. ----- By Patrick Keller
Posted by Patrick Keller
in fabric | ch, Architecture, Culture & society, Sustainability, Territory
at 08:03
Defined tags for this entry: architecture, artificial reality, books, computing, conditioning, culture & society, digital, fabric | ch, infrastructure, interferences, sustainability, territory, thinking
Friday, July 23. 2021
"Universal Machine": historical graphs on the relations and fluxes between art, architecture, design, and technology (19.. - 20..) | #art&sciences #history #graphs
Note: As part of my teaching at ECAL / University of Art and Design Lausanne (HES-SO), I've had the opportunity to dig into the history of the relationship between art and science (an ongoing process, especially regarding a material history of the same period). Or rather the links between creative processes (in art, architecture and design) and the information sciences (the computer especially, or the "Universal Machine" as formulated by A. Turing, a more evocative name, hence the title of the graphs below and of this post). I also had the occasion, through my practice at fabric | ch, and before that as an assistant at EPFL and then as a professor at ECAL, to experience first-hand some of these massive transformations in society and culture.
Thus, for my theory courses, I've sought to assemble "maps" of sorts that help me understand, visualize and explain the fluxes and timelines of interactions between people, artifacts and disciplines. These maps are by no means perfect... nor do they pretend to be. They remain a bit hazy (by intention, as well as by constraints of size) and could be indefinitely "unfolded" and completed, according to various interests and points of view beyond mine. I edit them regularly, as a matter of fact. However, in the absence of a good written, visual and/or sensitive history of these techno-cultural phenomena taken as a whole, these maps remain a good approximation tool to apprehend the flows and exchanges that unite or divide them, to start building a personal knowledge about them and eventually dig deeper...
This is the main reason why, despite their obvious fuzziness - or perhaps because of it - I share them on this blog (fabric | rblg), in an informal way. It's so that other artists/designers/researchers/teachers/students/... can start building on them, show different fluxes, develop what came before and after or, more interestingly, branch out from them (if so, I'd be interested in sharing new developments on this site; feel free to contact me to do so, or for suggestions and comments as well, btw).
...
It's also worth noting that the maps are structured horizontally on a linear timeline (late 18th century towards mid 21st, mainly the industrial period), and vertically, approximately, around disciplines (the bottom relates to engineering, the middle to art and design, and the top to humanities, social events or movements). This linear timeline could certainly be questioned: to paraphrase the writer B. Latour, what about a spiral timeline, for instance? One that would still show a past and a future, but also historical proximities between topics, connecting in its circular developments past centuries and topics with our contemporaneity? But for now, and while acknowledging its limitations, I stick to the linear simplicity... Countless narratives can then be built as emergent properties of the graphs (and I emphasize, not as their origin).
...
The choice of topics (code, scores-instructions, countercultural, network related, interaction) is for now related to the matters of my teaching, but is likely to expand. Possibly toward an underlying layer that would show the material conditions that supported the whole process and also made it possible.
In any case, this could be a good starting point for some summer readings (or a new "Where's Wally?" kind of game...)!
Via fabric | ch ----- By Patrick Keller
Rem.: By clicking on the thumbnails below you'll get access to HD versions.
"Universal Machine", main map (late 18th to mid 21st centuries):
Flows in the map > "Code":
Flows in the map > "Scores, Partitions, ...":
Flows in the map > "Countercultural, Subcultural, ...":
Flows in the map > "Network Related":
Flows in the map > "Interaction":
...
To be continued (& completed) ...
Posted by Patrick Keller
in fabric | ch, Architecture, Art, Culture & society, Design, Interaction design, Science & technology
at 09:44
Defined tags for this entry: architecture, art, culture & society, data, design, engineering, fabric | ch, history, interaction design, science & technology, theory, thinking, tools, visualization
Friday, August 28. 2020
"Data Materialization", N. Kane (V&A) in discussion with P. Keller (ECAL, fabric | ch) | #data #research
Note: the discussion about "Data Materialization" between Natalie D. Kane (V&A Museum, London) and Patrick Keller (fabric | ch, ECAL / University of Art and Design Lausanne (HES-SO)), on the occasion of the ECAL Research Day, has been published on the dedicated website, along with other interesting talks. ----- Via ECAL
Research Day 2019 Natalie D. Kane from ECAL on Vimeo.
Posted by Patrick Keller
in fabric | ch, Culture & society, Interaction design
at 14:57
Defined tags for this entry: conferences, culture & society, curators, data, digital life, fabric | ch, interaction design, museum, schools, tangible
Friday, February 01. 2019
The Yoda of Silicon Valley | #NewYorkTimes #Algorithms
Note: Yet another dive into the history of computer programming and algorithms, used for visual outputs ...
Via The New York Times (on Medium) ----- By Siobhan Roberts
Donald Knuth at his home in Stanford, Calif. He is a notorious perfectionist and has offered to pay a reward to anyone who finds a mistake in any of his books. Photo: Brian Flaherty
For half a century, the Stanford computer scientist Donald Knuth, who bears a slight resemblance to Yoda — albeit standing 6-foot-4 and wearing glasses — has reigned as the spirit-guide of the algorithmic realm. He is the author of “The Art of Computer Programming,” a continuing four-volume opus that is his life’s work. The first volume debuted in 1968, and the collected volumes (sold as a boxed set for about $250) were included by American Scientist in 2013 on its list of books that shaped the last century of science — alongside a special edition of “The Autobiography of Charles Darwin,” Tom Wolfe’s “The Right Stuff,” Rachel Carson’s “Silent Spring” and monographs by Albert Einstein, John von Neumann and Richard Feynman.
With more than one million copies in print, “The Art of Computer Programming” is the Bible of its field. “Like an actual bible, it is long and comprehensive; no other book is as comprehensive,” said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: “You should definitely send me a résumé if you can read the whole thing.” The volume opens with an excerpt from “McCall’s Cookbook”: Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.
Inside are algorithms, the recipes that feed the digital age — although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field’s most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text — for instance, when you hit Command+F to search for a keyword in a document.
Now 80, Dr. Knuth usually dresses like the youthful geek he was when he embarked on this odyssey: long-sleeved T-shirt under a short-sleeved T-shirt, with jeans, at least at this time of year. In those early days, he worked close to the machine, writing “in the raw,” tinkering with the zeros and ones. “Knuth made it clear that the system could actually be understood all the way down to the machine code level,” said Dr. Norvig. Nowadays, of course, with algorithms masterminding (and undermining) our very existence, the average programmer no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries. But an elite class of engineers occasionally still does the deep dive.
“Here at Google, sometimes we just throw stuff together,” Dr. Norvig said, during a meeting of the Google Trips team, in Mountain View, Calif. “But other times, if you’re serving billions of users, it’s important to do that efficiently. A 10-per-cent improvement in efficiency can work out to billions of dollars, and in order to get that last level of efficiency, you have to understand what’s going on all the way down.”
Dr. Knuth at the California Institute of Technology, where he received his Ph.D. in 1963. Photo: Jill Knuth
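Rem.: since the Knuth-Morris-Pratt algorithm is mentioned above, here is a minimal sketch of the idea in Python (my own illustration, not the formulation from Knuth's book): a precomputed failure table lets the scan fall back on a mismatch instead of re-reading the text, so the whole search stays proportional to the combined length of text and pattern.

```python
def kmp_search(text, pattern):
    """Return the start indices of every occurrence of `pattern` in `text`."""
    if not pattern:
        return []
    # Failure table: fail[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text once; on a mismatch, fall back via the table
    # instead of re-scanning characters already read.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)   # start index of this occurrence
            k = fail[k - 1]
    return matches

print(kmp_search("abracadabra", "abra"))  # [0, 7]
```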
Or, as Andrei Broder, a distinguished scientist at Google and one of Dr. Knuth’s former graduate students, explained during the meeting: “We want to have some theoretical basis for what we’re doing. We don’t want a frivolous or sloppy or second-rate algorithm. We don’t want some other algorithmist to say, ‘You guys are morons.’” The Google Trips app, created in 2016, is an “orienteering algorithm” that maps out a day’s worth of recommended touristy activities. The team was working on “maximizing the quality of the worst day” — for instance, avoiding sending the user back to the same neighborhood to see different sites. They drew inspiration from a 300-year-old algorithm by the Swiss mathematician Leonhard Euler, who wanted to map a route through the Prussian city of Königsberg that would cross each of its seven bridges only once. Dr. Knuth addresses Euler’s classic problem in the first volume of his treatise. (He once applied Euler’s method in coding a computer-controlled sewing machine.) Following Dr. Knuth’s doctrine helps to ward off moronry. He is known for introducing the notion of “literate programming,” emphasizing the importance of writing code that is readable by humans as well as computers — a notion that nowadays seems almost twee. Dr. Knuth has gone so far as to argue that some computer programs are, like Elizabeth Bishop’s poems and Philip Roth’s “American Pastoral,” works of literature worthy of a Pulitzer. He is also a notorious perfectionist. Randall Munroe, the xkcd cartoonist and author of “Thing Explainer,” first learned about Dr. Knuth from computer-science people who mentioned the reward money Dr. Knuth pays to anyone who finds a mistake in any of his books. As Mr. Munroe recalled, “People talked about getting one of those checks as if it was computer science’s Nobel Prize.” Dr. Knuth’s exacting standards, literary and otherwise, may explain why his life’s work is nowhere near done. He has a wager with Sergey Brin, the co-founder of Google and a former student (to use the term loosely), over whether Mr. Brin will finish his Ph.D. before Dr. Knuth concludes his opus.
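Rem.: Euler's bridge problem, mentioned above, fits in a few lines of Python (a small illustration of mine; the A/B/C/D labels for the four land masses are my own). Euler's argument reduces to counting parities: a walk that crosses every bridge exactly once can exist in a connected graph only if zero or two land masses touch an odd number of bridges, and in Königsberg all four do, so no such route exists.

```python
from collections import Counter

# The seven bridges of Königsberg, as edges between the four land masses:
# A = the central island, B and C = the two river banks, D = the eastern spit.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [land for land, d in degree.items() if d % 2 == 1]
print(dict(degree))        # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd) in (0, 2))  # False -> no walk can cross every bridge exactly once
```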
The dawn of the algorithm
At age 19, Dr. Knuth published his first technical paper, “The Potrzebie System of Weights and Measures,” in Mad magazine. He became a computer scientist before the discipline existed, studying mathematics at what is now Case Western Reserve University in Cleveland. He looked at sample programs for the school’s IBM 650 mainframe, a decimal computer, and, noticing some inadequacies, rewrote the software as well as the textbook used in class. As a side project, he ran stats for the basketball team, writing a computer program that helped them win their league — and earned a segment by Walter Cronkite called “The Electronic Coach.”
During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, “optimization” is truly an art, and this is articulated in another Knuthian proverb: “Premature optimization is the root of all evil.” Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the “analysis of algorithms.” A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers — a book about algorithms.
Left: Dr. Knuth in 1981, looking at the 1957 Mad magazine issue that contained his first technical article. He was 19 when it was published. Photo: Jill Knuth. Right: “The Art of Computer Programming,” volumes 1–4. “Send me a résumé if you can read the whole thing,” Bill Gates wrote in a blurb. Photo: Brian Flaherty
“By the time of the Renaissance, the origin of this word was in doubt,” it began. “And early linguists attempted to guess at its derivation by making combinations like algiros [painful] + arithmos [number].’” In fact, Dr. Knuth continued, the namesake is the 9th-century Persian textbook author Abū ‘Abd Allāh Muhammad ibn Mūsā al-Khwārizmī, Latinized as Algorithmi. Never one for half measures, Dr. Knuth went on a pilgrimage in 1979 to al-Khwārizmī’s ancestral homeland in Uzbekistan. When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installation, “Volume 4, Fascicle 5,” covering, among other things, “backtracking” and “dancing links,” was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present. In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor’s defining characteristic even in the early 1980s. Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth’s greatest contribution to the world, and the greatest contribution to typography since Gutenberg. This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, “When I told my girlfriend that we can’t do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, ‘This is something that is so stupid it must be true.’” When Knuth chooses to be physically present, however, he is 100-per-cent there in the moment. “It just makes you happy to be around him,” said Jennifer Chayes, a managing director of Microsoft Research. “He’s a maximum in the community. If you had an optimization function that was in some way a combination of warmth and depth, Don would be it.”
Dr. Knuth discussing typefaces with Hermann Zapf, the type designer. Many consider Dr. Knuth’s work on the TeX computer typesetting system to be the greatest contribution to typography since Gutenberg. Photo: Bettmann/Getty Images
Sunday with the algorithmist
Dr. Knuth lives in Stanford, and allowed for a Sunday visitor. That he spared an entire day was exceptional — usually his availability is “modulo nap time,” a sacred daily ritual from 1 p.m. to 4 p.m. He started early, at Palo Alto’s First Lutheran Church, where he delivered a Sunday school lesson to a standing-room-only crowd. Driving home, he got philosophical about mathematics. “I’ll never know everything,” he said. “My life would be a lot worse if there was nothing I knew the answers about, and if there was nothing I didn’t know the answers about.” Then he offered a tour of his “California modern” house, which he and his wife, Jill, built in 1970. His office is littered with piles of U.S.B. sticks and adorned with Valentine’s Day heart art from Jill, a graphic designer. Most impressive is the music room, built around his custom-made, 812-pipe pipe organ.
The day ended over beer at a puzzle party. Puzzles and games — and penning a novella about surreal numbers, and composing a 90-minute multimedia musical pipe-dream, “Fantasia Apocalyptica” — are the sorts of things that really tickle him. One section of his book is titled, “Puzzles Versus the Real World.” He emailed an excerpt to the father-son team of Martin Demaine, an artist, and Erik Demaine, a computer scientist, both at the Massachusetts Institute of Technology, because Dr. Knuth had included their “algorithmic puzzle fonts.” “I was thrilled,” said Erik Demaine. “It’s an honor to be in the book.” He mentioned another Knuth quotation, which serves as the inspirational motto for the biannual “FUN with Algorithms” conference: “Pleasure has probably been the main goal all along.” But then, Dr. Demaine said, the field went and got practical. Engineers and scientists and artists are teaming up to solve real-world problems — protein folding, robotics, airbags — using the Demaines’s mathematical origami designs for how to fold paper and linkages into different shapes.
Of course, all the algorithmic rigmarole is also causing real-world problems. Algorithms written by humans — tackling harder and harder problems, but producing code embedded with bugs and biases — are troubling enough. More worrisome, perhaps, are the algorithms that are not written by humans, algorithms written by the machine, as it learns. Programmers still train the machine, and, crucially, feed it data. (Data is the new domain of biases and bugs, and here the bugs and biases are harder to find and fix). However, as Kevin Slavin, a research affiliate at M.I.T.’s Media Lab said, “We are now writing algorithms we cannot read. That makes this a unique moment in history, in that we are subject to ideas and actions and efforts by a set of physics that have human origins without human comprehension.” As Slavin has often noted, “It’s a bright future, if you’re an algorithm.”
Dr. Knuth at his desk at home in 1999. Photo: Jill Knuth
A few notes. Photo: Brian Flaherty
All the more so if you’re an algorithm versed in Knuth. “Today, programmers use stuff that Knuth, and others, have done as components of their algorithms, and then they combine that together with all the other stuff they need,” said Google’s Dr. Norvig. “With A.I., we have the same thing. It’s just that the combining-together part will be done automatically, based on the data, rather than based on a programmer’s work. You want A.I. to be able to combine components to get a good answer based on the data. But you have to decide what those components are. It could happen that each component is a page or chapter out of Knuth, because that’s the best possible way to do some task.” Lucky, then, Dr. Knuth keeps at it. He figures it will take another 25 years to finish “The Art of Computer Programming,” although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? “Definitely not,” said Dr. Knuth. “I am worried that algorithms are getting too prominent in the world,” he added. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”
Posted by Patrick Keller
in Culture & society, Interaction design, Science & technology
at 08:55
Defined tags for this entry: code, computing, culture & society, digital, history, interaction design, science & technology, scientists
Monday, December 17. 2018
In 1968, computers got personal | #demo #newdemo?
Note: we went a bit historical recently on | rblg, digging into the history of computing in relation to design/architecture/art. And the following one is certainly far from being the least known... The history of computing, or rather of personal computing. Yet the article by Margaret O'Mara brings new insights about Engelbart's "mother of all demos", asking in particular for a new "demo". It is also interesting to consider how some topics that we'd believe are very contemporary were in fact already popping up pretty early in the history of the media: questions about the status of the machine in relation to us, humans, or about the collection of data, etc. If you go back to the early texts by Wiener and Turing (the popularization texts... for my part), you might see that the questions we still have were already present.
Via The Conversation -----
On a crisp California afternoon in early December 1968, a square-jawed, mild-mannered Stanford researcher named Douglas Engelbart took the stage at San Francisco’s Civic Auditorium and proceeded to blow everyone’s mind about what computers could do. Sitting down at a keyboard, this computer-age Clark Kent calmly showed a rapt audience of computer engineers how the devices they built could be utterly different kinds of machines – ones that were “alive for you all day,” as he put it, immediately responsive to your input, and which didn’t require users to know programming languages in order to operate.
The prototype computer mouse Doug Engelbart used in his demo. Michael Hicks, CC BY
Engelbart typed simple commands. He edited a grocery list. As he worked, he skipped the computer cursor across the screen using a strange wooden box that fit snugly under his palm. With small wheels underneath and a cord dangling from its rear, Engelbart dubbed it a “mouse.” The 90-minute presentation went down in Silicon Valley history as the “mother of all demos,” for it previewed a world of personal and online computing utterly different from 1968’s status quo. It wasn’t just the technology that was revelatory; it was the notion that a computer could be something a non-specialist individual user could control from their own desk.
The first part of the "mother of all demos."
Shrinking the massive machines
In the America of 1968, computers weren’t at all personal. They were refrigerator-sized behemoths that hummed and blinked, calculating everything from consumer habits to missile trajectories, cloistered deep within corporate offices, government agencies and university labs. Their secrets were accessible only via punch card and teletype terminals. The Vietnam-era counterculture already had made mainframe computers into ominous symbols of a soul-crushing Establishment. Four years before, the student protesters of Berkeley’s Free Speech Movement had pinned signs to their chests that bore a riff on the prim warning that appeared on every IBM punch card: “I am a UC student. Please don’t bend, fold, spindle or mutilate me.”
Earlier in 1968, Stanley Kubrick’s trippy “2001: A Space Odyssey” mined moviegoers’ anxieties about computers run amok with the tale of a malevolent mainframe that seized control of a spaceship from its human astronauts. Voices rang out on Capitol Hill about the uses and abuses of electronic data-gathering, too. Missouri Senator Ed Long regularly delivered floor speeches he called “Big Brother updates.” North Carolina Senator Sam Ervin declared that mainframe power posed a threat to the freedoms guaranteed by the Constitution. “The computer,” Ervin warned darkly, “never forgets.” As the Johnson administration unveiled plans to centralize government data in a single, centralized national database, New Jersey Congressman Cornelius Gallagher declared that it was just another grim step toward scientific thinking taking over modern life, “leaving as an end result a stack of computer cards where once were human beings.” The zeitgeist of 1968 helps explain why Engelbart’s demo so quickly became a touchstone and inspiration for a new, enduring definition of technological empowerment. Here was a computer that didn’t override human intelligence or stomp out individuality, but instead could, as Engelbart put it, “augment human intellect.” While Engelbart’s vision of how these tools might be used was rather conventionally corporate – a computer on every office desk and a mouse in every worker’s palm – his overarching notion of an individualized computer environment hit exactly the right note for the anti-Establishment technologists coming of age in 1968, who wanted to make technology personal and information free.
The second part of the "mother of all demos."
Over the next decade, technologists from this new generation would turn what Engelbart called his “wild dream” into a mass-market reality – and profoundly transform Americans’ relationship to computer technology.
Government involvement
In the decade after the demo, the crisis of Watergate and revelations of CIA and FBI snooping further seeded distrust in America’s political leadership and in the ability of large government bureaucracies to be responsible stewards of personal information. Economic uncertainty and an antiwar mood slashed public spending on high-tech research and development – the same money that once had paid for so many of those mainframe computers and for training engineers to program them. Enabled by the miniaturizing technology of the microprocessor, the size and price of computers plummeted, turning them into affordable and soon indispensable tools for work and play. By the 1980s and 1990s, instead of being seen as machines made and controlled by government, computers had become ultimate expressions of free-market capitalism, hailed by business and political leaders alike as examples of what was possible when government got out of the way and let innovation bloom.
There lies the great irony in this pivotal turn in American high-tech history. For even though “the mother of all demos” provided inspiration for a personal, entrepreneurial, government-is-dangerous-and-small-is-beautiful computing era, Doug Engelbart’s audacious vision would never have made it to keyboard and mouse without government research funding in the first place. Engelbart was keenly aware of this, flashing credits up on the screen at the presentation’s start listing those who funded his research team: the Defense Department’s Advanced Research Projects Agency, later known as DARPA; the National Aeronautics and Space Administration; the U.S. Air Force. Only the public sector had the deep pockets, the patience and the tolerance for blue-sky ideas without any immediate commercial application.
Although government funding played a less visible role in the high-tech story after 1968, it continued to function as critical seed capital for next-generation ideas. Marc Andreessen and his fellow graduate students developed their groundbreaking web browser in a government-funded university laboratory. DARPA and NASA money helped fund the graduate research project that Sergey Brin and Larry Page would later commercialize as Google. Driverless car technology got a jump-start after a government-sponsored competition; so has nanotechnology, green tech and more. Government hasn’t gotten out of Silicon Valley’s way; it remained there all along, quietly funding the next generation of boundary-pushing technology.
The third part of the "mother of all demos."
Today, public debate rages once again on Capitol Hill about computer-aided invasions of privacy. Hollywood spins apocalyptic tales of technology run amok. Americans spend days staring into screens, tracked by the smartphones in our pockets, hooked on social media. Technology companies are among the biggest and richest in the world. It’s a long way from Engelbart’s humble grocery list. But perhaps the current moment of high-tech angst can once again gain inspiration from the mother of all demos.
Later in life, Engelbart described his life’s work as a quest to “help humanity cope better with complexity and urgency.” His solution was a computer that was remarkably different from the others of that era, one that was humane and personal, that augmented human capability rather than boxing it in. And he was able to bring this vision to life because government agencies funded his work.
Now it’s time for another mind-blowing demo of the possible future, one that moves beyond the current adversarial moment between big government and Big Tech. It could inspire people to enlist public and private resources and minds in crafting the next audacious vision for our digital future.
Friday, July 13. 2018
Thinking Machines at MOMA | #computing #art&design&architecture #history #avantgarde
Note: following the exhibition Thinking Machines: Art and Design in the Computer Age, 1959–1989, which ran until last April at MoMA, images of the show appeared on the museum's website, with many references to projects. After Archaeology of the Digital at CCA in Montreal between 2013 and 2017, this is another good contribution to the history of the field and to the intricate relations between art, design, architecture and computing. How cultural fields contributed to the shaping of this "mass stacked media" that is now built upon combinations of computing machines, networks, interfaces, services, data, data centers, people, crowds, etc. is certainly largely underestimated. Literature starts to emerge, but it will take time to uncover what remained "under the radar" for a very long period. These works acted in fact as some sort of "avant-garde", not well recognized or identified enough, even by specialized institutions, and at a time when the name "avant-garde" had almost become a "s-word"... or was considered "dead". Unfortunately, no publication seems to have accompanied the exhibition, contrary to the one at CCA, which is accompanied by two well documented books.
Via MOMA ----- Thinking Machines: Art and Design in the Computer Age, 1959–1989
November 13, 2017–
Drawn primarily from MoMA's collection, Thinking Machines: Art and Design in the Computer Age, 1959–1989 brings artworks produced using computers and computational thinking together with notable examples of computer and component design. The exhibition reveals how artists, architects, and designers operating at the vanguard of art and technology deployed computing as a means to reconsider artistic production. The artists featured in Thinking Machines exploited the potential of emerging technologies by inventing systems wholesale or by partnering with institutions and corporations that provided access to cutting-edge machines. They channeled the promise of computing into kinetic sculpture, plotter drawing, computer animation, and video installation. Photographers and architects likewise recognized these technologies' capacity to reconfigure human communities and the built environment. Thinking Machines includes works by John Cage and Lejaren Hiller, Waldemar Cordeiro, Charles Csuri, Richard Hamilton, Alison Knowles, Beryl Korot, Vera Molnár, Cedric Price, and Stan VanDerBeek, alongside computers designed by Tamiko Thiel and others at Thinking Machines Corporation, IBM, Olivetti, and Apple Computer. The exhibition combines artworks, design objects, and architectural proposals to trace how computers transformed aesthetics and hierarchies, revealing how these thinking machines reshaped art making, working life, and social connections. Organized by Sean Anderson, Associate Curator, Department of Architecture and Design, and Giampaolo Bianconi, Curatorial Assistant, Department of Media and Performance Art. - More images HERE.
Posted by Patrick Keller
in Architecture, Culture & society, Design, Interaction design, Science & technology
at 09:19
Defined tags for this entry: archeology, architects, architecture, art, artists, computing, culture & society, design, design (interactions), designers, history, interaction design, mediated, research, science & technology
Friday, June 22. 2018
The empty brain | #neurosciences #nometaphor
Via Aeon ----- Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
Img. by Jan Stepnov (Twenty20).
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’. Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer. To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections. A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced. Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving. But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever. We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not. Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word. 
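Rem.: the "dog" example above can be checked directly, for instance in Python (a small illustration of mine, assuming plain ASCII/UTF-8 encoding): the word really is three bytes, one 8-bit pattern per letter.

```python
word = "dog"
data = word.encode("ascii")                 # one byte per letter
print(list(data))                           # [100, 111, 103]
print([format(b, "08b") for b in data])     # ['01100100', '01101111', '01100111']
# Three 8-bit patterns side by side: one stands for d, one for o, one for g.
```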
Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’. Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers? In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence. In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least. The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while. By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
"The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain" Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics. This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain. Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013), exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure. The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question. But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge. Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them. 
The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors. Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly. If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost? In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences. Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):
And here is the drawing she subsequently made with a dollar bill present:
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times. What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing? Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
"The idea that memories are stored in individual neurons is preposterous: how and where is the memory stored in the cell?" A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers. The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell? So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent. The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before. Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music. From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor. As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways. We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded. 
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.
A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
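As a playful aside from the editor of this reblog: the simplicity of such perception-driven strategies is easy to demonstrate in a few lines of code. The toy sketch below is not McBeath's actual three-dimensional linear-optical-trajectory model; it uses the closely related two-dimensional ‘optical acceleration cancellation’ heuristic, and every parameter, value and name in it is illustrative only.

```python
# Toy 2D sketch (not McBeath's model): a "fielder" intercepts a fly ball
# without ever predicting where it will land. The fielder tracks a single
# optical variable -- the tangent of the gaze elevation angle to the ball --
# and simply runs so that this quantity keeps changing at a constant rate
# (zero "optical acceleration"). All numbers below are illustrative.

DT = 0.01            # simulation time step (s)
G = 9.81             # gravity (m/s^2)
EYE_HEIGHT = 1.8     # fielder's eye height (m)
GAIN = 20.0          # responsiveness of the fielder, hand-tuned for this toy
MAX_SPEED = 9.0      # cap on running speed (m/s)

# Ball launched from home plate (x = 0) toward the outfield.
ball_x, ball_z = 0.0, 1.0
ball_vx, ball_vz = 20.0, 20.0

# Fielder starts 90 m out, standing still; the ball will land short of them.
fielder_x, fielder_v = 90.0, 0.0

def tan_elevation(bx, bz, fx):
    """Tangent of the gaze angle from the fielder's eyes to the ball."""
    return (bz - EYE_HEIGHT) / max(abs(bx - fx), 1e-6)

prev_tan = tan_elevation(ball_x, ball_z, fielder_x)
prev_rate = None
t = 0.0

# Run until the ball comes back down to roughly glove height.
while ball_vz > 0.0 or ball_z > 2.0:
    # Projectile physics for the ball (the fielder never uses these equations).
    ball_x += ball_vx * DT
    ball_vz -= G * DT
    ball_z += ball_vz * DT

    # The fielder's whole "computation": keep d/dt tan(angle) constant.
    cur_tan = tan_elevation(ball_x, ball_z, fielder_x)
    rate = (cur_tan - prev_tan) / DT
    if prev_rate is not None and abs(ball_x - fielder_x) > 1.5:
        optical_accel = (rate - prev_rate) / DT
        # tan(angle) decelerating -> ball falls short -> run in (toward x = 0);
        # accelerating -> ball carries over -> back up. One signed correction.
        fielder_v += GAIN * optical_accel * DT
        fielder_v = max(-MAX_SPEED, min(MAX_SPEED, fielder_v))
    fielder_x += fielder_v * DT
    prev_tan, prev_rate = cur_tan, rate
    t += DT

print(f"t = {t:.1f} s: ball at x = {ball_x:.1f} m, "
      f"fielder at x = {fielder_x:.1f} m "
      f"(gap = {abs(ball_x - fielder_x):.1f} m)")
```

The design point of the sketch is that the control loop never solves the projectile equations or builds an internal model of the trajectory; it closes a feedback loop on one perceived quantity, which is the spirit of the account given in the paragraph above.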
"We will never have to worry about a human mind going amok in cyberspace, and we will never achieve immortality through downloading." Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor. One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity. Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing. Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences. This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above). This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain. 
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.
Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.
Posted by Patrick Keller
in Culture & society, Science & technology
at
14:32
Defined tags for this entry: cognition, computing, culture & society, intelligence, neurosciences, research, science & technology, thinking
Wednesday, June 06. 2018What Happened When Stephen Hawking Threw a Cocktail Party for Time Travelers (2009) | #time #dimensions
Note: speaking about time, not in time, out of time, etc., and as a late tribute to Stephen Hawking, here is a mischievous experiment of his regarding the possibilities of time travel. To be seen on Open Culture.
Via Open Culture -----
Who among us has never fantasized about traveling through time? But then, who among us hasn't traveled through time? Every single one of us is a time traveler, technically speaking, moving as we do through one second per second, one hour per hour, one day per day. Though I never personally heard the late Stephen Hawking point out that fact, I feel almost certain that he did, especially in light of one particular piece of scientific performance art he pulled off in 2009: throwing a cocktail party for time travelers — the proper kind, who come from the future.
"Hawking’s party was actually an experiment on the possibility of time travel," writes Atlas Obscura's Anne Ewbank. "Along with many physicists, Hawking had mused about whether going forward and back in time was possible. And what time traveler could resist sipping champagne with Stephen Hawking himself?" " By publishing the party invitation in his mini-series Into the Universe With Stephen Hawking, Hawking hoped to lure futuristic time travelers. You are cordially invited to a reception for Time Travellers, the invitation read, along with the the date, time, and coordinates for the event. The theory, Hawking explained, was that only someone from the future would be able to attend." Alas, no time travelers turned up. Since someone possessed of that technology at any point in the future would theoretically be able to attend, does Hawking's lonely party, which you can see in the clip above, prove that time travel will never become possible? Maybe — or maybe the potential time-travelers of the future know something about the space-time-continuum-threatening risks of the practice that we don't. As for Dr. Hawking, I have to imagine that he came away satisfied from the shindig, even though his hoped-for Ms. Universe from the future never walked through the door. “I like simple experiments… and champagne,” he said, and this champagne-laden simple experiment will continue to remind the rest of us to enjoy our time on Earth, wherever in that time we may find ourselves. - Related Content: The Lighter Side of Stephen Hawking: The Physicist Cracks Jokes and a Smile with John Oliver Professor Ronald Mallett Wants to Build a Time Machine in this Century … and He’s Not Kidding
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
Posted by Patrick Keller
in Culture & society, Science & technology
at
15:54
Defined tags for this entry: culture & society, dimensions, experience, experimentation, opensource, science & technology, time
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research. We curate and reblog articles, researches, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as archive, references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.