As we continue to lack a decent search engine on this blog and don't use a "tag cloud"... this post could help you navigate through the updated content on | rblg (as of 09.2023), via all its tags!
FIND BELOW ALL THE TAGS THAT CAN BE USED TO NAVIGATE IN THE CONTENTS OF | RBLG BLOG:
(to be seen just below if you're navigating on the blog's html pages or here for rss readers)
--
Note that we had to hit the "pause" button on our reblogging activities a while ago (mainly because we ran out of time, but also because we received complaints from a major image stock company about some images that were displayed on | rblg, an activity that we felt was still "fair use" - we've never made any money or advertised on this site).
Nevertheless, we continue to publish from time to time information on the activities of fabric | ch, or content directly related to its work (documentation).
It is a rare but much-appreciated recognition of our work by the region where we've been working all those years (and, on the same occasion, also one to show our faces)! We're still waiting for an invitation to exhibit fabric | ch's work somewhere in our hometown though 😉
So rejoice, and let's celebrate together during the following drinks reception!
Donald Knuth at his home in Stanford, Calif. He is a notorious perfectionist and has offered to pay a reward to anyone who finds a mistake in any of his books. Photo: Brian Flaherty
For half a century, the Stanford computer scientist Donald Knuth, who bears a slight resemblance to Yoda — albeit standing 6-foot-4 and wearing glasses — has reigned as the spirit-guide of the algorithmic realm.
He is the author of “The Art of Computer Programming,” a continuing four-volume opus that is his life’s work. The first volume debuted in 1968, and the collected volumes (sold as a boxed set for about $250) were included by American Scientist in 2013 on its list of books that shaped the last century of science — alongside a special edition of “The Autobiography of Charles Darwin,” Tom Wolfe’s “The Right Stuff,” Rachel Carson’s “Silent Spring” and monographs by Albert Einstein, John von Neumann and Richard Feynman.
With more than one million copies in print, “The Art of Computer Programming” is the Bible of its field. “Like an actual bible, it is long and comprehensive; no other book is as comprehensive,” said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: “You should definitely send me a résumé if you can read the whole thing.”
The volume opens with an excerpt from “McCall’s Cookbook”:
Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.
Inside are algorithms, the recipes that feed the digital age — although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field’s most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text — for instance, when you hit Command+F to search for a keyword in a document.
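For readers curious about what makes the algorithm special, the core idea can be sketched in a few lines of Python (a minimal illustration, not Dr. Knuth’s own code): precompute, for each prefix of the pattern, the length of the longest proper prefix that is also a suffix, so that on a mismatch the search falls back within the pattern instead of rewinding the text.

```python
def kmp_search(text, pattern):
    """Return the start indices of all occurrences of pattern in text."""
    if not pattern:
        return []
    # Failure table: fail[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text once; on a mismatch, fall back via the table
    # instead of moving the text pointer backward.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("abracadabra", "abra"))  # [0, 7]
```

Because the text pointer only ever moves forward, the whole search runs in time linear in the combined length of text and pattern.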
Now 80, Dr. Knuth usually dresses like the youthful geek he was when he embarked on this odyssey: long-sleeved T-shirt under a short-sleeved T-shirt, with jeans, at least at this time of year. In those early days, he worked close to the machine, writing “in the raw,” tinkering with the zeros and ones.
“Knuth made it clear that the system could actually be understood all the way down to the machine code level,” said Dr. Norvig. Nowadays, of course, with algorithms masterminding (and undermining) our very existence, the average programmer no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries. But an elite class of engineers occasionally still does the deep dive.
“Here at Google, sometimes we just throw stuff together,” Dr. Norvig said, during a meeting of the Google Trips team, in Mountain View, Calif. “But other times, if you’re serving billions of users, it’s important to do that efficiently. A 10-per-cent improvement in efficiency can work out to billions of dollars, and in order to get that last level of efficiency, you have to understand what’s going on all the way down.”
Dr. Knuth at the California Institute of Technology, where he received his Ph.D. in 1963. Photo: Jill Knuth
Or, as Andrei Broder, a distinguished scientist at Google and one of Dr. Knuth’s former graduate students, explained during the meeting: “We want to have some theoretical basis for what we’re doing. We don’t want a frivolous or sloppy or second-rate algorithm. We don’t want some other algorithmist to say, ‘You guys are morons.’”
The Google Trips app, created in 2016, is an “orienteering algorithm” that maps out a day’s worth of recommended touristy activities. The team was working on “maximizing the quality of the worst day” — for instance, avoiding sending the user back to the same neighborhood to see different sites. They drew inspiration from a 300-year-old algorithm by the Swiss mathematician Leonhard Euler, who wanted to map a route through the Prussian city of Königsberg that would cross each of its seven bridges only once. Dr. Knuth addresses Euler’s classic problem in the first volume of his treatise. (He once applied Euler’s method in coding a computer-controlled sewing machine.)
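For the curious, Euler’s 1736 answer reduces to a degree count: a connected multigraph admits a walk crossing every edge exactly once if and only if it has zero or two vertices of odd degree. A minimal Python sketch of that check (the bridge list is the historical Königsberg layout; the land-mass labels A–D are my own convention, not from the article):

```python
from collections import Counter

def eulerian_walk_possible(edges):
    """Degree test for an Eulerian walk (Euler, 1736): a connected
    multigraph has a walk using every edge exactly once iff it has
    zero or two odd-degree vertices. Connectivity is assumed."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# Königsberg's seven bridges between its four land masses:
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(eulerian_walk_possible(bridges))  # False
```

All four land masses have odd degree, which is exactly why Euler's walk is impossible in Königsberg.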
Following Dr. Knuth’s doctrine helps to ward off moronry. He is known for introducing the notion of “literate programming,” emphasizing the importance of writing code that is readable by humans as well as computers — a notion that nowadays seems almost twee. Dr. Knuth has gone so far as to argue that some computer programs are, like Elizabeth Bishop’s poems and Philip Roth’s “American Pastoral,” works of literature worthy of a Pulitzer.
He is also a notorious perfectionist. Randall Munroe, the xkcd cartoonist and author of “Thing Explainer,” first learned about Dr. Knuth from computer-science people who mentioned the reward money Dr. Knuth pays to anyone who finds a mistake in any of his books. As Mr. Munroe recalled, “People talked about getting one of those checks as if it was computer science’s Nobel Prize.”
Dr. Knuth’s exacting standards, literary and otherwise, may explain why his life’s work is nowhere near done. He has a wager with Sergey Brin, the co-founder of Google and a former student (to use the term loosely), over whether Mr. Brin will finish his Ph.D. before Dr. Knuth concludes his opus.
The dawn of the algorithm
At age 19, Dr. Knuth published his first technical paper, “The Potrzebie System of Weights and Measures,” in Mad magazine. He became a computer scientist before the discipline existed, studying mathematics at what is now Case Western Reserve University in Cleveland. He looked at sample programs for the school’s IBM 650 mainframe, a decimal computer, and, noticing some inadequacies, rewrote the software as well as the textbook used in class. As a side project, he ran stats for the basketball team, writing a computer program that helped them win their league — and earned a segment by Walter Cronkite called “The Electronic Coach.”
During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, “optimization” is truly an art, and this is articulated in another Knuthian proverb: “Premature optimization is the root of all evil.”
Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the “analysis of algorithms.” A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers — a book about algorithms.
Left: Dr. Knuth in 1981, looking at the 1957 Mad magazine issue that contained his first technical article. He was 19 when it was published. Photo: Jill Knuth. Right: “The Art of Computer Programming,” volumes 1–4. “Send me a résumé if you can read the whole thing,” Bill Gates wrote in a blurb. Photo: Brian Flaherty
“By the time of the Renaissance, the origin of this word was in doubt,” it began. “And early linguists attempted to guess at its derivation by making combinations like algiros [painful] + arithmos [number].” In fact, Dr. Knuth continued, the namesake is the 9th-century Persian textbook author Abū ‘Abd Allāh Muhammad ibn Mūsā al-Khwārizmī, Latinized as Algorithmi. Never one for half measures, Dr. Knuth went on a pilgrimage in 1979 to al-Khwārizmī’s ancestral homeland in Uzbekistan.
When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installment, “Volume 4, Fascicle 5,” covering, among other things, “backtracking” and “dancing links,” was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present.
In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor’s defining characteristic even in the early 1980s.
Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth’s greatest contribution to the world, and the greatest contribution to typography since Gutenberg.
This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, “When I told my girlfriend that we can’t do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, ‘This is something that is so stupid it must be true.’”
When Knuth chooses to be physically present, however, he is 100-per-cent there in the moment. “It just makes you happy to be around him,” said Jennifer Chayes, a managing director of Microsoft Research. “He’s a maximum in the community. If you had an optimization function that was in some way a combination of warmth and depth, Don would be it.”
Dr. Knuth discussing typefaces with Hermann Zapf, the type designer. Many consider Dr. Knuth’s work on the TeX computer typesetting system to be the greatest contribution to typography since Gutenberg. Photo: Bettmann/Getty Images
Sunday with the algorithmist
Dr. Knuth lives in Stanford, and allowed for a Sunday visitor. That he spared an entire day was exceptional — usually his availability is “modulo nap time,” a sacred daily ritual from 1 p.m. to 4 p.m. He started early, at Palo Alto’s First Lutheran Church, where he delivered a Sunday school lesson to a standing-room-only crowd. Driving home, he got philosophical about mathematics.
“I’ll never know everything,” he said. “My life would be a lot worse if there was nothing I knew the answers about, and if there was nothing I didn’t know the answers about.” Then he offered a tour of his “California modern” house, which he and his wife, Jill, built in 1970. His office is littered with piles of U.S.B. sticks and adorned with Valentine’s Day heart art from Jill, a graphic designer. Most impressive is the music room, built around his custom-made, 812-pipe pipe organ. The day ended over beer at a puzzle party.
Puzzles and games — and penning a novella about surreal numbers, and composing a 90-minute multimedia musical pipe-dream, “Fantasia Apocalyptica” — are the sorts of things that really tickle him. One section of his book is titled, “Puzzles Versus the Real World.” He emailed an excerpt to the father-son team of Martin Demaine, an artist, and Erik Demaine, a computer scientist, both at the Massachusetts Institute of Technology, because Dr. Knuth had included their “algorithmic puzzle fonts.”
“I was thrilled,” said Erik Demaine. “It’s an honor to be in the book.” He mentioned another Knuth quotation, which serves as the inspirational motto for the biannual “FUN with Algorithms” conference: “Pleasure has probably been the main goal all along.”
But then, Dr. Demaine said, the field went and got practical. Engineers and scientists and artists are teaming up to solve real-world problems — protein folding, robotics, airbags — using the Demaines’ mathematical origami designs for how to fold paper and linkages into different shapes.
Of course, all the algorithmic rigmarole is also causing real-world problems. Algorithms written by humans — tackling harder and harder problems, but producing code embedded with bugs and biases — are troubling enough. More worrisome, perhaps, are the algorithms that are not written by humans, algorithms written by the machine, as it learns.
Programmers still train the machine, and, crucially, feed it data. (Data is the new domain of biases and bugs, and here the bugs and biases are harder to find and fix). However, as Kevin Slavin, a research affiliate at M.I.T.’s Media Lab said, “We are now writing algorithms we cannot read. That makes this a unique moment in history, in that we are subject to ideas and actions and efforts by a set of physics that have human origins without human comprehension.” As Slavin has often noted, “It’s a bright future, if you’re an algorithm.”
Dr. Knuth at his desk at home in 1999. Photo: Jill Knuth
A few notes. Photo: Brian Flaherty
All the more so if you’re an algorithm versed in Knuth. “Today, programmers use stuff that Knuth, and others, have done as components of their algorithms, and then they combine that together with all the other stuff they need,” said Google’s Dr. Norvig.
“With A.I., we have the same thing. It’s just that the combining-together part will be done automatically, based on the data, rather than based on a programmer’s work. You want A.I. to be able to combine components to get a good answer based on the data. But you have to decide what those components are. It could happen that each component is a page or chapter out of Knuth, because that’s the best possible way to do some task.”
Lucky, then, that Dr. Knuth keeps at it. He figures it will take another 25 years to finish “The Art of Computer Programming,” although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? “Definitely not,” said Dr. Knuth.
“I am worried that algorithms are getting too prominent in the world,” he added. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”
Listen to the discussion online HERE (YouTube, 1h02).
...
Vlatko Vedral on Decoding Reality -- The Universe as Quantum Information. What is the nature of reality? Why is there something rather than nothing? These are the deepest questions that human beings have asked, that thinkers East and West have pondered over millennia. For a physicist, all the world is information. The Universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the blueprints for our basic forms to future generations using a digital code called DNA.
Decoding Reality asks some of the deepest questions about the Universe and considers the implications of interpreting it in terms of information. It explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. It describes the bizarre effects of quantum behaviour such as 'entanglement', which Einstein called 'spooky action at a distance' and explores cutting edge work on harnessing quantum effects in hyperfast quantum computers, and how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach up into our reality.
The book concludes by considering the answer to the ultimate question: where did all of the information in the Universe come from? The answers considered are exhilarating and challenge our concept of the nature of matter, of time, of free will, and of reality itself.
Note: the title and beginning of the article are very promising, or teasing so to say... But unfortunately it is not freely accessible without a subscription to New Scientist. Yet as it promises an interesting read, I archive it on | rblg for the record and future readings.
In the meantime, here's also an interesting interview (2010) with physicist Vlatko Vedral for The Guardian, from the time he published his book Decoding Reality, about information.
And an extract from the article on the New Scientist:
I’m building a machine that breaks the rules of reality
We thought only fools messed with the cast-iron laws of thermodynamics – but quantum trickery is rewriting the rulebook, says physicist Vlatko Vedral.
Martin Leon Barreto
By Vlatko Vedral
A FEW years ago, I had an idea that may sound a little crazy: I thought I could see a way to build an engine that works harder than the laws of physics allow.
You would be within your rights to baulk at this proposition. After all, the efficiency of engines is governed by thermodynamics, the most solid pillar of physics. This is one set of natural laws you don’t mess with.
Yet if I leave my office at the University of Oxford and stroll down the corridor, I can now see an engine that pays no heed to these laws. It is a machine of considerable power and intricacy, with green lasers and ions instead of oil and pistons. There is a long road ahead, but I believe contraptions like this one will shape the future of technology.
Better, more efficient computers would be just the start. The engine is also a harbinger of a new era in science. To build it, we have had to uncover a field called quantum thermodynamics, one set to retune our ideas about why life, the universe – everything, in fact – are the way they are.
Thermodynamics is the theory that describes the interplay between temperature, heat, energy and work. As such, it touches on pretty much everything, from your brain to your muscles, car engines to kitchen blenders, stars to quasars. It provides a base from which we can work out what sorts of things do and don’t happen in the universe. If you eat a burger, you must burn off the calories – or …
The visual optics plates were produced by the scientist Thomas Young at the time he was studying light (the wave theory of light). It took another 150 years for them to truly reach the art world...
My question would be: what kind of "plates" are getting drawn today? (and this drives us to Leonardo, to art-sciences programs of different sorts, etc.)
"(...). Nevertheless, in the early-19th century Young put forth a number of theoretical reasons supporting the wave theory of light, and he developed two enduring demonstrations to support this viewpoint.
In particular, in the context of this research project, it served as a source of critical inspiration for a workshop we were preparing to lead with students at that time (critical because "magic" in the context of technology means what it means: being tricked and not understanding, therefore believing or being "stupefied").
For documentation purposes, I reblog this post on | rblg as well, as it brings different ideas about the "sublime" related to data or data centers, to creation and to contemporary technology in general.
It may be a bit hard to follow without the initial context (a brief by the invited guests, Random International, and the general objectives of the project), but this context can be accessed from within the post -below- for those interested in digging deeper.
...
As a matter of fact, this whole topic also makes me think of the film The Prestige by Christopher Nolan, in which the figure of Nikola Tesla (played by "The Man Who Fell to Earth" himself, a.k.a. David Bowie) is depicted as a character very close to a magician, his inventions with electricity sitting at the margin between science and magic.
Following the publication of Dev Joshi’s brief on the I&IC documentary blog yesterday (note: 10.11.2015), I took the opportunity today to briefly introduce it to the interaction design students who will be involved in the workshop next week. In particular, I focused on some points of the brief that were important but possibly quite new concepts for them. I also extended some implicit ideas with images that could obviously bring ideas about devices to build to access some past data, or “shadows” as Dev names them.
What comes out in a very interesting way for our research in Dev’s brief is the idea that the data footprints each of us leaves online on a daily basis (while using all types of digital services) could be considered as past entities of ourselves, or trapped, forgotten, hidden, … (online) fragments of our personalities… waiting to be contacted again.
How many different versions of you are there in the cloud? If they could speak, what would they say?
Yet, interestingly, if the term “digital footprint” is generally used in English to depict this situation (the data traces each of us leaves behind), in French we rather use the term “ombre numérique” (literally “digital shadow”). That’s why we’ve decided with Dev that it was preferable to use this term as the title for the workshop (The Everlasting Shadows): it is somehow a more vivid expression that could bring quite direct ideas when it comes to thinking about designing “devices” to “contact” these “digital entities” or make them visible again in some ways.
Philippe Ramette, “L’ombre de celui que j’étais / Shadow of my former self “, 2007. Light installation, mixed media.
By extension, we could also start to speak about “digital ghosts”, as this expression is also commonly used (not to mention the “corps sans organes” of G. Deleuze/F. Guattari and previously A. Artaud). Many “ghosts”/facets of ourselves? All trapped online in the form of zombie data?
Your digital ghosts are trapped on islands around the cloud – is there a way to rescue them? Maybe they just need a shelter to live in now that you have moved on?
… or a haunted house?
And this again is a revealing parallel, because it opens the whole conceptual idea to beliefs… (about ghosts? about personal traces and shadows? about clouds? and finally, about technology? …)
What about, then, working with inspirations from the domain of spiritualism and its rich iconography, and producing “devices” to communicate with your dead past data entities?
Fritz Lang. “Dr. Mabuse, the Gambler”, movie, 1922.
Or even start to think about some kind of “wearables”, and then become a new type of fraudulent technological data psychic?
Fraud medium Colin Evans in levitation, 13 June 1938 (source Wikipedia).
We could even dig deeper into these “beliefs” and start looking at old illustrations and engravings that depict relations to “things that we don’t understand”, that are “beyond our understanding”… and that possibly show “tools” or strange machinery to observe or communicate with these “unknown things” (while trying to understand them)?
This last illustration could also drive us, by extension and via a very straight shortcut, to the idea of the Sublime (in art, but also in philosophy), especially the romantic works of the painters of that period (late 18th and early 19th centuries, among them W. Turner, C. D. Friedrich, E. Delacroix, T. Cole, etc.)
Overwhelmed by the presentiment of a nature that dominated humans in every dimension, and that at the time remained mostly unexplained and mysterious, if not dangerous and feared, some painters took on this feeling, named “sublime” after Edmund Burke’s Philosophical Enquiry (1757), and started painting dramatic scenes of humans facing the forces of nature.
Thomas Cole, “The Voyage of Life: Old Age”, 1842. National Gallery of Art, Washington DC.
It is not by chance, of course, that I end my “esoteric comments about the brief” post with this idea of the Sublime. This is because the concept has recently found a new life in regard to technology and its central yet “unexplained, mysterious, if not dangerous and feared” role in our contemporary society. The term was extended on this occasion to become the “Technological Sublime”, thus implicitly comparing the once dominant and “beyond our understanding” Nature to our contemporary technology.
“American Technological Sublime” by D. E. Nye, published in 1994 (MIT Press), was certainly one of the first books to join the two terms. It continues the exploration of the social construction of technology initiated in his previous book, “Electrifying America” (MIT Press, 1990). More recently, in 2011, the idea popped up again on the blog of Next Nature, in an article simply entitled The Technological Sublime.
So, to complete my post with a last question: is the Cloud, which everybody uses but nobody seems to understand, a technologically sublime artifact? Wouldn’t it be ironic that an infrastructure whose aim is to be absolutely rational and functional ultimately contributes to creating a completely opposite feeling?
For decades, biologists spurned emotion and feeling as uninteresting. But Antonio Damasio demonstrated that they were central to the life-regulating processes of almost all living creatures.
Damasio’s essential insight is that feelings are “mental experiences of body states,” which arise as the brain interprets emotions, themselves physical states arising from the body’s responses to external stimuli. (The order of such events is: I am threatened, experience fear, and feel horror.) He has suggested that consciousness, whether the primitive “core consciousness” of animals or the “extended” self-conception of humans, requiring autobiographical memory, emerges from emotions and feelings.
His insight, dating back to the early 1990s, stemmed from the clinical study of brain lesions in patients unable to make good decisions because their emotions were impaired, but whose reason was otherwise unaffected—research made possible by the neuroanatomical studies of his wife and frequent coauthor, Hanna Damasio. Their work has always depended on advances in technology. More recently, tools such as functional neuroimaging, which measures the relationship between mental processes and activity in parts of the brain, have complemented the Damasios’ use of neuroanatomy.
A professor of neuroscience at the University of Southern California, Damasio has written four artful books that explain his research to a broader audience and relate its discoveries to the abiding concerns of philosophy. He believes that neurobiological research has a distinctly philosophical purpose: “The scientist’s voice need not be the mere record of life as it is,” he wrote in a book on Descartes. “If only we want it, deeper knowledge of brain and mind will help achieve … happiness.”
Antonio Damasio talked with Jason Pontin, the editor in chief of MIT Technology Review.
When you were a young scientist in the late 1970s, emotion was not thought a proper field of inquiry.
We were told very often, “Well, you’re going to be lost, because there’s absolutely nothing there of consequence.” We were pitied for our poor choice.
How so?
William James had tackled emotion richly and intelligently. But his ideas [mainly that emotions are the brain’s mapping of body states, ideas that Damasio revived and experimentally verified] had led to huge controversies in the beginning of the 20th century that ended nowhere. Somehow researchers had the sense that emotion would not, in the end, be sufficiently distinctive—because animals had emotions, too. But what animals don’t have, researchers told themselves, is language like we do, nor reason or creativity—so let’s study that, they thought. And in fact, it’s true that most creatures on the face of the earth do have something that could be called emotion, and something that could be called feeling. But that doesn’t mean we humans don’t use emotions and feelings in particular ways.
Because we have a conscious sense of self?
Yes. What’s distinctive about humans is that we make use of fundamental processes of life regulation that include things like emotion and feeling, but we connect them with intellectual processes in such a way that we create a whole new world around us.
What made you so interested in emotions as an area of study?
There was something that appealed to me because of my interest in literature and music. It was a way of combining what was important to me with what I thought was going to be important scientifically.
What have you learned?
There are certain action programs that are obviously permanently installed in our organs and in our brains so that we can survive, flourish, procreate, and, eventually, die. This is the world of life regulation—homeostasis—that I am so interested in, and it covers a wide range of body states. There is an action program of thirst that leads you to seek water when you are dehydrated, but also an action program of fear when you are threatened. Once the action program is deployed and the brain has the possibility of mapping what has happened in the body, then that leads to the emergence of the mental state. During the action program of fear, a collection of things happen in my body that change me and make me behave in a certain way whether I want to or not. As that is happening to me, I have a mental representation of that body state as much as I have a mental representation of what frightened me.
And out of that “mapping” of something happening within the body comes a feeling, which is different from an emotion?
Exactly. For me, it’s very important to separate emotion from feeling. We must separate the component that comes out of actions from the component that comes out of our perspective on those actions, which is feeling. Curiously, it’s also where the self emerges, and consciousness itself. Mind begins at the level of feeling. It’s when you have a feeling (even if you’re a very little creature) that you begin to have a mind and a self.
But that would imply that only creatures with a fully formed sense of their minds could have fully formed feelings—
No, no, no. I’m ready to give the very teeny brain of an insect—provided it has the possibility of representing its body states—the possibility of having feelings. In fact, I would be flabbergasted to discover that they don’t have feelings. Of course, what flies don’t have is all the intellect around those feelings that could make use of them: to found a religious order, or develop an art form, or write a poem. They can’t do that; but we can. In us, having feelings somehow allows us also to have creations that are responses to those feelings.
Do other animals have a kind of responsiveness to their feelings?
I’m not sure that I even understand your question.
Are dogs aware that they feel?
Of course. Of course dogs feel.
No, not “Do dogs feel?” I mean: is my dog Ferdinando conscious of feeling? Does he have feelings about his feelings?
[Thinks.] I don’t know. I would have my doubts.
But humans are certainly conscious of being responsive.
Yes. We’re aware of our feelings and are conscious of the pleasantness or unpleasantness associated with them. Look, what are the really powerful feelings that you deal with every day? Desires, appetites, hunger, thirst, pain—those are the basic things.
How much of the structure of civilization is devoted to controlling those basic things? Spinoza says that politics seeks to regulate such instincts for the common good.
We wouldn’t have music, art, religion, science, technology, economics, politics, justice, or moral philosophy without the impelling force of feelings.
Do people emote in predictable ways regardless of their culture? For instance, does everyone hear the Western minor mode in music as sad?
We now know enough to say yes to that question.
At the Brain and Creativity Institute [which Damasio directs], we have been doing cross-cultural studies of emotion. At first we thought we would find very different patterns, especially with social emotions. In fact, we don’t. Whether you are studying Chinese, Americans, or Iranians, you get very similar responses. There are lots of subtleties and lots of ways in which certain stimuli elicit different patterns of emotional response with different intensities, but the presence of sadness or joy is there with a uniformity that is strongly and beautifully human.
Could our emotions be augmented with implants or some other brain-interfacing technology?
Inasmuch as we can understand the neural processes behind any of these complex functions, once we do, the possibility of intervening is always there. Of course, we interface with brain function all the time: with diet, with alcohol, and with medications. So it’s not that surgical interventions will be any great novelty. What will be novel is to make those interventions cleanly so that they are targeted. No, the more serious issue is the moral situations that might arise.
Why?
Because it really depends on what the intervention is aimed at achieving.
Suppose the intervention is aimed at resuscitating your lost ability to move a limb, or to see or hear. Do I have any moral problem? Of course not. But what if it interferes with states of the brain that are influential in how you make your decisions? Then you are entering a realm that should be reserved for the person alone.
What has been the most useful technology for understanding the biological basis of consciousness?
Imaging technologies have made a powerful contribution. At the same time, I’m painfully aware that they are limited in what they give us.
If you could wish into existence a better technology for observing the brain, what would it be?
I would not want to go to only one level, because I don’t think the really interesting things occur at just one level. What we need are new techniques to understand the interrelation of levels. There are people who have spent a good part of their lives studying systems, which is the case with my wife and most of the people in our lab. We have done our work on neuroanatomy, and gone into cells only occasionally. But now we are actually studying the state of the functions of axons [nerve fibers in the brain], and we desperately need ways in which we can scale up from what we’ve found to higher and higher levels.
What two crustaceans and a timepiece have to do with the future of medical electronics.
By Nidhi Subbaraman
In Evgeny Katz’s vision of the future, medical implants will use the human body as a battery: they’ll run on the same juice that powers us human beings. His lab at Clarkson University has been building a biofuel cell, an energy harvester that has successfully drawn electrical energy from glucose coursing through the bloodstreams of snails, clams, and now lobsters.
Human medical implants powered by what we eat are a long way away, but in a new paper, Katz and his team demonstrate how their technology is maturing towards such a reality. That’s where the lobsters come in. Researchers from Clarkson University and the University of Vermont College of Medicine explain how they’ve powered a watch using glucose from two lobsters, connected as batteries would be, in series. They also show that it’s possible to keep a pacemaker ticking with glucose levels usually seen in the human body.
The key to this setup is an enzyme stationed at implanted electrodes made of carbon nanotubes. Together, the two efficiently convert chemical energy from glucose in an animal’s circulatory system to electricity.
In the past, these energy-harvesting biofuel cells have been tested in the ears of rabbits, in the abdomens of insects, and in the body cavities of snails and clams. But the lobsters are different: it’s the first time living organisms have powered an actual piece of consumer electronics.
With electrodes in their abdomens, the two lobsters powered the watch for an hour, until the lobsters’ glucose levels near the electrodes dropped. (They don’t feel any pain, a member of the team has explained, because they don’t have nerve endings where the electrodes were implanted.) The voltage picked up again, though, and the crustaceans kept the watch running for as long as they remained alive in the lab.
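As a rough sketch of why the two lobsters were wired in series: voltages of cells in series add, so two cells together can clear a device's minimum operating voltage that one alone cannot. All numbers below are hypothetical, chosen only for illustration; the article does not give the actual cell or watch voltages.

```python
# Illustrative sketch of series-connected biofuel cells.
# All voltage values are hypothetical assumptions, not from the paper.

def series_voltage(cell_voltages):
    """Cells in series add their voltages; the same current flows through all."""
    return sum(cell_voltages)

single_cell = 0.54     # hypothetical open-circuit voltage of one lobster cell (V)
watch_minimum = 1.0    # hypothetical minimum operating voltage of the watch (V)

v_one = series_voltage([single_cell])
v_two = series_voltage([single_cell, single_cell])

print(f"one cell:        {v_one:.2f} V, runs watch: {v_one >= watch_minimum}")
print(f"two in series:   {v_two:.2f} V, runs watch: {v_two >= watch_minimum}")
```

Under these assumed numbers, a single cell falls short of the watch's threshold while two in series exceed it, which is the usual motivation for stacking low-voltage biofuel cells.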
People with pacemakers are ideal bio-battery candidates. As an early test of the idea, the team hooked up a pacemaker to an artificial setup resembling the human circulatory system. It contained serum spiked with glucose at different levels, to represent glucose levels in the blood immediately after you hit the gym, while sitting at your desk at work, or if you’re diabetic. (Serum is blood with the proteins and cells filtered out.)
With its battery removed, the pacemaker became the first of its kind to run solely on glucose derived from body fluid, for five hours. It won’t be the last, though: Katz and co. have a list of other medical devices waiting their turn.
Frederic Kaplan began his talk by stating that the number of objects we have at home is huge (nearly 3,500), and that each of them has a different “value profile”. He showed curves that capture how the experienced value of an object evolves over time (see the curve below). A Roomba, for example, follows a “corkscrew” curve (c), whereas an Aibo, an entertainment robot, follows more of a “notebook” curve, where value grows over time through the relationship with its owner(s).
Frederic explained that we know how to deal with the middle and end of such a curve, but not the beginning: how to create the first part of the robot-owner relationship, which is a crucial question for designers of robots and communicating objects in general. There are many reasons for this: in Western culture, it is not easy to “raise” and talk to a robot; most people try, then stop, and only show the robot off when friends come to visit. The robot thus ends up as a rather expensive gadget.
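Kaplan's two curve shapes can be sketched as simple functions of ownership time. The exact functional forms below are invented for illustration; the talk only showed qualitative curves.

```python
# Illustrative sketch of Kaplan's "value profiles": experienced value of an
# object as a function of ownership time. Both curves are hypothetical shapes.
import math

def corkscrew_value(t_months):
    """Utility object (corkscrew, Roomba): value stays roughly flat over time."""
    return 1.0

def notebook_value(t_months):
    """Relational object (notebook, Aibo): value grows with accumulated use."""
    return 1.0 - math.exp(-t_months / 12.0)  # saturating growth

for t in (1, 6, 24):
    print(f"month {t:2d}: corkscrew={corkscrew_value(t):.2f}, "
          f"notebook={notebook_value(t):.2f}")
```

The design problem Kaplan points at lives at small `t_months`: the "notebook" curve starts near zero, so the owner must be carried through the low-value early phase before the relationship pays off.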
After moving from Sony to the CRAFT laboratory, Frederic shifted from robots to interactive furniture and became interested in how objects can be “robotized”, and in the idea that robots should not always look like robots. Since 1984, computers have not changed much (shapes and icons have been modified, but it is still the same story). We have changed the way we use computers (listening to music, watching photos, getting the news: not what computers were originally intended for), but the computers themselves did not change, so his team thought it would be interesting to build a robotic computer, as in the famous Apple commercial. They therefore designed the Wizkid, an “expressive computer” that recognizes people and gestures, proposing a new sort of interactivity. To some extent, he showed that you can have expressivity without an anthropomorphic robot (unlike the demo we had of the Speecys robot).
Some use cases:
- in the living room, the Wizkid can act as a central interface to media players: showing it a CD makes the robot play it. It can also take pictures autonomously and create a visual summary of an event, which can be sent to guests afterwards. It’s like an automatic logging system that remembers and uses that information.
- in the kitchen, the Wizkid can help you cook and shop. When the owner prepares a recipe, the Wizkid helps follow it step by step, tracking faces and gestures (and also making suggestions). It would also be possible to show it an item and have the Wizkid add it to the shopping list.
- games are also an interesting field: you can play augmented-reality games with the Wizkid, looking at yourself on the screen and seeing yourself in imaginary worlds.
As a conclusion, Frederic said that most people think robots will look like robots, but he claims that everyday objects can become robots and that the next generation of computer interfaces will be robotic. People used to go to the machine to interact; now interactivity comes to you. Computers used to live in their own world; now they live in yours.
Then Bruno Bonnell took the floor with his talk “from robota to homo robotus: revisiting Asimov’s laws of robotics”, an insightful presentation about how robot designers should revisit the definition of “robots” (and therefore Asimov’s laws). To him, there is a vocabulary problem when it comes to robots.
In Czech, “robota” means “forced labor”, and that origin has pervaded our representation of what a robot is: a mechanical slave. Hence Asimov’s laws of robotics. These laws work well for military or industrial robots, but what about leisure robots such as the Aibo, the Roomba, or the iRobiQ? We had the same problem with the word “computer”: only since World War II has the word “computer” (from the Latin computare, “to reckon”, “to sum up”) been applied to machines. The Oxford English Dictionary still describes a computer as “a person employed to make calculations in an observatory, in surveying, etc.”. We moved the word onto machines, and computers took over successive activities: systematic tasks, then creation-support tools, then an artistic medium, and finally an amplifier of imagination. It’s the same with animals: they used to be food, then working forces, then companions, and finally friends. Moreover, we don’t talk just about “animals”: there are ponies, dogs, etc., within a classification: animal, mammal, equid, horse. Computers could be classified along the same lines: order/family/genus/species.
So, what about robots? Are all these very different robots really the same? Couldn’t we sort them into a classification: a family of static robots, a family of moving robots, and so on? Then it is no longer “robot, robot, robot” but “Robots, Mover, Humanoid, iRobiQ”. What matters is that all the robots in such a classification are recognized as having different features and characteristics; we start recognizing that they are not all the same species. By classifying (giving a name), you open up different applications and can improve the quality of the product you are designing. Putting names on things helps to create them. It lets us go beyond the limits of the single “robot” vision, and it reconciles the idea of an anthropomorphic robot (like the Speecys robot we saw first) with a very different one (like Frederic’s Wizkid), since they belong to two different “species”.
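Bonnell's order/family/genus/species idea maps naturally onto a type hierarchy. The sketch below is one hypothetical way to express it; the class names follow his "Robots, Mover, Humanoid, iRobiQ" example, but the structure is invented for illustration.

```python
# Hypothetical sketch of Bonnell's robot taxonomy as a class hierarchy.
# Each level refines the one above, the way species refine a genus.

class Robot:                 # order: anything robotic
    category = "Robots"

class Mover(Robot):          # family: robots that move through space
    category = "Robots/Mover"

class Humanoid(Mover):       # genus: movers with an anthropomorphic body
    category = "Robots/Mover/Humanoid"

class IRobiQ(Humanoid):      # species: one concrete product
    category = "Robots/Mover/Humanoid/iRobiQ"

# Naming a level makes its distinct applications visible:
print(IRobiQ.category)               # the full lineage of one "species"
print(issubclass(IRobiQ, Robot))     # it is still recognizably a robot
```

This is the practical payoff Bonnell describes: once a Wizkid and a Speecys humanoid sit on different branches, they stop competing as rival definitions of "robot" and can be designed against different expectations.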
After this classification, we can turn to evolution: how to branch out the future of robots. The path could be: mechanical slave, then alternative to human action, then substitute for human care, then companion, and finally amplifier of the human body and mind. Is it sci-fi or reality? Today or tomorrow? Is it technically possible? We don’t know, but what matters is to start today and look ahead.
An interesting way to do so is to move away from practical robots and investigate useless ones, and not to be afraid of technical limitations (think of the people who designed Pong at Atari). To the question “what does the robot do?”, the answer is simple: it creates an emotional bond with humans (that would be the recipe for a robot’s success). The important characteristics are therefore fun, thrill, and so on, which is very close to what video games do: they create an emotional bond with players because they are faithful to a reality; they are reliable, available, adaptable, and above all TRUSTWORTHY. In the same fashion, robots should be trustworthy. The bottom line is thus that we should forget Asimov’s laws and invent a Tao of robotics in which “gameplay” is the key to accepting robots as part of our reality.
Also, the funny part of the session was the first talk, in which Tomoaki Kasuga demonstrated his robot, whose “charm point” is the hip (or something else, as attested by the picture below), especially when dancing on stage. What Tomoaki showed is that expressivity (through dance, movement, and the quality of the pieces) is very important for human-computer interaction.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a collection of references and resources. It is shared with all those interested in the same topics as us, in the hope that they too will find valuable references and content here.