As we still lack a decent search engine on this blog and don't use a "tag cloud"... this post may help you navigate the updated content of | rblg (as of 09.2023) via all its tags!
FIND BELOW ALL THE TAGS THAT CAN BE USED TO NAVIGATE IN THE CONTENTS OF | RBLG BLOG:
(The tags appear just below if you're browsing the blog's HTML pages, or here for RSS readers.)
--
Note that we had to hit the "pause" button on our reblogging activities a while ago, mainly because we ran out of time, but also because we received complaints from a major stock-image company about some of the images displayed on | rblg (an activity we still considered "fair use": we've never made any money from, or advertised on, this site).
Nevertheless, we continue to publish, from time to time, information about the activities of fabric | ch or content directly related to its work (documentation).
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.
A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.
Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
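To see how literal this storage is, here is a minimal Python sketch (not from the essay; the ASCII bit patterns and the JPEG header bytes are standard conventions, the rest is illustration only):

```python
# A computer literally stores the word "dog" as a fixed pattern of bytes (ASCII here).
word = "dog"
pattern = word.encode("ascii")                      # b'dog'
print([f"{b:08b}" for b in pattern])                # ['01100100', '01101111', '01100111']

# An image is the same idea at a larger scale: an exact sequence of bytes, preceded
# by a few "special characters" (a header) that tell the machine to expect an image.
# Every JPEG file, for instance, begins with the bytes FF D8 FF.
jpeg_header = bytes([0xFF, 0xD8, 0xFF])
print(jpeg_header.hex(" "))                         # ff d8 ff
```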
Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?
In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.
In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
"The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain"
Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.
This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.
Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.
The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.
But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.
Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.
The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
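Spelled out schematically (an illustration added here, not the essay's own notation), the inference has the shape of a classic undistributed-middle fallacy:

```latex
\begin{align*}
\text{P1: } & \forall x\,\bigl(\mathrm{Computer}(x) \rightarrow \mathrm{CanBehaveIntelligently}(x)\bigr)\\
\text{P2: } & \forall x\,\bigl(\mathrm{Computer}(x) \rightarrow \mathrm{InformationProcessor}(x)\bigr)\\
\text{C (does not follow): } & \forall x\,\bigl(\mathrm{CanBehaveIntelligently}(x) \rightarrow \mathrm{InformationProcessor}(x)\bigr)
\end{align*}
```

The same form would let us conclude that everything with four sides has equal sides, since all squares have four sides and all squares have equal sides.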
Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.
If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?
In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.
Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):
And here is the drawing she subsequently made with a dollar bill present:
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.
What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?
Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
"The idea that memories are stored in individual neurons is preposterous: how and where is the memory stored in the cell?"
A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.
The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?
So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.
The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.
Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.
From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.
As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.
We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.
A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
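To see how little machinery such a heuristic needs, here is a toy, one-dimensional Python sketch in the same spirit (it is closer to the related 'optical acceleration cancellation' idea than to McBeath's two-dimensional account, and every number and gain below is an arbitrary assumption):

```python
# Toy "control the optics, not the physics" catching heuristic.
g, dt = 9.81, 0.02
ball_x, ball_z, ball_vx, ball_vz = 0.0, 1.5, 17.0, 15.0   # ball state (m, m/s)
fielder_x, fielder_v = 45.0, 0.0                          # fielder starts 45 m away
gain = 40.0                                               # reaction gain (assumed)
tan_history = []                                          # recent optical values

while ball_z > 0.0 and (fielder_x - ball_x) > 1.0:
    # ballistic flight of the ball
    ball_x += ball_vx * dt
    ball_z += ball_vz * dt
    ball_vz -= g * dt

    # the only quantity the fielder "uses": the tangent of the ball's elevation angle
    tan_history.append(ball_z / max(fielder_x - ball_x, 0.5))
    if len(tan_history) >= 3:
        optical_accel = tan_history[-1] - 2 * tan_history[-2] + tan_history[-3]
        # no internal model, no predicted landing point: just null the optical
        # acceleration by drifting forward (negative) or backward (positive)
        fielder_v += gain * optical_accel / dt
    fielder_x += fielder_v * dt

print(f"final gap between fielder and ball: {abs(fielder_x - ball_x):.1f} m")
```

The fielder in this sketch never estimates forces, angles or landing points; it only adjusts its speed so that one optical quantity keeps changing steadily.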
"We will never have to worry about a human mind going amok in cyberspace, and we will never achieve immortality through downloading."
Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.
One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.
Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.
Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.
This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story), they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).
This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.
Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.
Note: following the two previous posts about algorithms and bots ("how do they ... ?"), here comes a third one.
Slightly different and not really dedicated to bots per se, but it can nonetheless be considered related to "machinic intelligence". This time it concerns techniques and algorithms developed to understand the brain (the BRAIN Initiative or, in Europe, the competing Blue Brain Project).
In a funny reversal, scientists applied techniques and algorithms developed to track patterns of human intelligence in large data sets to the computer itself. How does a simple chip "compute information"? The result is surprising: the analysis cannot explain how the computer "thinks" (or rather works, in this case)!
This seems to confirm that the brain is certainly not a computer (made out of flesh)...
When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?
Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.
But those kinds of data sets don’t yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.
They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Game System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.
The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.
For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”
Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
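As a rough analogue of this "lesioning" exercise (a sketch only; the study itself simulated the MOS 6502 transistor by transistor, and none of the code below comes from it), one can build a tiny circuit whose logic is fully known, knock out one gate at a time, and see which behaviour breaks:

```python
from itertools import product

def full_adder(a, b, cin, dead_gate=None):
    """One-bit full adder built from named gates; `dead_gate` forces that gate's output to 0."""
    gate = lambda name, val: 0 if name == dead_gate else val
    x1 = gate("xor1", a ^ b)
    s  = gate("xor2", x1 ^ cin)   # sum bit
    a1 = gate("and1", a & b)
    a2 = gate("and2", x1 & cin)
    c  = gate("or1",  a1 | a2)    # carry bit
    return s, c

for g in ["xor1", "xor2", "and1", "and2", "or1"]:
    broken = [inp for inp in product((0, 1), repeat=3)
              if full_adder(*inp, dead_gate=g) != full_adder(*inp)]
    print(f"lesioning {g}: wrong output on {len(broken)} of 8 input patterns")
```

The resulting lesion map says which gate matters for which inputs, but by itself it does not explain how the adder adds, which is essentially the gap the researchers report for the processor.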
In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”
It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.
First, simply amassing a handful of high-quality data sets of the brain may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don’t fully describe the brain’s function.
As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.
For the first time, a team of researchers have used neuroimaging to visualise the effect of LSD on the human brain.
A lot of research has been conducted into how the psychedelic drug lysergic acid diethylamide, or LSD, affects human behaviour, but what does it actually do to the brain? To find out, a team of researchers from Imperial College London gave test subjects the drug and documented the results using brain imaging techniques.
Harry L Williams administers LSD 25 to Carl Pfeiffer, chairman of Emory University's Pharmacological Department, in 1955. The experiment was documented using the microphone. Bettmann/Corbis.
LSD is known for its hallucinogenic properties and for altering consciousness, and the results of the study revealed why.
"We observed brain changes under LSD that suggested our volunteers were 'seeing with their eyes shut' -- albeit they were seeing things from their imagination rather than from the outside world," explained study leader Robin Carhart-Harris in a statement.
"We saw that many more areas of the brain than normal were contributing to visual processing under LSD -- even though the volunteers' eyes were closed. Furthermore, the size of this effect correlated with volunteers' ratings of complex, dreamlike visions."
The top row shows the brains of the study participants on the placebo; the bottom row shows the study participants on LSD. Imperial College London.
The study involved 20 healthy participants, each of whom had previously taken some form of psychedelic drug. Each participant received either 75 micrograms of LSD or a placebo, and their brains were then imaged.
The results revealed that the barriers between the sections of the brain that perform specialised functions break down under the influence of LSD. This means that, as mentioned, more of the brain is involved in visual processing, which causes the hallucinations, but it also contributes to the altered consciousness associated with LSD.
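As a rough illustration of the kind of measure behind statements like "more of the brain is involved in visual processing" (simulated signals and invented coupling values, not the Imperial College pipeline), functional connectivity is often estimated as the correlation between regional activity time series:

```python
import numpy as np

# Functional connectivity sketch: correlation of several "regions" with a visual signal.
# All data below are simulated; only the general method (correlation) is standard.
rng = np.random.default_rng(0)
t = 300                                   # time points in a scan
visual = rng.standard_normal(t)           # stand-in "visual cortex" signal

def connectivity_to_visual(coupling):
    """Correlation of five other 'regions' with the visual signal,
    for a given coupling strength (0 = independent noise)."""
    regions = coupling * visual + (1 - coupling) * rng.standard_normal((5, t))
    return np.corrcoef(np.vstack([visual, regions]))[0, 1:]

print("placebo-like coupling:", np.round(connectivity_to_visual(0.1), 2))
print("LSD-like coupling:    ", np.round(connectivity_to_visual(0.5), 2))
```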
"It is also related to what people sometimes call 'ego-dissolution', which means the normal sense of self is broken down and replaced by a sense of reconnection with themselves, others and the natural world. This experience is sometimes framed in a religious or spiritual way -- and seems to be associated with improvements in well-being after the drug's effects have subsided," Carhart-Harris said.
"Our brains become more constrained and compartmentalised as we develop from infancy into adulthood, and we may become more focused and rigid in our thinking as we mature. In many ways, the brain in the LSD state resembles the state our brains were in when we were infants: free and unconstrained. This also makes sense when we consider the hyper-emotional and imaginative nature of an infant's mind."
Adding music to the mix caused even more interesting changes in brain activity, causing the visual cortex to receive more information from the region of the brain associated with mental imagery and personal memory. Under the influence of both music and LSD, the study participants reported seeing even more complex visions, such as memories played out as scenes.
"Scientists have waited 50 years for this moment -- the revealing of how LSD alters our brain biology," said senior researcher David Nutt , Edmon J Safra Chair in Neuropsychopharmacology.
"For the first time we can really see what's happening in the brain during the psychedelic state, and can better understand why LSD had such a profound impact on self-awareness in users and on music and art. This could have great implications for psychiatry, and helping patients overcome conditions such as depression."
For decades, biologists spurned emotion and feeling as uninteresting. But Antonio Damasio demonstrated that they were central to the life-regulating processes of almost all living creatures.
Damasio’s essential insight is that feelings are “mental experiences of body states,” which arise as the brain interprets emotions, themselves physical states arising from the body’s responses to external stimuli. (The order of such events is: I am threatened, experience fear, and feel horror.) He has suggested that consciousness, whether the primitive “core consciousness” of animals or the “extended” self-conception of humans, requiring autobiographical memory, emerges from emotions and feelings.
His insight, dating back to the early 1990s, stemmed from the clinical study of brain lesions in patients unable to make good decisions because their emotions were impaired, but whose reason was otherwise unaffected—research made possible by the neuroanatomical studies of his wife and frequent coauthor, Hanna Damasio. Their work has always depended on advances in technology. More recently, tools such as functional neuroimaging, which measures the relationship between mental processes and activity in parts of the brain, have complemented the Damasios’ use of neuroanatomy.
A professor of neuroscience at the University of Southern California, Damasio has written four artful books that explain his research to a broader audience and relate its discoveries to the abiding concerns of philosophy. He believes that neurobiological research has a distinctly philosophical purpose: “The scientist’s voice need not be the mere record of life as it is,” he wrote in a book on Descartes. “If only we want it, deeper knowledge of brain and mind will help achieve … happiness.”
Antonio Damasio talked with Jason Pontin, the editor in chief of MIT Technology Review.
When you were a young scientist in the late 1970s, emotion was not thought a proper field of inquiry.
We were told very often, “Well, you’re going to be lost, because there’s absolutely nothing there of consequence.” We were pitied for our poor choice.
How so?
William James had tackled emotion richly and intelligently. But his ideas [mainly that emotions are the brain’s mapping of body states, ideas that Damasio revived and experimentally verified] had led to huge controversies in the beginning of the 20th century that ended nowhere. Somehow researchers had the sense that emotion would not, in the end, be sufficiently distinctive—because animals had emotions, too. But what animals don’t have, researchers told themselves, is language like we do, nor reason or creativity—so let’s study that, they thought. And in fact, it’s true that most creatures on the face of the earth do have something that could be called emotion, and something that could be called feeling. But that doesn’t mean we humans don’t use emotions and feelings in particular ways.
Because we have a conscious sense of self?
Yes. What’s distinctive about humans is that we make use of fundamental processes of life regulation that include things like emotion and feeling, but we connect them with intellectual processes in such a way that we create a whole new world around us.
What made you so interested in emotions as an area of study?
There was something that appealed to me because of my interest in literature and music. It was a way of combining what was important to me with what I thought was going to be important scientifically.
What have you learned?
There are certain action programs that are obviously permanently installed in our organs and in our brains so that we can survive, flourish, procreate, and, eventually, die. This is the world of life regulation—homeostasis—that I am so interested in, and it covers a wide range of body states. There is an action program of thirst that leads you to seek water when you are dehydrated, but also an action program of fear when you are threatened. Once the action program is deployed and the brain has the possibility of mapping what has happened in the body, then that leads to the emergence of the mental state. During the action program of fear, a collection of things happen in my body that change me and make me behave in a certain way whether I want to or not. As that is happening to me, I have a mental representation of that body state as much as I have a mental representation of what frightened me.
And out of that “mapping” of something happening within the body comes a feeling, which is different from an emotion?
Exactly. For me, it’s very important to separate emotion from feeling. We must separate the component that comes out of actions from the component that comes out of our perspective on those actions, which is feeling. Curiously, it’s also where the self emerges, and consciousness itself. Mind begins at the level of feeling. It’s when you have a feeling (even if you’re a very little creature) that you begin to have a mind and a self.
But that would imply that only creatures with a fully formed sense of their minds could have fully formed feelings—
No, no, no. I’m ready to give the very teeny brain of an insect—provided it has the possibility of representing its body states—the possibility of having feelings. In fact, I would be flabbergasted to discover that they don’t have feelings. Of course, what flies don’t have is all the intellect around those feelings that could make use of them: to found a religious order, or develop an art form, or write a poem. They can’t do that; but we can. In us, having feelings somehow allows us also to have creations that are responses to those feelings.
Do other animals have a kind of responsiveness to their feelings?
I’m not sure that I even understand your question.
Are dogs aware that they feel?
Of course. Of course dogs feel.
No, not “Do dogs feel?” I mean: is my dog Ferdinando conscious of feeling? Does he have feelings about his feelings?
[Thinks.] I don’t know. I would have my doubts.
But humans are certainly conscious of being responsive.
Yes. We’re aware of our feelings and are conscious of the pleasantness or unpleasantness associated with them. Look, what are the really powerful feelings that you deal with every day? Desires, appetites, hunger, thirst, pain—those are the basic things.
How much of the structure of civilization is devoted to controlling those basic things? Spinoza says that politics seeks to regulate such instincts for the common good.
We wouldn’t have music, art, religion, science, technology, economics, politics, justice, or moral philosophy without the impelling force of feelings.
Do people emote in predictable ways regardless of their culture? For instance, does everyone hear the Western minor mode in music as sad?
We now know enough to say yes to that question.
At the Brain and Creativity Institute [which Damasio directs], we have been doing cross-cultural studies of emotion. At first we thought we would find very different patterns, especially with social emotions. In fact, we don’t. Whether you are studying Chinese, Americans, or Iranians, you get very similar responses. There are lots of subtleties and lots of ways in which certain stimuli elicit different patterns of emotional response with different intensities, but the presence of sadness or joy is there with a uniformity that is strongly and beautifully human.
Could our emotions be augmented with implants or some other brain-interfacing technology?
Inasmuch as we can understand the neural processes behind any of these complex functions, once we do, the possibility of intervening is always there. Of course, we interface with brain function all the time: with diet, with alcohol, and with medications. So it’s not that surgical interventions will be any great novelty. What will be novel is to make those interventions cleanly so that they are targeted. No, the more serious issue is the moral situations that might arise.
Why?
Because it really depends on what the intervention is aimed at achieving.
Suppose the intervention is aimed at resuscitating your lost ability to move a limb, or to see or hear. Do I have any moral problem? Of course not. But what if it interferes with states of the brain that are influential in how you make your decisions? Then you are entering a realm that should be reserved for the person alone.
What has been the most useful technology for understanding the biological basis of consciousness?
Imaging technologies have made a powerful contribution. At the same time, I’m painfully aware that they are limited in what they give us.
If you could wish into existence a better technology for observing the brain, what would it be?
I would not want to go to only one level, because I don’t think the really interesting things occur at just one level. What we need are new techniques to understand the interrelation of levels. There are people who have spent a good part of their lives studying systems, which is the case with my wife and most of the people in our lab. We have done our work on neuroanatomy, and gone into cells only occasionally. But now we are actually studying the state of the functions of axons [nerve fibers in the brain], and we desperately need ways in which we can scale up from what we’ve found to higher and higher levels.
In 2007, The New York Times published an article entitled ‘For The Brain, Remembering Is Like Reliving’. Dutch neuroscientists had evidence that the act of recollection is not significantly different, in terms of the crests and falls of brainwaves and the firing of neurons, from the act of doing.
We know from our daily lives that there is a mental capacity to relive spaces, experiences and conversations without the dissonance of representation. In psychology, psychoanalysis and neurology, the memories of spaces and activities in our past dictate our actions in the present. The field of psychogeography is founded on the spatial effects of places and of movements through space, and psychoanalysis is based on the premise that feelings suppressed in the past can emerge unconsciously in the reconstruction of that past, through writing or discussion.
The site of the exhibition, the traditional site of display and representation, is the field of operation for artist and architect Alex Schweder. Schweder’s work deals precisely with the possibility that spaces are scripted and informed by bodies and occupation, that the boundaries between them are permeable, and that behavioural patterns can be manipulated with careful intervention.
In this one-off work, held in the Opus gallery for four days, the artist will work with the architecture of the space – using the architecture of walls, doors, memories, history and conversation to script the space and, through strategic means, transform its reading and semiotics for the visitor.
In seven minutes the intention of the artist will become clear.
Architecture researchers in Edinburgh have completed a breakthrough study on brain activity recorded in situ by using mobile electroencephalography (EEG) technology, which records live neural impressions of subjects moving through a city. Excitingly, this technology could help us define how different urban environments affect us, a discovery that could have provocative implications for architecture. Read the full story on Salon. Also, check out this article from Fast Company about how a similar mobile technology could show us the effects of urban design – not on our brains, but on our bodies.
Personal comment:
One day after the official start of the Blue Brain Project (one of the biggest joint efforts to date to map and understand the brain, just a few miles away from our office), there will undoubtedly be an incredible research future in the more than likely meeting of architecture, environment design and the neurosciences...
Enhancing the flow of information through the brain could be crucial to making neuroprosthetics practical.
The abilities to learn, remember, evaluate, and decide are central to who we are and how we live. Damage to or dysfunction of the brain circuitry that supports these functions can be devastating, leading to Alzheimer’s, schizophrenia, PTSD, or many other disorders. Current treatments, which are drug-based or behavioral, have limited efficacy in treating these problems. There is a pressing need for something more effective.
One promising approach is to build an interactive device to help the brain learn, remember, evaluate, and decide. One might, for example, construct a system that would identify patterns of brain activity tied to particular experiences and then, when called upon, impose those patterns on the brain. Ted Berger, Sam Deadwyler, Robert Hampson, and colleagues have used this approach (see “Memory Implants”). They are able to identify and then impose, via electrical stimulation, specific patterns of brain activity that improve a rat’s performance in a memory task. They have also shown that, in monkeys, stimulation can help the animal perform a task in which it must remember a particular item.
Their ability to improve performance is impressive. However, there are fundamental limitations to an approach where the desired neural pattern must be known and then imposed. The animals used in their studies were trained to do a single task for weeks or months and the stimulation was customized to produce the right outcome for that task. This is only feasible for a few well-learned experiences in a predictable and constrained environment.
New and complex experiences engage large numbers of neurons scattered across multiple brain regions. These individual neurons are physically adjacent to other neurons that contribute to other memories, so selectively stimulating the right neurons is difficult if not impossible. And to make matters even more challenging, the set of neurons involved in storing a particular memory can evolve as that memory is processed in the brain. As a result, imposing the right patterns for all desired experiences, both past and future, requires technology far beyond what is possible today.
I believe the answer to be an alternative approach based on enhancing flows of information through the brain. The importance of information flow can be appreciated when we consider how the brain makes and uses memories. During learning, information from the outside world drives brain activity and changes in the connections between neurons. This occurs most prominently in the hippocampus, a brain structure critical for laying down memories for the events of daily life. Thus, during learning, external information must flow to the hippocampus if memories are to be stored.
Once information has been stored in the hippocampus, a different flow of information is required to create a long-lasting memory. During periods of rest and sleep, the hippocampus “reactivates” stored memories, driving activity throughout the rest of the brain. Current theories suggest that the hippocampus acts like a teacher, repeatedly sending out what it has learned to the rest of the brain to help engrain memories in more stable and distributed brain networks. This “consolidation” process depends on the flow of internal information from the hippocampus to the rest of the brain.
Finally, when a memory is retrieved a similar pattern of internally driven flow is required. For many memories, the hippocampus is required for memory retrieval, and once again hippocampal activity drives the reinstatement of the memory pattern throughout the brain. This process depends on the same hippocampal reactivation events that contribute to memory consolidation.
Different flows of information can be engaged at different intensities as well. Some memories stay with us and guide our choices for a lifetime, while others fade with time. We and others have shown that new and rewarded experiences drive both profound changes in brain activity and strong memory reactivation. Familiar and unrewarded experiences drive smaller changes and weaker reactivation. Further, we have recently shown that the intensity of memory reactivation in the hippocampus, measured as the number of neurons active together during each reactivation event, can predict whether the next decision an animal makes is going to be right or wrong. Our findings suggest that when the animal reactivates effectively, it does a better job of considering possible future options (based on past experiences) and then makes better choices.
These results point to an alternative approach to helping the brain learn, remember and decide more effectively. Instead of imposing a specific pattern for each experience, we could enhance the flow of information to the hippocampus during learning and the intensity of memory reactivation from the hippocampus during memory consolidation and retrieval. We are able to detect signatures of different flows of information associated with learning and remembering. We are also beginning to understand the circuits that control this flow, which include neuromodulatory regions that are often damaged in disease states. Importantly, these modulatory circuits are more localized and easier to manipulate than the distributed populations of neurons in the hippocampus and elsewhere that are activated for each specific experience.
Thus, an effective cognitive neuroprosthetic would detect what the brain is trying to do (learn, consolidate or retrieve) and then amplify activity in the relevant control circuits to enhance the essential flows of information. We know that even in diseases like Alzheimer’s, where there is substantial damage to the brain, patients have good days and bad days. On good days the brain smoothly transitions among distinct functions, each associated with a particular flow of information. On bad days these functions may become less distinct and the flows of information muddled. Our goal, then, would be to restore the flows of information underlying different mental functions.
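A schematic sketch of this closed-loop logic might look like the following; the fake spike raster, the thresholds and the "stimulation" call are all invented placeholders, not the author's methods:

```python
import numpy as np

# Monitor population activity, treat moments when many neurons fire together as
# candidate reactivation events, and amplify the control circuit when reactivation
# looks weak. Everything numerical here is assumed for illustration.
rng = np.random.default_rng(1)
n_neurons, n_bins = 200, 1000
spikes = rng.random((n_neurons, n_bins)) < 0.02      # fake binned spike raster

EVENT_THRESHOLD = 8     # co-active neurons that count as a reactivation event (assumed)
WEAK_CUTOFF     = 20    # events weaker than this get amplified (assumed)

def stimulate_modulatory_circuit(bin_index):
    # Placeholder for driving a neuromodulatory control circuit; no real device here.
    print(f"bin {bin_index}: weak reactivation detected, amplifying")

coactive = spikes.sum(axis=0)                        # neurons active in each time bin
weak_events = np.flatnonzero((coactive >= EVENT_THRESHOLD) & (coactive < WEAK_CUTOFF))
for i in weak_events[:3]:                            # act on the first few, for brevity
    stimulate_modulatory_circuit(i)
print(f"{weak_events.size} candidate weak reactivation events in {n_bins} bins")
```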
A prosthetic device has the potential to adapt to the moment-by-moment changes in information flow necessary for different types of mental processing. By contrast, drugs that seek to treat cognitive dysfunction may effectively amplify one type of processing but cannot adapt to the dynamic requirements of mental function. Thus, constructing a device that makes the brain’s control circuits work more effectively offers a powerful approach to treating disease and maximizing mental capacity.
Loren M. Frank is a professor at the Center for Integrative Neuroscience and the Department of Physiology at the University of California, San Francisco.
A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines.
Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is “a bunch of hot air.”
“The brain is not computable and no engineering can reproduce it,” says Nicolelis, author of several pioneering papers on brain-machine interfaces.
The Singularity, of course, is that moment when a computer super-intelligence emerges and changes the world in ways beyond our comprehension.
Among the idea’s promoters is the futurist Ray Kurzweil, recently hired at Google as a director of engineering, who has been predicting not only that machine intelligence will exceed our own but that people will be able to download their thoughts and memories into computers (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”).
Nicolelis calls that idea sheer bunk. “Downloads will never happen,” Nicolelis said during remarks made at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday. “There are a lot of people selling the idea that you can mimic the brain with a computer.”
The debate over whether the brain is a kind of computer has been running for decades. Many scientists think it’s possible, in theory, for a computer to equal the brain given sufficient computer power and an understanding of how the brain works.
Kurzweil delves into the idea of “reverse-engineering” the brain in his latest book, How to Create a Mind: The Secret of Human Thought Revealed, in which he says even though the brain may be immensely complex, “the fact that it contains many billions of cells and trillions of connections does not necessarily make its primary method complex.”
But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, non-linear interactions amongst billions of cells, Nicolelis says.
“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”
The neuroscientist, originally from Brazil, instead thinks that humans will increasingly subsume machines (an idea, incidentally, that’s also part of Kurzweil’s predictions).
In a study published last week, for instance, Nicolelis’ group at Duke used brain implants to allow mice to sense infrared light, something mammals can’t normally perceive. They did it by wiring a head-mounted infrared sensor to electrodes implanted into a part of the brain called the somatosensory cortex.
The experiment, in which several mice were able to follow sensory cues from the infrared detector to obtain a reward, was the first ever to use a neural implant to add a new sense to an animal, Nicolelis says.
That’s important because the human brain has evolved to take the external world—our surroundings and the tools we use—and create representations of them in our neural pathways. As a result, a talented basketball player perceives the ball “as just an extension of himself” says Nicolelis.
Similarly, Nicolelis thinks in the future humans with brain implants might be able to sense X-rays, operate distant machines, or navigate in virtual space with their thoughts, since the brain will accommodate foreign objects including computers as part of itself.
Recently, Nicolelis’s Duke lab has been looking to put an exclamation point on these ideas. In one recent experiment, they used a brain implant so that a monkey could control a full-body computer avatar, explore a virtual world, and even physically sense it.
In other words, the human brain creates models of tools and machines all the time, and brain implants will just extend that capability. Nicolelis jokes that if he ever opened a retail store for brain implants, he’d call it Machines“R”Us.
But, if he’s right, us ain’t machines, and never will be.
Optogenetics allows researchers to explore a growing range of behavior.
By Emily Singer
Three new experiments highlight the power of optogenetics—a type of genetic engineering that allows scientists to control brain cells with light.
Karl Deisseroth and colleagues at Stanford University used light to trigger and then alleviate social deficits in mice that resemble those seen in autism. Researchers targeted a highly evolved part of the brain called the prefrontal cortex, which is well connected to other brain regions and involved in planning, execution, personality and social behavior. They engineered cells to become either hyperactive or underactive in response to specific wavelengths of light.
The experimental mice exhibited no difference from the normal mice in tests of their anxiety levels, their tendency to move around or their curiosity about new objects. But, the team observed, the animals in which medial prefrontal cortex excitability had been optogenetically increased lost virtually all interest in engaging with the other mice to which they were exposed. (The normal mice were much more curious about one another.)
The findings support one of the theories behind the neurodevelopmental deficits of autism and schizophrenia: that in these disorders, the brain is wired in a way that makes it hyperactive, or overly susceptible to overstimulation. That may explain why many autistic children are very sensitive to loud noises or other environmental stimuli.
"Boosting their excitatory nerve cells largely abolished their social behavior," said Deisseroth, [associate professor of psychiatry and behavioral sciences and of bioengineering and the study's senior author]. In addition, these mice's brains showed the same gamma-oscillation pattern that is observed among many autistic and schizophrenic patients. "When you raise the firing likelihood of excitatory cells in the medial prefrontal cortex, you see an increased gamma oscillation right away, just as one would predict it would if this change in the excitatory/inhibitory balance were in fact relevant."
In a second study, from Japan, researchers used optogenetics to make mice fall asleep by engineering a specific type of neuron in the hypothalamus, part of the brain that regulates sleep. Shining light on these neurons inhibited their activity, sending the mice into dreamless (or non-REM) sleep. The research, published this month in the Journal of Neuroscience, might shed light on narcolepsy, a disorder of sudden sleep attacks.
Rather than making mice fall asleep, a third group of researchers used optogenetics to disrupt sleep in mice, which in turn affected their memory. Previous research has shown that sleep is important for consolidating, or storing, memories, and that conditions characterized by sleep deficits, such as sleep apnea, often come with memory deficits as well. But it has been difficult to analyze the effect of more subtle disruptions to sleep.
The new study shows that "regardless of the total amount of sleep, a minimal unit of uninterrupted sleep is crucial for memory consolidation," the authors write in the study published online July 25 in the Proceedings of the National Academy of Sciences.
They genetically engineered a group of neurons involved in switching between sleep and wake to be sensitive to light. Stimulating these cells with 10-second bursts of light fragmented the animals' sleep without affecting total sleep time or quality and composition of sleep.
After manipulating the mice's sleep, the researchers had the animals undergo a task during which they were placed in a box with two objects: one to which they had previously been exposed, and another that was new to them. Rodents' natural tendency is to explore novel objects, so if they spent more time with the new object, it would indicate that they remembered the other, now familiar object. In this case, the researchers found that the mice with fragmented sleep didn't explore the novel object longer than the familiar one — as the control mice did — showing that their memory was affected.
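For readers curious how such a preference is usually quantified, a common readout is a discrimination (novelty-preference) index computed from exploration times; the sketch below uses invented numbers, not data from the study:

```python
# Discrimination index: (novel - familiar) / total exploration time.
# Around 0 means no memory of the familiar object; positive means novelty preference.
def discrimination_index(time_novel_s, time_familiar_s):
    total = time_novel_s + time_familiar_s
    return (time_novel_s - time_familiar_s) / total if total else 0.0

print("control-like mouse:          ", round(discrimination_index(42.0, 18.0), 2))  # prefers novelty
print("fragmented-sleep-like mouse: ", round(discrimination_index(31.0, 29.0), 2))  # near chance
```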
The findings, "point to a specific characteristic of sleep — continuity — as being critical for memory," said [H. Craig Heller, professor of biology at Stanford and one of the authors of the study.]
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in the course of our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a pool of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.