As we continue to lack a decent search engine on this blog and don't use a "tag cloud" ... this post could help you navigate the updated content of | rblg (as of 09.2023) via all its tags!
FIND BELOW ALL THE TAGS THAT CAN BE USED TO NAVIGATE IN THE CONTENTS OF | RBLG BLOG:
(to be seen just below if you're navigating on the blog's html pages or here for rss readers)
--
Note that we had to hit the "pause" button on our reblogging activities a while ago (mainly because we ran out of time, but also because we received complaints from a major image stock company about some images that were displayed on | rblg, an activity that we felt was still "fair use" - we've never made any money or advertised on this site).
Nevertheless, we continue to publish from time to time information on the activities of fabric | ch, or content directly related to its work (documentation).
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.
A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.
Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
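To make the contrast concrete, here is a minimal Python sketch (mine, not the author's) of what storing, copying and rule-driven transforming of a symbolic representation literally looks like on a computer; nothing here is claimed about brains, which is exactly the article's point.

```python
# On a computer, "dog" really is a stored bit pattern, and editing it really is
# a rule-driven operation on that stored pattern.

data = "dog".encode("ascii")               # three bytes, one per letter
print([f"{b:08b}" for b in data])          # the literal bit patterns: 'd' is 01100100, etc.

copy = bytearray(data)                     # an exact copy placed in another memory location
copy[0:1] = b"f"                           # a transformation applied by explicit rule: d -> f
print(copy.decode("ascii"))                # prints "fog" -- the stored pattern was changed
```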
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?
In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.
In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
"The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain"
Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.
This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.
Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.
The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.
But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.
Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.
The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
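Written as a bare logical schema (my formalisation, not the author's notation), the gap is plain: both premises constrain computers, while the conclusion ranges over every intelligent entity.

```latex
% P1: every computer can behave intelligently
% P2: every computer is an information processor
% C : every intelligent entity is an information processor  (does not follow)
\forall x\,(\mathrm{Comp}(x) \rightarrow \mathrm{Intel}(x)),\quad
\forall x\,(\mathrm{Comp}(x) \rightarrow \mathrm{IP}(x))
\;\not\vdash\;
\forall x\,(\mathrm{Intel}(x) \rightarrow \mathrm{IP}(x))
```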
Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.
If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?
In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.
Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):
And here is the drawing she subsequently made with a dollar bill present:
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.
What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?
Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
"The idea that memories are stored in individual neurons is preposterous: how and where is the memory stored in the cell?"
A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.
The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?
So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.
The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.
Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.
From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.
As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.
We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.
A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
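As a rough illustration of the difference, here is a toy 2D simulation (my sketch, with assumed launch and running speeds; not McBeath's model or code) of the heuristic: the fielder never computes a landing point, they only watch the tangent of the ball's elevation angle and step forward or back so that it keeps rising at a constant rate.

```python
# Toy sketch of "optical acceleration cancellation", the simple strategy behind
# the linear-optical-trajectory account of fly-ball catching. All numbers are
# assumed for illustration. The fielder uses one optical variable -- the tangent
# of the ball's elevation angle -- and cancels its acceleration by moving;
# no trajectory model, no prediction of the landing point.

import math

G = 9.81      # gravity, m/s^2
DT = 0.02     # time step, s

def simulate(speed=30.0, angle_deg=50.0, fielder_x=95.0, run_speed=7.0):
    angle = math.radians(angle_deg)
    bx, by = 0.0, 1.0                                   # ball position, m
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    fx = fielder_x                                      # fielder position on the ground
    prev_tan = prev_rate = None

    while by > 0.0:
        bx, by, vy = bx + vx * DT, by + vy * DT, vy - G * DT   # plain projectile motion

        horiz = max(fx - bx, 1e-3)                      # horizontal gap, fielder to ball
        tan_theta = by / horiz                          # the only quantity the fielder "sees"

        if prev_tan is not None:
            rate = (tan_theta - prev_tan) / DT
            if prev_rate is not None:
                optical_accel = (rate - prev_rate) / DT
                if optical_accel > 0:                   # image speeding up: ball will land behind
                    fx += run_speed * DT                # -> back up
                elif optical_accel < 0:                 # image slowing down: ball will land short
                    fx -= run_speed * DT                # -> run in
            prev_rate = rate
        prev_tan = tan_theta

    return bx, fx

if __name__ == "__main__":
    landing, fielder = simulate()
    print(f"ball comes down at x = {landing:.1f} m, fielder is at x = {fielder:.1f} m")
```

Under these assumptions the fielder ends up near the falling ball without ever having estimated its trajectory, which is the point of the anti-representational account.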
"We will never have to worry about a human mind going amok in cyberspace, and we will never achieve immortality through downloading."
Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.
One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.
Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.
Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.
This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story), they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).
This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.
Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.
Note: a proto-smart-architecture project by Cedric Price dating back to the 1970s, which sounds much more interesting than almost all contemporary smart architecture/cities proposals.
The latter are in most cases locked into highly functional approaches driven by the "paths of least resistance/friction", supported if not financed by data-hungry corporations. That's not a desirable future, from my point of view.
"(...). If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation (...)"
Cedric Price’s proposal for the Gilman Corporation was a series of relocatable structures on a permanent grid of foundation pads on a site in Florida.
Cedric Price asked John and Julia Frazer to work as computer consultants for this project. They produced a computer program to organize the layout of the site in response to changing requirements, and in addition suggested that a single-chip microprocessor should be embedded in every component of the building, to make it the controlling processor.
This would result in an “intelligent” building which controlled its own organisation in response to use. If not changed, the building would have become “bored” and proposed alternative arrangements for evaluation, learning how to improve its own organisation on the basis of this experience.
The Brief
Generator (1976-79) sought to create conditions for shifting, changing personal interactions in a reconfigurable and responsive architectural project.
It followed this open-ended brief:
"A building which will not contradict, but enhance, the feeling of being in the middle of nowhere; has to be accessible to the public as well as to private guests; has to create a feeling of seclusion conducive to creative impulses, yet…accommodate audiences; has to respect the wildness of the environment while accommodating a grand piano; has to respect the continuity of the history of the place while being innovative."
The proposal consisted of an orthogonal grid of foundation bases, tracks and linear drains, in which a mobile crane could place a kit of parts comprised of cubical module enclosures and infill components (i.e. timber frames to be filled with modular components ranging from movable cladding wall panels to furniture, services and fittings), screening posts, decks and circulation components (i.e. walkways on the ground level and suspended at roof level) in multiple arrangements.
When Cedric Price approached John and Julia Frazer he wrote:
"The whole intention of the project is to create an architecture sufficiently responsive to the making of a change of mind constructively pleasurable."
Generator Project
They proposed four programs that would use input from sensors attached to Generator’s components: the first three provided a “perpetual architect” drawing program that held the data and rules for Generator’s design; an inventory program that offered feedback on utilisation; an interface for “interactive interrogation” that let users model and prototype Generator’s layout before committing the design.
The powerful and curious boredom program served to provoke Generator’s users. “In the event of the site not being re-organized or changed for some time the computer starts generating unsolicited plans and improvements,” the Frazers wrote. These plans would then be handed off to Factor, the mobile crane operator, who would move the cubes and other elements of Generator. “In a sense the building can be described as being literally ‘intelligent’,” wrote John Frazer—Generator “should have a mind of its own.” It would not only challenge its users, facilitators, architect and programmer—it would challenge itself.
The Frazers’ research and techniques
The first proposal, associated with a level of ‘interactive’ relationship between ‘architect/machine’, would assist in drawing and with the production of additional information, somewhat implicit in the other parallel developments/proposals.
The second proposal, related to the level of ‘interactive/semiautomatic’ relationship of ‘client–user/machine’, was ‘a perpetual architect for carrying out instructions from the Polorizer’ and for providing, for instance, operative drawings to the crane operator/driver; and the third proposal consisted of a ‘[. . .] scheduling and inventory package for the Factor [. . .] it could act as a perpetual functional critic or commentator.’
The fourth proposal, relating to the third level of relationship, enabled the permanent actions of the users, while the fifth proposal consisted of a ‘morphogenetic program which takes suggested activities and arranges the elements on the site to meet the requirements in accordance with a set of rules.’
Finally, the last proposal was [. . .] an extension [. . .] to generate unsolicited plans, improvements and modifications in response to users’ comments, records of activities, or even by building in a boredom concept so that the site starts to make proposals about rearrangements of itself if no changes are made. The program could be heuristic and improve its own strategies for site organisation on the basis of experience and feedback of user response.
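To make the "boredom" concept tangible, here is a purely speculative sketch (mine; not the Frazers' programs, and every name and threshold below is invented): a layout manager that, when nothing on the site has changed for a while, starts proposing unsolicited rearrangements of the kit of parts on the foundation grid.

```python
# Speculative sketch of Generator's "boredom" idea (invented names and numbers).
# The Frazers' proposals also learned from user responses; this only shows the
# trigger-and-propose loop described above.

import random

FOUNDATION_PADS = [(x, y) for x in range(6) for y in range(6)]   # the permanent grid

class BoredGenerator:
    def __init__(self, modules, boredom_threshold=30):
        # initial layout: module name -> foundation pad
        self.layout = {m: FOUNDATION_PADS[i] for i, m in enumerate(modules)}
        self.days_unchanged = 0
        self.boredom_threshold = boredom_threshold       # days of stasis before it gets "bored"

    def day_passes(self, user_changed_layout=False):
        """Advance one day; return an unsolicited proposal if the site is 'bored'."""
        self.days_unchanged = 0 if user_changed_layout else self.days_unchanged + 1
        if self.days_unchanged >= self.boredom_threshold:
            return self.unsolicited_proposal()
        return None

    def unsolicited_proposal(self):
        """Suggest a rearrangement for the crane operator to evaluate: swap two modules."""
        proposal = dict(self.layout)
        a, b = random.sample(list(proposal), 2)
        proposal[a], proposal[b] = proposal[b], proposal[a]
        return proposal

if __name__ == "__main__":
    generator = BoredGenerator(modules=[f"cube_{i}" for i in range(8)])
    for day in range(1, 41):
        plan = generator.day_passes()
        if plan:
            print(f"day {day}: nothing has changed for a month -- proposing:", plan)
            break
```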
Self Builder Kit and the Cal Build Kit, Working Models
In a certain way, the idea of a computational aid in the Generator project also acknowledged and intended to promote some degree of unpredictability. Generator, even if unbuilt, had acquired a notable position as the first intelligent building project. Cedric Price and the Frazers’ collaboration constituted an outstanding exchange between architecture and computational systems. The Generator experience explored the impact of the new techno-cultural order of the Information Society in terms of participatory design and responsive building. At an early date, it took responsiveness further; and postulates like those behind the Generator, where the influence of new computational technologies reaches the level of experience and an aesthetics of interactivity, seem interesting and productive.
Resources
John Frazer, An Evolutionary Architecture, Architectural Association Publications, London 1995. http://www.aaschool.ac.uk/publications/ea/exhibition.html
Frazer to C. Price, (Letter mentioning ‘Second thoughts but using the same classification system as before’), 11 January 1979. Generator document folio DR1995:0280:65 5/5, Cedric Price Archives (Montreal: Canadian Centre for Architecture).
Note: following the two previous posts about algorithms and bots ("how do they ... ?"), here comes a third one.
Slightly different and not really dedicated to bots per se, but it can be considered as related to "machinic intelligence" nonetheless. This time it concerns techniques and algorithms developed to understand the brain (the BRAIN Initiative, or in Europe the competing Blue Brain Project).
In a funny reversal, scientists applied techniques and algorithms developed to track patterns of human intelligence in large data sets to the computer itself. How does a simple chip "compute information"? The results are surprising: the computer doesn't understand how the computer "thinks" (or rather works, in this case)!
This goes to confirm that the brain is certainly not a computer (made out of flesh)...
When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?
Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.
But those kinds of data sets don’t yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.
They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Game System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.
The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.
For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”
Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
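As a rough analogue of that lesion analysis (a toy example of my own, not the authors' code), one can "lesion" a circuit we fully understand, a one-bit adder built from primitive gates, and see that the resulting table of broken outputs says which gates matter but not how the adder actually computes.

```python
# Toy "lesion study" on a fully understood circuit: a 1-bit full adder.
# Knock out one gate at a time (stuck at 0) and record which outputs change.
# The table shows *that* each gate matters, not *how* sum and carry are computed --
# the kind of explanatory gap Jonas and Kording describe.

from itertools import product

# gate name -> (operation, input names); 'a', 'b', 'cin' are the primary inputs
CIRCUIT = {
    "x1":   ("xor", ("a", "b")),
    "sum":  ("xor", ("x1", "cin")),
    "a1":   ("and", ("a", "b")),
    "a2":   ("and", ("x1", "cin")),
    "cout": ("or",  ("a1", "a2")),
}
OPS = {"xor": lambda p, q: p ^ q, "and": lambda p, q: p & q, "or": lambda p, q: p | q}

def evaluate(inputs, lesioned=None):
    """Compute all gate values; a lesioned gate is forced to 0."""
    values = dict(inputs)
    for name, (op, (i, j)) in CIRCUIT.items():          # dict order is already topological
        values[name] = 0 if name == lesioned else OPS[op](values[i], values[j])
    return values["sum"], values["cout"]

def lesion_study():
    cases = [dict(zip(("a", "b", "cin"), bits)) for bits in product((0, 1), repeat=3)]
    for gate in CIRCUIT:
        affected = set()
        for case in cases:
            healthy_sum, healthy_cout = evaluate(case)
            lesioned_sum, lesioned_cout = evaluate(case, lesioned=gate)
            if healthy_sum != lesioned_sum:
                affected.add("sum")
            if healthy_cout != lesioned_cout:
                affected.add("cout")
        print(f"removing {gate!r} breaks: {sorted(affected) or 'nothing observable'}")

if __name__ == "__main__":
    lesion_study()
```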
In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”
It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.
First, simply amassing a handful of high-quality data sets of the brain may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don’t fully describe the brain’s function.
As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.
What if the compass app in your phone didn’t just visually point north but actually seemed to pull your hand in that direction?
Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.
Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz—with the crankshaft accelerating sharply in one direction and then easing back more slowly—a distinctive pulling sensation emerged in the direction of the acceleration.
With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).
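The principle can be sketched in a few lines (illustrative numbers only; not NTT's implementation): within each 40-hertz cycle, a short, strong acceleration in one direction is balanced by a longer, gentler acceleration in the other, so there is no net impulse per cycle even though the peaks in the two directions are very unequal, and that asymmetry is what the skin reads as a steady pull.

```python
# Minimal sketch of an asymmetric vibration cycle (all values assumed for
# illustration): a brief hard "pull" phase and a long soft "return" phase whose
# accelerations sum to zero over the cycle, leaving very unequal peaks.

FREQ = 40.0        # vibration cycle frequency, Hz
FS = 20000         # sample rate for this sketch, Hz
DUTY = 0.2         # fraction of the cycle given to the sharp "pull" phase (assumed)
PEAK_PULL = 20.0   # peak acceleration of the sharp phase, m/s^2 (assumed)

def one_cycle():
    n = int(FS / FREQ)                    # samples per 40 Hz cycle
    n_pull = int(n * DUTY)
    # return phase scaled so acceleration sums to zero over the cycle:
    # no net impulse, but very unequal peaks in the two directions.
    push = -PEAK_PULL * n_pull / (n - n_pull)
    return [PEAK_PULL] * n_pull + [push] * (n - n_pull)

if __name__ == "__main__":
    a = one_cycle()
    print(f"samples per cycle: {len(a)}")
    print(f"peak toward the 'pull' direction: {max(a):.1f} m/s^2")
    print(f"peak in the return direction:     {min(a):.1f} m/s^2")
    print(f"mean over one cycle:              {sum(a)/len(a):.2e} m/s^2")
```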
The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says. “It’s a strong sensation of force.”
Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.
Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”
Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch).
The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”
Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos
(…)
The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.
(…)
This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
There is a great, undiscovered potential in virtual reality development. Sure, you can create lifelike virtual worlds, but you can also make players sick. Oculus VR founder Palmer Luckey and VP of product Nate Mitchell hosted a panel at GDC Europe last week, instructing developers on how to avoid the VR development pitfalls that make players uncomfortable. It was a lovely service for VR developers, but we saw a much greater opportunity. Inadvertently, the panel explained how to make players as queasy and uncomfortable as possible.
And so, we now present the VR developer's guide to manipulating your players right down to the vestibular level. Just follow these tips and your players will be tossing their cookies in minutes.
Note: If you'd rather not make your players horribly ill and angry, just do the opposite of everything below.
Include lots of small, tight spaces
In virtual reality, small and closed-off areas truly feel small, said Luckey. "Small corridors are really claustrophobic. It's actually one of the worst things you can do for most people in VR, is to put them in a really small corridor with the walls and the ceiling closing in on them, and then tell them to move rapidly through it."
Meanwhile, open spaces are a "relief," he said, so you'll want to avoid those.
Possible applications: Air duct exploration game.
Create a user interface that neglects depth and head-tracking
Virtual reality is all about depth and immersion, said Mitchell. So, if you want to break that immersion, your ideal user interface should be as traditional and flat as possible.
For example, put targeting reticles on a 2D plane in the center of a player's field of view. Maybe set it up so the reticle floats a couple of feet away from the player's face. "That is pretty uncomfortable for most players and they'll just try to grapple with what do they converge on: That near-field reticle or that distant mech that they're trying to shoot at?" To sample this effect yourself, said Mitchell, you can hold your thumb in front of your eyes. When you focus on a distant object, your thumb will appear to split in two. Now just imagine that happening to something as vital as a targeting reticle!
You might think that setting the reticle closer to the player will make things even worse, and you're right. "The sense of personal space can make people actually feel uncomfortable, like there's this TV floating right in front of their face that they try to bat out of the way." Mitchell said a dynamic reticle that paints itself onto in-game surfaces feels much more natural (sketched in code at the end of this section), so don't do that.
You can use similar techniques to create an intrusive, annoying heads-up display. Place a traditional HUD directly in front of the player's face. Again, they'll have to deal with double vision as their eyes struggle to focus on different elements of the game. Another option, since VR has a much wider field of view than monitors, is to put your HUD elements in the far corners of the display, effectively putting it into a player's peripheral vision. "Suddenly it's too far for the player to glance at, and they actually can't see pretty effectively." What's more, when players try to turn their head to look at it, the HUD will turn with them. Your players will spin around wildly as they desperately try to look at their ammo counter.
Possible applications: Any menu or user interface from Windows 3.1.
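For contrast, here is an engine-agnostic sketch of the dynamic reticle Mitchell recommends (and that this satirical guide tells you to avoid). All names and numbers are illustrative, not any particular engine's API: raycast along the gaze and draw the reticle at the depth of whatever the player is actually aiming at, so both eyes converge on the same distance.

```python
# Engine-agnostic sketch of a depth-aware reticle: cast a ray along the view
# direction, and place the reticle at the nearest hit instead of on a fixed
# near-field plane. Illustrative scene: a single sphere standing in for a mech.

import math
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple          # (x, y, z)
    radius: float

def ray_sphere_hit(origin, direction, sphere):
    """Return the distance along the (unit) ray to the sphere, or None if it misses."""
    ox, oy, oz = (origin[i] - sphere.center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - sphere.radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def reticle_position(camera_pos, view_dir, scene, far_default=50.0):
    """Place the reticle at the nearest hit along the gaze, else far away."""
    hits = [d for d in (ray_sphere_hit(camera_pos, view_dir, s) for s in scene) if d]
    depth = min(hits) if hits else far_default
    return tuple(camera_pos[i] + view_dir[i] * depth for i in range(3))

if __name__ == "__main__":
    scene = [Sphere(center=(0.0, 0.0, 10.0), radius=1.0)]   # a "mech" 10 m away
    cam, gaze = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    print("reticle drawn at:", reticle_position(cam, gaze, scene))
```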
Disable head-tracking or take control away from the player
"Simulator sickness," when players become sick in a VR game, is actually the inverse of motion sickness, said Mitchell. Motion sickness is caused by feeling motion without being able to see it ? Mitchell cited riding on a boat rocking in the ocean as an example. "There's all this motion, but visually you don't perceive that the floor, ceiling and walls are moving. And that's what that sensory disconnect ? mainly in your vestibular senses ? is what creates that conflict that makes you dizzy." Simulator sickness he said, is the opposite. "You're in an environment where you perceive there to be motion, visually, but there is no motion. You're just sitting in a chair."
If you disable head-tracking in part of your game, it artificially creates just that sort of sensory disconnect. Furthermore, if you move the camera without player input, say to display a cut-scene, it can be very disorienting. When you turn your head in VR, you expect the world to turn with you. When it doesn't, you can have an uncomfortable reaction.
Possible applications: Frequent, Unskippable Cutscenes: The Game.
Feature plenty of backwards and lateral movement
Forward movement in a VR game tends not to cause problems, but many users have trouble dealing with backwards movement, said Mitchell. "You can imagine sometimes if you sit on a train and you perceive no motion, and the train starts moving backwards very quickly, or you see another car pulling off, all of those different sensations are very similar to that discomfort that comes from moving backwards in space." Lateral movement (i.e. sideways movement) has a similar effect, Mitchell said. "Being able to sort of strafe on a dime doesn't always cause the most comfortable experience."
Possible applications: Backwards roller coaster simulator.
Quick changes in altitude
"Quick changes in altitude do seem to cause disorientation," said Mitchell. Exactly why that happens isn't really understood, but it seems to hold true among VR developers. This means that implementing stairs or ramps into your games can throw players for a loop ? which, remember, is exactly what we're after.Don't use closed elevators, as these prevent users from perceiving the change in altitude, and is generally much more comfortable.
Possible applications: A VR version of the last level from Ghostbusters on NES. Also: Backwards roller coaster simulator.
Don't include visual points of reference
When players look down in VR, they expect to see their character's body. Likewise, in a space combat or mech game, they expect to see the insides of the cockpit when they look around. "Having a visual identity is really crucial to VR. People don't want to look down and be a disembodied head." For the purposes of this guide, that makes a disembodied head the ideal avatar for aggravating your players.
Possible applications: Disembodied Heads ... in ... Spaaaaaace. Also: Disembodied head in a backwards roller coaster.
Shift the horizon line
Okay, this is probably one of the most devious ways to manipulate your players. Mitchell imagines a simulation of sitting on a beach, watching the sunset. "If you subtly tilt the horizon line very, very minimally, a couple degrees, the player will start to become dizzy and disoriented and won't know why."
Possible applications: Drunk at the Beach.
Shoot for a low frame rate, disable V-sync
"With VR, having the world tear non-stop is miserable." Enough said. Furthermore, a low frame rate can be disorienting as well. When players move their heads and the world doesn't move at the same rate of speed, its jarring to their natural senses.
Possible applications: Limitless.
In Closing
Virtual reality is still a fledgling technology and, as Luckey and Mitchell explained, there's still a long way to go before both players and developers fully understand it. There are very few points of reference, and there is no widely established design language that developers can draw from.
What Luckey and Mitchell have detailed - and what we've decided to ignore - is a basic set of guidelines on maintaining player comfort in the VR space. Fair warning though, if you really want to design a game that makes players sick, the developers of AaaaaAAaaaAAAaaAAAAaAAAAA!!! already beat you to it.
New Scientist published an interesting article this week about the influence of the body's positioning in space on one's thought processes. According to recent research, space and the body are actually much more connected to the mind than has been traditionally accepted. The article cites a study by researchers at the University of Melbourne in Parkville, Australia, which found that the eye movements of 12 right-handed male subjects could be used to predict the size of each number in a series that the participants were asked to generate; left and downwards meant a smaller number than the previous one, while up and to the right meant a larger number. A separate study at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, asked 24 students to move marbles from a box on a higher shelf to one on a lower shelf while answering a neutral question, such as "tell me what happened yesterday". The results showed that the subjects were more likely to talk of positive events when moving marbles upwards, and negative events when moving them downwards.
The notion that our bodies' direct physical relationship to space can influence thoughts is exciting, and reopens arguments against the ontological distinction between mind and body that is most commonly identified with Descartes, as well as associated questions of physical determinism vs. indeterminism. Going further, I suspect that less overt interactions between the body and its surrounding environment could also be included in this discussion, such as the psychological perceptions of temperature, humidity, and other similarly invisible environmental characteristics. The New Scientist article also references a 2008 study from the Rotman School of Management in Toronto that shows that social exclusion has the effect of making people feel colder. The issue of causality abounds here: if social exclusion or inclusion affects a person's temperature perception, would variant temperatures also be able to yield varying types of associated social behavior? Could we extend this discussion to the somewhat perverse notion that a carefully controlled interior environment is actually a form of mind control? ...
A drawing from René Descartes' Meditations on First Philosophy illustrates his belief that the immaterial (mind, soul, "animal spirit") and material (body) interact through the pineal gland in the center of the brain.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as archive, references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.