Songdo in South Korea: a ‘smart city’ whose roads and water, waste and electricity systems are dense with electronic sensors. Photograph: Hotaik Sung/Alamy.
A woman drives to the outskirts of the city and steps directly on to a train; her electric car then drives itself off to park and recharge. A man has a heart attack in the street; the emergency services send a drone equipped with a defibrillator to arrive crucial minutes before an ambulance can. A family of flying maintenance robots lives atop an apartment block – able to autonomously repair cracks or leaks and clear leaves from the gutters.
Such utopian, urban visions help drive the “smart city” rhetoric that has, for the past decade or so, been promulgated most energetically by big technology, engineering and consulting companies. The movement is predicated on ubiquitous wireless broadband and the embedding of computerised sensors into the urban fabric, so that bike racks and lamp posts, CCTV and traffic lights, as well as geeky home appliances such as internet fridges and remote-controlled heating systems, become part of the so-called “internet of things” (the global market for which is now estimated at $1.7tn). Better living through biochemistry gives way to a dream of better living through data. You can even take an MSc in Smart Cities at University College London.
Yet there are dystopian critiques, too, of what this smart city vision might mean for the ordinary citizen. The phrase itself has sparked a rhetorical battle between techno-utopianists and postmodern flâneurs: should the city be an optimised panopticon, or a melting pot of cultures and ideas?
And what role will the citizen play? That of unpaid data-clerk, voluntarily contributing information to an urban database that is monetised by private companies? Is the city-dweller best visualised as a smoothly moving pixel, travelling to work, shops and home again, on a colourful 3D graphic display? Or is the citizen rightfully an unpredictable source of obstreperous demands and assertions of rights? “Why do smart cities offer only improvement?” asks the architect Rem Koolhaas. “Where is the possibility of transgression?”
Smart beginnings: a crowd watches as new, automated traffic lights are erected at Ludgate Circus, London, in 1931. Photograph: Fox Photos/Getty Images
The smart city concept arguably dates back at least as far as the invention of automated traffic lights, which were first deployed in 1922 in Houston, Texas. Leo Hollis, author of Cities Are Good For You, says the one unarguably positive achievement of smart city-style thinking in modern times is the train indicator boards on the London Underground. But in the last decade, thanks to the rise of ubiquitous internet connectivity and the miniaturisation of electronics in such now-common devices as RFID tags, the concept seems to have crystallised into an image of the city as a vast, efficient robot – a vision that originated, according to Adam Greenfield at LSE Cities, with giant technology companies such as IBM, Cisco and Software AG, all of whom hoped to profit from big municipal contracts.
“The notion of the smart city in its full contemporary form appears to have originated within these businesses,” Greenfield notes in his 2013 book Against the Smart City, “rather than with any party, group or individual recognised for their contributions to the theory or practice of urban planning.”
Whole new cities, such as Songdo in South Korea, have already been constructed according to this template. Its buildings have automatic climate control and computerised access; its roads and water, waste and electricity systems are dense with electronic sensors to enable the city’s brain to track and respond to the movement of residents. But such places retain an eerie and half-finished feel to visitors – which perhaps shouldn’t be surprising. According to Anthony M Townsend, in his 2013 book Smart Cities, Songdo was originally conceived as “a weapon for fighting trade wars”; the idea was “to entice multinationals to set up Asian operations at Songdo … with lower taxes and less regulation”.
In India, meanwhile, prime minister Narendra Modi has promised to build no fewer than 100 smart cities – a competitive response, in part, to China’s inclusion of smart cities as a central tenet of its grand urban plan. Yet for the near-term at least, the sites of true “smart city creativity” arguably remain the planet’s established metropolises such as London, New York, Barcelona and San Francisco. Indeed, many people think London is the smartest city of them all just now — Duncan Wilson of Intel calls it a “living lab” for tech experiments.
So what challenges face technologists hoping to weave cutting-edge networks and gadgets into centuries-old streets and deeply ingrained social habits and patterns of movement? This was the central theme of the recent “Re.Work Future Cities Summit” in London’s Docklands – for which two-day public tickets ran to an eye-watering £600.
The event was structured like a fast-cutting series of TED talks, with 15-minute investor-friendly presentations on everything from “emotional cartography” to biologically inspired buildings. Not one non-Apple-branded laptop could be spotted among the audience, and at least one attendee was seen confidently sporting the telltale fat cyan arm of Google Glass on his head.
“Instead of a smart phone, I want you all to have a smart drone in your pocket,” said one entertaining robotics researcher, before tossing up into the auditorium a camera-equipped drone that buzzed around like a fist-sized mosquito. Speakers enthused about the transport app Citymapper, and how the city of Zurich is both futuristic and remarkably civilised. People spoke about the “huge opportunity” represented by expanding city budgets for technological “solutions”.
Usman Haque’s project Thingful is billed as a ‘search engine for the internet of things’
Strikingly, though, many of the speakers took care to denigrate the idea of the smart city itself, as though it was a once-fashionable buzzphrase that had outlived its usefulness. This was done most entertainingly by Usman Haque, of the urban consultancy Umbrellium. The corporate smart-city rhetoric, he pointed out, was all about efficiency, optimisation, predictability, convenience and security. “You’ll be able to get to work on time; there’ll be a seamless shopping experience, safety through cameras, et cetera. Well, all these things make a city bearable, but they don’t make a city valuable.”
As the tech companies bid for contracts, Haque observed, the real target of their advertising is clear: “The people it really speaks to are the city managers who can say, ‘It wasn’t me who made the decision, it was the data.’”
Of course, these speakers who rejected the corporate, top-down idea of the smart city were themselves demonstrating their own technological initiatives to make the city, well, smarter. Haque’s project Thingful, for example, is billed as a search engine for the internet of things. It could be used in the morning by a cycle commuter: glancing at a personalised dashboard of local data, she could check local pollution levels and traffic, and whether there are bikes in the nearby cycle-hire rack.
“The smart city was the wrong idea pitched in the wrong way to the wrong people,” suggested Dan Hill, of urban innovators the Future Cities Catapult. “It never answered the question: ‘How is it tangibly, materially going to affect the way people live, work, and play?’” (His own work includes Cities Unlocked, an innovative smartphone audio interface that can help visually impaired people navigate the streets.) Hill is involved with Manchester’s current smart city initiative, which includes apparently unglamorous things like overhauling the Oxford Road corridor – a bit of “horrible urban fabric”. This “smart stuff”, Hill tells me, “is no longer just IT – or rather IT is too important to be called IT any more. It’s so important you can’t really ghettoise it in an IT city. A smart city might be a low-carbon city, or a city that’s easy to move around, or a city with jobs and housing. Manchester has recognised that.”
One take-home message of the conference seemed to be that whatever the smart city might be, it will be acceptable as long as it emerges from the ground up: what Hill calls “the bottom-up or citizen-led approach”. But of course, the things that enable that approach – a vast network of sensors amounting to millions of electronic ears, eyes and noses – also potentially enable the future city to be a vast arena of perfect and permanent surveillance by whoever has access to the data feeds.
Inside Rio de Janeiro’s centre of operations: ‘a high-precision control panel for the entire city’. Photograph: David Levene
One only has to look at the hi-tech nerve centre that IBM built for Rio de Janeiro to see this Nineteen Eighty-Four-style vision already alarmingly realised. It is festooned with screens like a Nasa Mission Control for the city. As Townsend writes: “What began as a tool to predict rain and manage flood response morphed into a high-precision control panel for the entire city.” He quotes Rio’s mayor, Eduardo Paes, as boasting: “The operations centre allows us to have people looking into every corner of the city, 24 hours a day, seven days a week.”
What’s more, if an entire city has an “operating system”, what happens when it goes wrong? The one thing that is certain about software is that it crashes. The smart city, according to Hollis, is really just a “perpetual beta city”. We can be sure that accidents will happen – driverless cars will crash; bugs will take down whole transport subsystems or the electricity grid; drones could hit passenger aircraft. How smart will the architects of the smart city look then?
A less intrusive way to make a city smarter might be to give those who govern it a way to try out their decisions in virtual reality before inflicting them on live humans. This is the idea behind city-simulation company Simudyne, whose projects include detailed computerised models for planning earthquake response or hospital evacuation. It’s like the strategy game SimCity – for real cities. And indeed Simudyne now draws a lot of its talent from the world of videogames. “When we started, we were just mathematicians,” explains Justin Lyon, Simudyne’s CEO. “People would look at our simulations and joke that they were inscrutable. So five or six years ago we developed a new system which allows you to make visualisations – pretty pictures.” The simulation can now be run as an immersive first-person gameworld, or as a top-down SimCity-style view, where “you can literally drop policy on to the playing area”.
Another serious use of “pretty pictures” is exemplified by the work of ScanLAB Projects, which uses Lidar and ground-penetrating radar to make 3D visualisations of real places. They can be used for art installations and entertainment: for example, mapping underground ancient Rome for the BBC. But the way an area has been used over time, both above and below ground, can also be presented as a layered historical palimpsest, which can serve the purposes of archaeological justice and memory – as with ScanLAB’s Living Death Camps project with Forensic Architecture, on two concentration-camp sites in the former Yugoslavia.
The former German pavilion at Staro Sajmište, Belgrade – produced from terrestrial laser scanning and ground-penetrating radar as part of the Living Death Camps project. Photograph: ScanLAB Projects
For Simudyne’s simulations, meanwhile, the visualisations work to “gamify” the underlying algorithms and data, so that anyone can play with the initial conditions and watch the consequences unfold. Will there one day be convergence between this kind of thing and the elaborately realistic modelled cities that are built for commercial videogames? “There’s absolutely convergence,” Lyon says. A state-of-the-art urban virtual reality such as the recreation of Chicago in this year’s game Watch Dogs requires a budget that runs to scores of millions of dollars. But, Lyon foresees, “Ten years from now, what we see in Watch Dogs today will be very inexpensive.”
What if you could travel through a visually convincing city simulation wearing the VR headset, Oculus Rift? When Lyon first tried one, he says, “Everything changed for me.” Which prompts the uncomfortable thought that when such simulations are indistinguishable from the real thing (apart from the zero possibility of being mugged), some people might prefer to spend their days in them. The smartest city of the future could exist only in our heads, as we spend all our time plugged into a virtual metropolitan reality that is so much better than anything physically built, and fail to notice as the world around us crumbles.
In the meantime, when you hear that cities are being modelled down to individual people – or what in the model are called “agents” – you might still feel a jolt of the uncanny, and insist that free will makes your actions in the city unpredictable. To which Lyon replies: “They’re absolutely right as individuals, but collectively they’re wrong. While I can’t predict what you are going to do tomorrow, I can have, with some degree of confidence, a sense of what the crowd is going to do, what a group of people is going to do. Plus, if you’re pulling in data all the time, you use that to inform the data of the virtual humans.
“Let’s say there are 30 million people in London: you can have a simulation of all 30 million people that very closely mirrors but is not an exact replica of London. You have the 30 million agents, and then let’s have a business-as-usual normal commute, let’s have a snowstorm, let’s shut down a couple of train lines, or have a terrorist incident, an earthquake, and so on.” Lyon says you will get a highly accurate sense of how people, en masse, will respond to these scenarios. “While I’m not interested in a specific individual, I’m interested in the emergent behaviour of the crowd.”
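Lyon’s distinction between unpredictable individuals and a predictable crowd is the core intuition behind agent-based simulation, and it is easy to see in miniature. Below is a minimal Python sketch; the scenario names, commute numbers and disruption model are invented for illustration and bear no relation to Simudyne’s actual software. No single agent’s commute can be predicted, but the aggregate statistics for each scenario are stable run after run.

```python
import random

NUM_AGENTS = 10_000

def commute_time(disruption=0.0):
    """One agent's commute in minutes: a base time plus random variation,
    stretched if the agent happens to be caught in a city-wide disruption."""
    base = random.gauss(40, 15)          # typical door-to-door commute
    if random.random() < disruption:     # this agent hits the disruption
        base *= random.uniform(1.5, 3)   # and takes far longer
    return max(base, 5)

def run_scenario(name, disruption):
    times = [commute_time(disruption) for _ in range(NUM_AGENTS)]
    late = sum(t > 60 for t in times) / NUM_AGENTS
    print(f"{name}: mean commute {sum(times) / len(times):.0f} min, "
          f"{late:.0%} of agents over an hour")

# No individual agent is predictable, but these aggregates barely move
# between runs -- the "emergent behaviour of the crowd" Lyon refers to.
run_scenario("business as usual", disruption=0.0)
run_scenario("two train lines shut", disruption=0.15)
run_scenario("snowstorm", disruption=0.4)
```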
City-simulation company Simudyne creates computerised models ‘with pretty pictures’ to aid disaster-response planning
But what about more nefarious bodies who are interested in specific individuals? As citizens stumble into a future where they will be walking around a city dense with sensors, cameras and drones tracking their every movement – even whether they are smiling (as has already been tested at the Cheltenham Jazz Festival) or feeling gloomy – there is a ticking time-bomb of arguments about surveillance and privacy that will dwarf any previous conversations about Facebook or even, perhaps, government intelligence agencies scanning our email. Unavoidable advertising spam everywhere you go, as in Minority Report, is just the most obvious potential annoyance. (There have already been “smart billboards” that recognised Minis driving past and said hello to them.) The smart city might be a place like Rio on steroids, where you can never disappear.
“If you have a mobile phone, and the right sensors are deployed across the city, people have demonstrated the ability to track those individual phones,” Lyon points out. “And there’s nothing that would prevent you from visualising that movement in a SimCity-like landscape, like in Watch Dogs where you see an avatar moving through the city and you can call up their social-media profile. If you’re trying to search a very large dataset about how someone’s moving, it’s very hard to get your head around it, but as soon as you fire up a game-style visualisation, it’s very easy to see, ‘Oh, that’s where they live, that’s where they work, that’s where their mistress must be, that’s where they go to drink a lot.’”
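The inference Lyon describes, from raw pings to “that’s where they live, that’s where they work”, can be alarmingly simple. Here is a hedged sketch with invented data: bucket one phone’s location pings by time of day, then take the most-visited cell in each window.

```python
from collections import Counter

# Invented pings from one tracked phone: (grid cell, hour of day)
pings = [("cell_A", 2), ("cell_A", 3), ("cell_A", 23), ("cell_B", 10),
         ("cell_B", 11), ("cell_B", 15), ("cell_A", 1), ("cell_B", 14)]

night = Counter(cell for cell, hour in pings if hour < 6 or hour >= 22)
day   = Counter(cell for cell, hour in pings if 9 <= hour < 18)

print("probably home:", night.most_common(1)[0][0])  # cell_A
print("probably work:", day.most_common(1)[0][0])    # cell_B
```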
This is potentially an issue with open-data initiatives such as those currently under way in Bristol and Manchester, which is making publicly available the data it holds about city parking, procurement and planning, public toilets and the fire service. The democratic motivation of this strand of smart-city thinking seems unimpugnable: the creation of municipal datasets is funded by taxes on citizens, so citizens ought to have the right to use them. When presented in the right way – “curated”, if you will, by the city itself, with a sense of local character – such information can help to bring “place back into the digital world”, says Mike Rawlinson of consultancy City ID, which is working with Bristol on such plans.
But how safe is open data? It has already been demonstrated, for instance, that the openly accessible data of London’s cycle-hire scheme can be used to track individual cyclists. “There is the potential to see it all as Big Brother,” Rawlinson says. “If you’re releasing data and people are reusing it, under what purpose and authorship are they doing so?” There needs, Hill says, to be a “reframed social contract”.
The interface of Simudyne’s City Hospital EvacSim
Sometimes, at least, there are good reasons to track particular individuals. Simudyne’s hospital-evacuation model, for example, needs to be tied in to real data. “Those little people that you see [on screen], those are real people, that’s linking to the patient database,” Lyon explains – because, for example, “we need to be able to track this poor child that’s been burned.” But tracking everyone is a different matter: “There could well be a backlash of people wanting literally to go off-grid,” Rawlinson says. Disgruntled smart citizens, unite: you have nothing to lose but your phones.
In truth, competing visions of the smart city are proxies for competing visions of society, and in particular about who holds power in society. “In the end, the smart city will destroy democracy,” Hollis warns. “Like Google, they’ll have enough data not to have to ask you what you want.”
You sometimes see in the smart city’s prophets a kind of casual assumption that politics as we know it is over. One enthusiastic presenter at the Future Cities Summit went so far as to say, with a shrug: “Internet eats everything, and internet will eat government.” In another presentation, about a new kind of “autocatalytic paint” for street furniture that “eats” noxious pollutants such as nitrogen oxides, an engineer in a video clip complained: “No one really owns pollution as a problem.” Except that national and local governments do already own pollution as a problem, and have the power to tax and regulate it. Replacing them with smart paint ain’t necessarily the smartest thing to do.
And while some tech-boosters celebrate the power of companies such as Uber – the smartphone-based unlicensed-taxi service now banned in Spain and New Delhi, and being sued in several US states – to “disrupt” existing transport infrastructure, Hill asks reasonably: “That Californian ideology that underlies that user experience, should it really be copy-pasted all over the world? Let’s not throw away the idea of universal service that Transport for London adheres to.”
Perhaps the smartest of smart city projects needn’t depend exclusively – or even at all – on sensors and computers. At Future Cities, Julia Alexander of Siemens nominated as one of the “smartest” cities in the world the once-notorious Medellín in Colombia, site of innumerable gang murders a few decades ago. Its problem favelas were reintegrated into the city not with smartphones but with publicly funded sports facilities and a cable car connecting them to the city. “All of a sudden,” Alexander said, “you’ve got communities interacting” in a way they never had before. Last year, Medellín – now the oft-cited poster child for “social urbanism” – was named the most innovative city in the world by the Urban Land Institute.
One sceptical observer of many presentations at the Future Cities Summit, Jonathan Rez of the University of New South Wales, suggests that “a smarter way” to build cities “might be for architects and urban planners to have psychologists and ethnographers on the team.” That would certainly be one way to acquire a better understanding of what technologists call the “end user” – in this case, the citizen. After all, as one of the tribunes asks the crowd in Shakespeare’s Coriolanus: “What is the city but the people?”
Many smartphone apps use a device’s sensors to try to measure people’s physical well-being, for example by counting every step they take. A new app developed by researchers at Dartmouth College suggests that a phone’s sensors can also be used to peek inside a person’s mind and gauge mental health.
When 48 students let the app collect information from their phones for an entire 10-week term, patterns in the data matched up with changes in stress, depression, and loneliness that showed up when they took the kind of surveys doctors use to assess their patients’ mood and mental health. Trends in the phone data also correlated with students’ grades.
The results suggest that smartphone apps could offer people and doctors new ways to manage mental well-being, says Andrew Campbell, the Dartmouth professor who led the research.
Previous studies have shown that custom-built mobile gadgets could indirectly gauge mental states. The Dartmouth study, however, used Android smartphones like those owned by millions of people, says Campbell. “We’re the first to use standard phones and sensors that are just carried without any user interaction,” he says. A paper on the research was presented last week at the ACM International Joint Conference on Pervasive and Ubiquitous Computing in Seattle.
Campbell’s app, called StudentLife, collects data including a phone’s motion and location and the timing of calls and texts, and occasionally activates the microphone on a device to run software that can tell if a conversation is taking place nearby. Algorithms process that information into logs of a person’s physical activity, communication patterns, sleeping patterns, visits to different places, and an estimate of how often they were involved in face-to-face conversation. Many changes in those patterns were found to correlate significantly with changes in measures of depression, loneliness, and stress. For example, decline in exposure to face-to-face conversations was indicative of depression.
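To give a concrete flavour of that analysis, here is a minimal sketch of the correlation step, with invented numbers. The real StudentLife pipeline is far richer, but the statistic underlying a claim like “decline in face-to-face conversation was indicative of depression” can be as simple as a Pearson coefficient.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One invented record per student: average daily minutes of face-to-face
# conversation, and a questionnaire-style depression score (higher = worse).
conversation_minutes = [120, 95, 60, 45, 150, 30, 80, 20]
depression_scores    = [  3,  5,  9, 12,   2, 15,  6, 18]

print(f"r = {pearson(conversation_minutes, depression_scores):.2f}")
# Strongly negative: students with less conversation report higher scores.
```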
The surveys used as a benchmark for mental health in the study are normally used by doctors to assess patients who seek help for mental health conditions. In the future, data from a person’s phone could provide a richer picture to augment a one-off survey when a person seeks help, says Campbell. He is also planning further research into how data from his app might be used to tip off individuals or their caregivers when behavioral patterns indicate that their mental health could be changing. In the case of students, that approach could provide a way to reduce dropout rates or help people improve their academic performance, says Campbell.
“Intervention is the next step,” he says. “It could be something simple like telling a person they should go and engage in conversations to improve their mood, or that, statistically, if you party only three nights a week you will get more decent grades.” Campbell is also working on a study testing whether a similar app could help predict relapses in people with schizophrenia.
A startup called Ginger.io, whose app is similar to Campbell’s, is already testing such ideas with some health-care providers. In one trial with diabetics, changes in a person’s behavior triggered an alert to nurses, who reached out to make sure that the patient was adhering to his medication (see “Smartphone Tracker Gives Doctors Remote Viewing Powers”).
Anmol Madan, CEO and cofounder of Ginger.io, says the Dartmouth study adds to the evidence that those ideas are valuable. However, he notes, much larger studies are needed to really convince doctors and health-care providers to adopt a new approach. Ginger.io has found similar associations between its own data and clinical scales for depression, says Madan, although results have not been published.
Both Ginger.io and the Dartmouth work were inspired by research at the MIT Media Lab that established the idea that data from personal devices offers a new way to study human behavior (see “TR10: Social Physics”). Yaniv Altshuler, a researcher who helped pioneer that approach, says the Dartmouth study is an interesting addition to that body of work, but it’s also a reminder that there will be downsides to the mobile data trove. Being able to use mobile devices to learn very sensitive information about people could raise new privacy risks.
Campbell—who got clearance for his study from an ethical review board—notes that his results show how existing privacy rules can be left behind by data mining. A health-care provider collecting data using standard mental health surveys would be bound by HIPAA data privacy regulations in the United States. It’s less clear what rules apply when that same data is derived from a phone app. “If you have signals you can use to work out, say, that I am a manic depressive, what governs use of that data is not well accepted,” he says.
Whatever the answer, apps that log the kind of rich data Campbell collected are likely to become more common. Smartphone sensors have become much more energy-efficient, so detailed, round-the-clock data logging is now feasible without wiping out battery life. “As of six months ago phones got to the point where we could do 24/7 sensing,” says Campbell. “All the technology has now arrived that you can do these things.”
The future we see in Her is one where technology has dissolved into everyday life.
A few weeks into the making of Her, Spike Jonze’s new flick about romance in the age of artificial intelligence, the director had something of a breakthrough. After poring over the work of Ray Kurzweil and other futurists trying to figure out how, exactly, his artificially intelligent female lead should operate, Jonze arrived at a critical insight: Her, he realized, isn’t a movie about technology. It’s a movie about people. With that, the film took shape. Sure, it takes place in the future, but what it’s really concerned with are human relationships, as fragile and complicated as they’ve been from the start.
Of course on another level Her is very much a movie about technology. One of the two main characters is, after all, a consciousness built entirely from code. That aspect posed a unique challenge for Jonze and his production team: They had to think like designers. Assuming the technology for AI was there, how would it operate? What would the relationship with its “user” be like? How do you dumb down an omniscient interlocutor for the human on the other end of the earpiece?
When AI is cheap, what does all the other technology look like?
For production designer KK Barrett, the man responsible for styling the world in which the story takes place, Her represented another sort of design challenge. Barrett’s previously brought films like Lost in Translation, Marie Antoinette, and Where the Wild Things Are to life, but the problem here was a new one, requiring more than a little crystal ball-gazing. The big question: In a world where you can buy AI off the shelf, what does all the other technology look like?
In Her, the future almost looks more like the past.
Technology Shouldn’t Feel Like Technology
One of the first things you notice about the “slight future” of Her, as Jonze has described it, is that there isn’t all that much technology at all. The main character Theo Twombly, a writer for the bespoke love letter service BeautifulHandwrittenLetters.com, still sits at a desktop computer when he’s at work, but otherwise he rarely has his face in a screen. Instead, he and his fellow future denizens are usually just talking, either to each other or to their operating systems via a discreet earpiece, itself more like a fancy earplug than anything resembling today’s cyborgian Bluetooth headsets.
In this “slight future” world, things are low-tech everywhere you look. The skyscrapers in this futuristic Los Angeles haven’t turned into towering video billboards a la Blade Runner; they’re just buildings. Instead of a flat screen TV, Theo’s living room just has nice furniture.
This is, no doubt, partly an aesthetic concern; a world mediated through screens doesn’t make for very rewarding mise en scène. But as Barrett explains it, there’s a logic to this technological sparseness. “We decided that the movie wasn’t about technology, or if it was, that the technology should be invisible,” he says. “And not invisible like a piece of glass.” Technology hasn’t disappeared, in other words. It’s dissolved into everyday life.
Here’s another way of putting it. It’s not just that Her, the movie, is focused on people. It also shows us a future where technology is more people-centric. The world Her shows us is one where the technology has receded, or one where we’ve let it recede. It’s a world where the pendulum has swung back in the other direction, where a new generation of designers and consumers have accepted that technology isn’t an end in itself–that it’s the real world we’re supposed to be connecting to. (Of course, that’s the ideal; as we see in the film, in reality, making meaningful connections is as difficult as ever.)
Theo still has a desktop display at work and at home, but elsewhere technology is largely invisible.
Jonze had help in finding the contours of this slight future, including conversations with designers from New York-based studio Sagmeister & Walsh and an early meeting with Elizabeth Diller and Ricardo Scofidio, principals at architecture firm DS+R. As the film’s production designer, Barrett was responsible for making it a reality.
Throughout that process, he drew inspiration from one of his favorite books, a visual compendium of futuristic predictions from various points in history. Basically, the book reminded Barrett what not to do. “It shows a lot of things and it makes you laugh instantly, because you say, ‘those things never came to pass!’” he explains. “But often times, it’s just because they over-thought it. The future is much simpler than you think.”
That’s easy to say in retrospect, looking at images of Rube Goldbergian kitchens and scenes of commute by jet pack. But Jonze and Barrett had the difficult task of extrapolating that simplification forward from today’s technological moment.
Theo’s home gives us one concise example. You could call it a “smart house,” but there’s little outward evidence of it. What makes it intelligent isn’t the whizbang technology but rather simple, understated utility. Lights, for example, turn off and on as Theo moves from room to room. There’s no app for controlling them from the couch; no control panel on the wall. It’s all automatic. Why? “It’s just a smart and efficient way to live in a house,” says Barrett.
Today’s smartphones were another object of Barrett’s scrutiny. “They’re advanced, but in some ways they’re not advanced whatsoever,” he says. “They need too much attention. You don’t really want to be stuck engaging them. You want to be free.” In Barrett’s estimation, the smartphones just around the corner aren’t much better. “Everyone says we’re supposed to have a curved piece of flexible glass. Why do we need that? Let’s make it more substantial. Let’s make it something that feels nice in the hand.”
Theo’s smartphone was designed to be “substantial,” something that first and foremost “feels good in the hand.”
Theo’s phone in the film is just that–a handsome hinged device that looks more like an art deco cigarette case than an iPhone. He uses it far less frequently than we use our smartphones today; it’s functional, but it’s not ubiquitous. As an object, it’s more like a nice wallet or watch. In terms of industrial design, it’s an artifact from a future where gadgets don’t need to scream their sophistication–a future where technology has progressed to the point that it doesn’t need to look like technology.
All of these things contribute to a compelling, cohesive vision of the future–one that’s dramatically different from what we usually see in these types of movies. You could say that Her is, in fact, a counterpoint to that prevailing vision of the future–the anti-Minority Report. Imagining its world wasn’t about heaping new technology on society as we know it today. It was looking at those places where technology could fade into the background, integrate more seamlessly. It was about envisioning a future, perhaps, that looked more like the past. “In a way,” says Barrett, “my job was to undesign the design.”
The Holy Grail: A Discreet User Interface
The greatest act of undesigning in Her, technologically speaking, comes with the interface used throughout the film. Theo doesn’t touch his computer–in fact, while he has a desktop display at home and at work, neither has a keyboard. Instead, he talks to it. “We decided we didn’t want to have physical contact,” Barrett says. “We wanted it to be natural. Hence the elimination of software keyboards as we know them.”
Again, voice control had benefits simply on the level of moviemaking. A conversation between Theo and Sam, his artificially intelligent OS, is obviously easier for the audience to follow than anything involving taps, gestures, swipes or screens. But the voice-based UI was also a perfect fit for a film trying to explore what a less intrusive, less demanding variety of technology might look like.
The main interface in the film is voice–Theo communicates with his AI OS through a discreet earplug.
Indeed, if you’re trying to imagine a future where we’ve managed to liberate ourselves from screens, systems based around talking are hard to avoid. As Barrett puts it, the computers we see in Her “don’t ask us to sit down and pay attention” like the ones we have today. He compares it to the fundamental way music beats out movies in so many situations. Music is something you can listen to anywhere. It’s complementary. It lets you operate in 360 degrees. Movies require you to be locked into one place, looking in one direction. As we see in the film, no matter what Theo’s up to in real life, all it takes to bring his OS into the fold is to pop in his ear plug.
Looking at it that way, you can see the audio-based interface in Her as a novel form of augmented reality computing. Instead of overlaying our vision with a feed, as we’ve typically seen it, Theo gets one piped into his ear. At the same time, the other ear is left free to take in the world around him.
Barrett sees this sort of arrangement as an elegant end point to the trajectory we’re already on. Think about what happens today when we’re bored at the dinner table. We check our phones. At the same time, we realize that’s a bit rude, and as Barrett sees it, that’s one of the great promises of the smartwatch: discretion.
“They’re a little more invisible. A little sneakier,” he says. Still, they’re screens that require eyeballs. Instead, Barrett says, “imagine if you had an ear plug in and you were getting your feed from everywhere.” Your attention would still be divided, but not nearly as flagrantly.
Theo chops it up with a holographic videogame character.
Of course, a truly capable voice-based UI comes with other benefits. Conversational interfaces make everything easier to use. When every different type of device runs an OS that can understand natural language, it means that every menu, every tool, every function is accessible simply by requesting it.
That, too, is a trend that’s very much alive right now. Consider how today’s mobile operating systems, like iOS and ChromeOS, hide the messy business of file systems out of sight. Theo, with his voice-based valet as intermediary, is burdened with even less under-the-hood stuff than we are today. As Barrett puts it: “We didn’t want him fiddling with things and fussing with things.” In other words, Theo lives in a future where everything, not just his iPad, “just works.”
AI: The Ultimate UX Challenge
The central piece of invisible design in Her, however, is that of Sam, the artificially intelligent operating system and Theo’s eventual romantic partner. Their relationship is so natural that it’s easy to forget she’s a piece of software. But Jonze and company didn’t just write a girlfriend character, label it AI, and call it a day. Indeed, much of the film’s dramatic tension ultimately hinges not just on the ways artificial intelligence can be like us but the ways it cannot.
Much of Sam’s unique flavor of AI was written into the script by Jonze himself. But her inclusion led to all sorts of conversations among the production team about the nature of such a technology. “Anytime you’re dealing with trying to interact with a human, you have to think of humans as operating systems. Very advanced operating systems. Your highest goal is to try to emulate them,” Barrett says. Superficially, that might mean considering things like voice pattern and sensitivity and changing them based on the setting or situation.
Even more questions swirled when they considered how an artificially intelligent OS should behave. Are they a good listener? Are they intuitive? Do they adjust to your taste and line of questioning? Do they allow time for you to think? As Barrett puts it, “you don’t want a machine that’s always telling you the answer. You want one that approaches you like, ‘let’s solve this together.’”
In essence, it means that AI has to be programmed to dumb itself down. “I think it’s very important for OSes in the future to have a good bedside manner,” Barrett says. “As politicians have learned, you can’t talk at someone all the time. You have to act like you’re listening.”
AI’s killer app, as we see in the film, is the ability to adjust to the emotional state of its user.
As we see in the film, though, the greatest asset of AI might be that it doesn’t have one fixed personality. Instead, its ability to figure out what a person needs at a given moment emerges as the killer app.
Theo, emotionally desolate in the midst of a hard divorce, is having a hard time meeting people, so Sam goads him into going on a blind date. When Theo’s friend Amy splits up with her husband, her own artificially intelligent OS acts as a sort of therapist. “She’s helping me work through some things,” Amy says of her virtual friend at one point.
In our own world, we may be a long way from computers that are able to sense when we’re blue and help raise our spirits in one way or another. But we’re already making progress down this path. In something as simple as a responsive web layout or iOS 7’s “Do Not Disturb” feature, we’re starting to see designs that are more perceptive about the real world context surrounding them–where or how or when they’re being used. Google Now and other types of predictive software are ushering in a new era of more personalized, more intelligent apps. And while Apple updating Siri with a few canned jokes about her Hollywood counterpart might not amount to a true sense of humor, it does serve as another example of how we’re making technology more human–a preoccupation that’s very much alive today.
Personal comment:
While I do agree that technology is becoming in some ways banal (or maybe, to use a better word, just common), and that the future might not be about flying cars, fancy all-over hologram interfaces or backup video cities populated with personal clones; that it might instead be "in service of" us, that it will "vanish" or "recede" into our daily atmospheres, environments, architecture, furniture and clothes, if not our bodies or cells, we have to keep in mind that this could (and probably will) make it even more intrusive. When technology no longer provokes debate (when it has become common), when it is both ambient and invisible, passively accepted if not simply undergone, then there will be lots of room for ... (a gap to be filled by many names) to fulfil their wildest dreams.
We also have to keep in mind that when technology enters an untapped domain (take the social domain over the past ten years as an example), it engineers what was previously, in some cases, a common good: you didn't have to pay, or trade away your data, to talk with somebody before. So to speak, engineering common goods (social relationships today, but why not love, air or the genome, which is already the case, tomorrow) turns them into products: commodification. Not always a good thing, if I may say so... It certainly looks like a goal of the economy: turn everything into something you can sell, and information technology is quite good at helping to do exactly that.
-
And by the way... even if technology might "recede", I'm not so keen on the rather skeuomorphic design of the "AI cell phone" in the movie so far (I haven't seen it yet ;)), and oh my, I definitely hope this won't be our future. It looks like a Jobs design!
Is our urban future bright or bleak? Peter Bradshaw provides a selection of celluloid cities you might consider moving to - or avoiding - if you are looking to relocate any time in the next 200 years or so.
METROPOLIS (1927) (dir. Fritz Lang)
Metropolis is the architectural template for all futurist cities in the movies. It has glitzy skyscrapers; it has streets crowded with folk who swarm through them like ants; most importantly, it has high-up freeways linking the buildings, criss-crossing the sky, on which automobiles and trains casually run — the sine qua non of the futurist city. Metropolis is a gigantic 21st-century European city state, a veritable utopia for that elite few fortunate enough to live above ground in its gleaming urban spaces. But it’s awful for the untermensch race of workers who toil underground. Photograph: Ronald Grant Archive.
ESCAPE FROM NEW YORK (1981) (dir. John Carpenter)
Made when New York still had its tasty crime-capital reputation, Carpenter’s dystopian sci-fi presents us with the New York of the future, ie 1988, and imagines that the authorities have given up policing it entirely and simply walled the city off and established a 24/7 patrol for the perimeter, re-purposing the city as a licensed hellhole of Darwinian violence into which serious prisoners will just be slung and then forgotten about, to survive or not as they can. Then in 1997 the President’s plane goes down in the city and he has to be rescued. New York is re-imagined as a lawless, dimly-lit nightmare. Not a great place to live. Photograph: Allstar/Cinetext/MGM.
LOGAN’S RUN (1976) (dir. Michael Anderson) This is set in an enclosed dome city in the post-apocalyptic world of 2274. It looks like an exciting, go-ahead place to live and it’s certainly a great city for twentysomethings. There are the much-loved overhead monorails and people wear the sleek, figure-hugging leotards, unitards, and miniskirts. The issue is that people here get killed on their 30th birthday. Some people escape the dome city to find themselves in deserted Washington DC, which is a wreck by comparison. Photograph: Ronald Grant Archive
BLADE RUNNER (1982) (dir. Ridley Scott)
This film presents us with Los Angeles 2019, a daunting megalopolis in which “replicants” may be hiding out — that is, super-sophisticated organically correct servant-robots indistinguishable from actual humans, who have defied the rules forbidding them to enter the city. Special cops called “blade runners” have to hunt them down. The city is colossal, headspinningly big, a virtual planet itself; it is cursed with terrible weather, with very rainy nights, but interestingly hints at the economic and cultural might of Asia with loads of billboard ads from the Far East. Again, the crime figures may make this city a bit of a no-no. Photograph: Allstar/Cinetext/Warner Bros.
ALPHAVILLE (1965) (dir. Jean-Luc Godard)
Alphaville is a city on a distant planet and a very grim place, subject to Orwellian repression and thought-control by its tyrannical ruler, an AI computer called Alpha 60. The city is seen largely at night, with drab buildings which have neither the techno-futurist furniture nor the obvious decay that you expect from sci-fi dystopia. This is because it was filmed in 1960s inner-city Paris: Alphaville is the French capital’s intergalactic banlieue twin-town. Again, this isn’t a great movie-futurist city to settle down in, although the property prices are probably reasonable. Photograph: British Film Institute.
THINGS TO COME (1936) (dir. William Cameron Menzies)
The state-of-the-urban-art British city of Everytown is here shown, from the year 1940 to 2036. A pleasant place is entirely ruined by a catastrophic war which lasts decades and plunges the city into the familiar future-city mode B: post-apocalyptic chaos. The city is basically rubble and hideous gas and poison warfare have made matters worse. Cynical and ambitious types aspire to control Everytown, but the place has been made the nexus for human vanity and ambition. Another future city with a bad reputation. Photograph: ITV/REX
AKIRA (1988) (dir. Katsuhiro Otomo)
Neo-Tokyo, 2019. This gigantic city is like an impossibly huge sentient robot life-form in itself. It was built to replace the “old” Tokyo which was immolated in a huge explosion. Now the new city is a teeming, prosperous, hi-tech place but more than a little anarchic and strange, always apparently on the verge of breaking down, and also incubating weird spiritual forces. Biker gangs do battle there: the Capsules versus the Clowns. An exciting place to settle down — seen in the right light.
SLEEPER (1973) (dir. Woody Allen)
Greenwich Village in 2173 is a startling place: part of the 22nd-century police state in which the people are kept in a placid condition with brainwashing. A nerdy bespectacled health-food-store owner, who has been cryogenically frozen in 1973 and awakened into this brave new world, must now battle against the forces of mind-control. This Huxleyan futureworld actually looks rather pleasant: the architecture, décor and public transport are all not bad, and this is a city with “Orgasmatrons”, which are guaranteed to give sexual satisfaction to those inside them. Photograph: Ronald Grant Archive
MINORITY REPORT (2002) (dir. Steven Spielberg)
Washington in the year 2054 is eerie and disorientating: a shadowy, noir-y city which appears often to be underlit, and yet it certainly enjoys the benefits of the digital revolution. Moving posters are the norm (actually, they’re commonplace in cities now) and images, text and data on screens can be manipulated with extraordinary ease. The populace is policed by a specialist unit called “PreCrime” which can predict and pre-empt the lawbreakers, but their prophetic dominance has created a spiritual malaise in the city’s atmosphere.
BABELDOM (2013) (dir. Paul Bush) This cult cine-essay by Paul Bush is all about a fictional mega-city called Babeldom. Where this city is supposed to be is a moot point. It is everywhere and nowhere. At first it is glimpsed through a misty fog: it is the city of Babel imagined by the elder Breughel in his Tower Of Babel. Then Bush gives us glimpses of a place made up of actual cities and then computer graphic displays take us through how a city develops its distinctive lineaments and growth patterns. Of all the future-cities on this list, Babeldom is probably the weirdest.
This article was amended on 30 January 2014 to correct the spelling of Paul Bush's name.
The NSA revelations highlight the role sophisticated algorithms play in sifting through masses of data. But more surprising is their widespread use in our everyday lives. So should we be more wary of their power?
The financial sector has long used algorithms to predict market fluctuations, but they can also help police identify crime hot spots or online shops target their customers. Photograph: Danil Melekhin/Getty Images
On 4 August 2005, the police department of Memphis, Tennessee, made so many arrests over a three-hour period that it ran out of vehicles to transport the detainees to jail. Three days later, 1,200 people had been arrested across the city – a new police department record. Operation Blue Crush was hailed a huge success.
Larry Godwin, the city's new police director, quickly rolled out the scheme and by 2011 crime across the city had fallen by 24%. When it was revealed Blue Crush faced budget cuts earlier this year, there was public outcry. "Crush" policing is now perceived to be so successful that it has reportedly been mimicked across the globe, including in countries such as Poland and Israel. In 2010, it was reported that two police forces in the UK were using it, but their identities were not revealed.
Crush stands for "Criminal Reduction Utilising Statistical History". Translated, it means predictive policing. Or, more accurately, police officers guided by algorithms. A team of criminologists and data scientists at the University of Memphis first developed the technique using IBM predictive analytics software. Put simply, they compiled crime statistics from across the city over time and overlaid them with other datasets – social housing maps, outside temperatures etc – then instructed algorithms to search for correlations in the data to identify crime "hot spots". The police then flooded those areas with highly targeted patrols.
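Stripped to its skeleton, that description amounts to counting incidents in space-time cells and flagging the outliers. The sketch below shows that skeleton in Python; the incident data and the threshold are invented, and IBM's actual predictive-analytics software is of course far more elaborate.

```python
from collections import Counter

# Each incident binned into a spatial grid cell (from lat/long) and an hour
incidents = [((3, 7), 22), ((3, 7), 23), ((3, 7), 22), ((1, 2), 14),
             ((3, 7), 21), ((5, 5), 9), ((3, 7), 22), ((1, 2), 15)]

counts = Counter(incidents)
mean_count = sum(counts.values()) / len(counts)

# Flag cell-hour combinations with well above average incident counts
hot_spots = [key for key, n in counts.items() if n >= 2 * mean_count]
print(hot_spots)  # [((3, 7), 22)]: patrol grid cell (3, 7) around 10pm
```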
"It's putting the right people in the right places on the right day at the right time," said Dr Richard Janikowski, an associate professor in the department of criminology and criminal justice at the University of Memphis, when the scheme launched. But not everyone is comfortable with the idea. Some critics have dubbed it "Minority Report" policing, in reference to the sci-fi film in which psychics are used to guide a "PreCrime" police unit.
The use of algorithms in policing is one example of their increasing influence on our lives. And, as their ubiquity spreads, so too does the debate around whether we should allow ourselves to become so reliant on them – and who, if anyone, is policing their use. Such concerns were sharpened further by the continuing revelations about how the US National Security Agency (NSA) has been using algorithms to help it interpret the colossal amounts of data it has collected from its covert dragnet of international telecommunications.
"For datasets the size of those the NSA collect, using algorithms is the only way to operate for certain tasks," says James Ball, the Guardian's data editor and part of the paper's NSA Files reporting team. "The problem is how the rules are set: it's impossible to do this perfectly. If you're, say, looking for terrorists, you're looking for something very rare. Set your rules too tight and you'll miss lots of, probably most, potential terror suspects. But set them more broadly and you'll drag lots of entirely blameless people into your dragnet, who will then face further intrusion or even formal investigation. We don't know exactly how the NSA or GCHQ use algorithms – or how extensively they're applied. But we do know they use them, including on the huge data trawls revealed in the Guardian."
From dating websites and City trading floors, through to online retailing and internet searches (Google's search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola), algorithms are increasingly determining our collective futures. "Bank approvals, store cards, job matches and more all run on similar principles," says Ball. "The algorithm is the god from the machine powering them all, for good or ill."
But what is an algorithm? Dr Panos Parpas, a lecturer in the quantitative analysis and decision science ("quads") section of the department of computing at Imperial College London, says that wherever we use computers, we rely on algorithms: "There are lots of types, but algorithms, explained simply, follow a series of instructions to solve a problem. It's a bit like how a recipe helps you to bake a cake. Instead of having generic flour or a generic oven temperature, the algorithm will try a range of variations to produce the best cake possible from the options and permutations available."
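Taking Parpas's cake analogy literally, a toy algorithm that tries the available permutations and keeps the best-scoring result might look like this (the scoring function is, obviously, invented):

```python
from itertools import product

def cake_score(flour_g, oven_c):
    # Invented quality model: peaks at 500 g of flour and 180 C
    return -abs(flour_g - 500) - 5 * abs(oven_c - 180)

# Try every permutation of the available options, keep the best
options = product([400, 450, 500, 550], [160, 180, 200])
best = max(options, key=lambda opt: cake_score(*opt))
print(best)  # (500, 180): the best cake available from these permutations
```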
Parpas stresses that algorithms are not a new phenomenon: "They've been used for decades – back to Alan Turing and the codebreakers, and beyond – but the current interest in them is due to the vast amounts of data now being generated and the need to process and understand it. They are now integrated into our lives. On the one hand, they are good because they free up our time and do mundane processes on our behalf. The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It's also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn't blame our tools."
The "mistakes" Parpas refers to are events such as the "flash crash" of 6 May 2010, when the Dow Jones industrial average fell 1,000 points in just a few minutes, only to see the market regain itself 20 minutes later. The reasons for the sudden plummet has never been fully explained, but most financial observers blame a "race to the bottom" by the competing quantitative trading (quants) algorithms widely used to perform high-frequency trading. Scott Patterson, a Wall Street Journal reporter and author of The Quants, likens the use of algorithms on trading floors to flying a plane on autopilot. The vast majority of trades these days are performed by algorithms, but when things go wrong, as happened during the flash crash, humans can intervene.
"By far the most complicated algorithms are to be found in science, where they are used to design new drugs or model the climate," says Parpas. "But they are done within a controlled environment with clean data. It is easy to see if there is a bug in the algorithm. The difficulties come when they are used in the social sciences and financial trading, where there is less understanding of what the model and output should be, and where they are operating in a more dynamic environment. Scientists will take years to validate their algorithm, whereas a trader has just days to do so in a volatile environment."
Most investment banks now have a team of computer science PhDs coding algorithms, says Parpas, who used to work on such a team. "With City trading, everyone is running very similar algorithms," he says. "They all follow each other, meaning you get results such as the flash crash. They use them to speed up the process and to break up big trades to disguise them from competitors when a big investment is being made. It's an ongoing, live process. They will run new algorithms for a few days to test them before letting them loose with real money. In currency trading, an algorithm lasts for about two weeks before it is stopped because it is surpassed by a new one. In equities, which is a less complicated market, they will run for a few months before a new one replaces them. It takes a day or two to write a currency algorithm. It's hard to find out information about them because, for understandable reasons, they don't like to advertise when they are successful. Goldman Sachs, though, has a strong reputation across the investment banks for having a brilliant team of algorithm scientists. PhD students in this field will usually be employed within a few months by an investment bank."
The idea that the world's financial markets – and, hence, the wellbeing of our pensions, shareholdings, savings etc – are now largely determined by algorithmic vagaries is unsettling enough for some. But, as the NSA revelations exposed, the bigger questions surrounding algorithms centre on governance and privacy. How are they being used to access and interpret "our" data? And by whom?
Dr Ian Brown, the associate director of Oxford University's Cyber Security Centre, says we all urgently need to consider the implications of allowing commercial interests and governments to use algorithms to analyse our habits: "Most of us assume that 'big data' is munificent. The laws in the US and UK say that much of this [the NSA revelations] is allowed, it's just that most people don't realise yet. But there is a big question about oversight. We now spend so much of our time online that we are creating huge data-mining opportunities."
Algorithms can run the risk of linking some racial groups to particular crimes. Photograph: Alamy
Brown says that algorithms are now programmed to look for "indirect, non-obvious" correlations in data. "For example, in the US, healthcare companies can now make assessments about a good or bad insurance risk based, in part, on the distance you commute to work," he says. "They will identify the low-risk people and market their policies at them. Over time, this creates or exacerbates societal divides. Professor Oscar Gandy, at the University of Pennsylvania, has done research into 'secondary racial discrimination', whereby credit and health insurance, which relies greatly on postcodes, can discriminate against racial groups because they happen to live very close to other racial groups that score badly."
Brown harbours similar concerns over the use of algorithms to aid policing, as seen in Memphis, where Crush's algorithms have reportedly linked some racial groups to particular crimes: "If you have a group that is disproportionately stopped by the police, such tactics could just magnify the perception they have of being targeted."
Viktor Mayer-Schönberger, professor of internet governance and regulation at the Oxford Internet Institute, also warns against humans seeing causation when an algorithm identifies a correlation in vast swaths of data. "This transformation presents an entirely new menace: penalties based on propensities," he writes in his new book, Big Data: A Revolution That Will Transform How We Live, Work and Think, which is co-authored by Kenneth Cukier, the Economist's data editor. "That is the possibility of using big-data predictions about people to judge and punish them even before they've acted. Doing this negates ideas of fairness, justice and free will. In addition to privacy and propensity, there is a third danger. We risk falling victim to a dictatorship of data, whereby we fetishise the information, the output of our analyses, and end up misusing it. Handled responsibly, big data is a useful tool of rational decision-making. Wielded unwisely, it can become an instrument of the powerful, who may turn it into a source of repression, either by simply frustrating customers and employees or, worse, by harming citizens."
Mayer-Schönberger presents two very different real-life scenarios to illustrate how algorithms are being used. First, he explains how the analytics team working for US retailer Target can now calculate whether a woman is pregnant and, if so, when she is due to give birth: "They noticed that these women bought lots of unscented lotion at around the third month of pregnancy, and that a few weeks later they tended to purchase supplements such as magnesium, calcium and zinc. The team ultimately uncovered around two dozen products that, used as proxies, enabled the company to calculate a 'pregnancy prediction' score for every customer who paid with a credit card or used a loyalty card or mailed coupons. The correlations even let the retailer estimate the due date within a narrow range, so it could send relevant coupons for each stage of the pregnancy."
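Mayer-Schönberger does not reproduce Target's model, but the proxy-scoring idea he describes is simple enough to sketch. In the hypothetical Python fragment below, a handful of invented product weights stand in for the roughly two dozen proxies the team identified; a customer's basket collapses into a single number, with higher values read as a stronger pregnancy signal.

```python
# Invented stand-ins for Target's ~two dozen proxy products;
# the real product list and weights were never published.
PROXY_WEIGHTS = {
    "unscented_lotion": 0.4,
    "magnesium_supplement": 0.3,
    "calcium_supplement": 0.3,
    "zinc_supplement": 0.3,
    "cotton_balls": 0.1,
}

def pregnancy_score(purchases: set) -> float:
    """Sum the weights of proxy products present in a customer's basket."""
    return sum(w for product, w in PROXY_WEIGHTS.items() if product in purchases)

basket = {"unscented_lotion", "magnesium_supplement", "zinc_supplement"}
print(round(pregnancy_score(basket), 2))  # 1.0, a relatively strong signal
```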
Harmless targeting, some might argue. But what happens when, as has reportedly already occurred, a father is mistakenly sent nappy discount vouchers intended for his teenage daughter, whose pregnancy the retailer has identified before he himself knows of it?
Mayer-Schönberger's second example of our reliance on algorithms throws up even more potential dilemmas and pitfalls: "Parole boards in more than half of all US states use predictions founded on data analysis as a factor in deciding whether to release somebody from prison or to keep him incarcerated."
Norah Jones: a specially developed algorithm predicted that her debut album contained a disproportionately high number of hit records. Photograph: Olycom SPA/Rex Features
Christopher Steiner, author of Automate This: How Algorithms Came to Rule Our World, has identified a wide range of instances where algorithms are being used to provide predictive insights – often within the creative industries. In his book, he tells the story of a website developer called Mike McCready, who has developed an algorithm to analyse and rate hit records. Using a technique called advanced spectral deconvolution, the algorithm breaks up each hit song into its component parts – melody, tempo, chord progression and so on – and then uses that to determine common characteristics across a range of No 1 records. McCready's algorithm correctly predicted – before they were even released – that the debut albums by both Norah Jones and Maroon 5 contained a disproportionately high number of hit records.
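The "advanced spectral deconvolution" step is proprietary, but the comparison McCready describes, checking how close a new song's components sit to those shared by past No 1s, can be sketched as a simple distance calculation. All feature names and numbers below are invented for illustration.

```python
import math

# Invented "centroid" of the component features common to past No 1 hits.
HIT_CENTROID = {
    "tempo_bpm": 120.0,
    "chord_changes_per_min": 16.0,
    "melodic_range_semitones": 12.0,
}

def hit_distance(song: dict) -> float:
    """Euclidean distance from the hit centroid; smaller means more hit-like."""
    return math.sqrt(sum((song[k] - HIT_CENTROID[k]) ** 2 for k in HIT_CENTROID))

candidate = {
    "tempo_bpm": 118.0,
    "chord_changes_per_min": 14.0,
    "melodic_range_semitones": 11.0,
}
print(hit_distance(candidate))  # 3.0, close to the hit cluster
```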
The next logical step – for profit-seeking record companies, perhaps – is to use algorithms to replace the human songwriter. But is that really an attractive proposition? "Algorithms are not yet writing pop music," says Steiner. He pauses, then laughs. "Not that we know of, anyway. If I were a record company executive or pop artist, I wouldn't tell anyone if I'd had a number one written by an algorithm."
Steiner argues that we should not automatically see algorithms as a malign influence on our lives, but we should debate their ubiquity and their wide range of uses. "We're already halfway towards a world where algorithms run nearly everything. As their power intensifies, wealth will concentrate towards them. They will ensure the 1%-99% divide gets larger. If you're not part of the class attached to algorithms, then you will struggle. The reason why there is no popular outrage about Wall Street being run by algorithms is because most people don't yet know or understand it."
But Steiner says we should welcome their use when they are used appropriately to aid and speed our lives. "Retail algorithms don't scare me," he says. "I find it useful when Amazon tells me what I might like. In the US, we know we will not have enough GP doctors in 15 years, as not enough are being trained. But algorithms can replace many of their tasks. Pharmacists are already seeing some of their prescribing tasks replaced by algorithms. Algorithms might actually start to create new, mundane jobs for humans. For example, algorithms will still need a human to collect blood and urine samples for them to analyse."
There can be a fine line, though, between "good" and "bad" algorithms, he adds: "I don't find the NSA revelations particularly scary. At the moment, they just hold the data. Even the best data scientists would struggle to know what to do with all that data. But it's the next step that we need to keep an eye on. They could really screw up someone's life with a false prediction about what they might be up to."