Thursday, July 25. 2013
Via MIT Technology Review via @chrstphggnrd
-----
New tricks will enable a life-logging app called Saga to figure out not only where you are, but what you’re doing.
By Tom Simonite
Having mobile devices closely monitoring our behavior could make them more useful, and open up new business opportunities.
Many of us already record the places we go and things we do by using our smartphones to diligently snap photos and videos, and to update social media accounts. A company called ARO is building technology that automatically compiles a more comprehensive record of your life.
ARO is behind an app called Saga that automatically records every place that a person goes. Now ARO’s engineers are testing ways to use the barometer, cameras, and microphones in a device, along with a phone’s location sensors, to figure out where someone is and what they are up to. That approach should debut in the Saga app in late summer or early fall.
The current version of Saga, available for Apple and Android phones, automatically logs the places a person visits; it can also collect data on daily activity from other services, including the exercise-tracking apps Fitbit and RunKeeper, and can pull in updates from social media accounts like Facebook, Instagram, and Twitter. Once the app has been running on a person's phone for a little while, it produces infographics about his or her life, for example charting the variation in the times he or she leaves for work in the morning.
Software running on ARO’s servers creates and maintains a model of each user’s typical movements. Those models power Saga’s life-summarizing features, and help the app to track a person all day without requiring sensors to be always on, which would burn too much battery life.
“If I know that you’re going to be sitting at work for nine hours, we can power down our collection policy to draw as little power as possible,” says Andy Hickl, CEO of ARO. Saga will wake up and check a person’s location if, for example, a phone’s accelerometer suggests he or she is on the move; and there may be confirmation from other clues, such as the mix of Wi-Fi networks in range of the phone. Hickl says that Saga typically consumes around 1 percent of a device’s battery, significantly less than many popular apps for e-mail, mapping, or social networking.
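The article doesn't detail ARO's actual collection policy, but the accelerometer-gated duty cycle Hickl describes can be sketched in a few lines; the threshold, window, and readings below are invented for illustration.

```python
import statistics

# Hypothetical sketch: request the (battery-expensive) location fix only when
# recent accelerometer readings suggest the user is actually moving.
MOTION_THRESHOLD = 0.5  # variance of accel magnitude, (m/s^2)^2; invented value

def should_sample_location(accel_magnitudes):
    """Return True when accelerometer readings suggest the user is on the move."""
    if len(accel_magnitudes) < 2:
        return False
    return statistics.variance(accel_magnitudes) > MOTION_THRESHOLD

# Sitting still: readings hover around gravity (~9.81 m/s^2), so stay asleep.
print(should_sample_location([9.80, 9.81, 9.82, 9.81]))  # False
# Walking: the magnitude swings widely, so a location fix is worth the battery.
print(should_sample_location([9.1, 11.4, 8.2, 12.0]))    # True
```

A real policy would also fold in the other clues the article mentions, such as changes in the set of Wi-Fi networks in range, before committing to a GPS fix.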
That consumption is low enough, says Hickl, that Saga can afford to ramp up the information it collects by accessing additional phone sensors. He says that occasionally sampling data from a phone’s barometer, cameras, and microphones will enable logging of details like when a person walked into a conference room for a meeting, or when they visit Starbucks, either alone or with company.
The Android version of Saga recently began using the barometer present in many smartphones to distinguish locations close to one another. “Pressure changes can be used to better distinguish similar places,” says Ian Clifton, who leads development of the Android version of Saga. “That might be first floor versus third floor in the same building, but also inside a vehicle versus outside it, even in the same physical space.”
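How pressure readings might separate floors can be sketched with the standard hypsometric equation; this is not ARO's implementation, and the 3 m floor height is an assumption.

```python
import math

def altitude_delta(p_ref_hpa, p_hpa, temp_k=288.15):
    """Approximate height difference (m) between two pressure readings,
    using standard-atmosphere constants."""
    R, g, M = 8.31446, 9.80665, 0.0289644  # gas constant, gravity, molar mass of air
    return (R * temp_k) / (g * M) * math.log(p_ref_hpa / p_hpa)

def floor_estimate(p_ground_hpa, p_hpa, floor_height_m=3.0):
    """Guess which floor a reading came from, relative to a ground reference."""
    return round(altitude_delta(p_ground_hpa, p_hpa) / floor_height_m)

# Pressure drops roughly 0.36 hPa per 3 m floor: 1013.25 hPa at ground level,
# about 1012.18 hPa three floors up.
print(floor_estimate(1013.25, 1012.18))  # 3
```

In practice a phone would calibrate the ground reference on the fly (weather shifts baseline pressure by far more than a floor's worth), which is presumably why Clifton describes pressure as a signal for telling *similar* places apart rather than an absolute altimeter.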
ARO is internally testing versions of Saga that sample light and sound from a person’s environment. Clifton says that using a phone’s microphone to collect short acoustic fingerprints of different places can be a valuable additional signal of location, and allow inferences about what a person is doing. “Sometimes we’re not sure if you’re in Starbucks or the bar next door,” says Clifton. “With acoustic fingerprints, even if the [location] sensor readings are similar, we can distinguish that.”
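A toy illustration of the idea, not ARO's method: reduce a clip to a small vector of frequency-band energies, then compare places by cosine similarity. Real systems use richer features (e.g. MFCCs), and the fingerprints below are made up.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented band-energy fingerprints for two adjacent venues.
cafe = [0.8, 0.5, 0.2, 0.1]   # steady chatter plus high-frequency machine hiss
bar = [0.3, 0.4, 0.7, 0.6]    # music with more low/mid-band energy
sample = [0.75, 0.55, 0.25, 0.1]  # a fresh clip from the user's phone

best = max([("cafe", cafe), ("bar", bar)],
           key=lambda kv: cosine_similarity(sample, kv[1]))
print(best[0])  # cafe -- the sample's spectrum is closer to the cafe's fingerprint
```

Even when GPS and Wi-Fi readings for the two venues are nearly identical, the spectral profiles differ, which is the disambiguation Clifton is describing.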
Occasionally sampling the light around a phone using its camera provides another kind of extra signal of a person’s activity. “If you go from ambient light to natural light, that would say to us your context has changed,” says Hickl, and it should be possible for Saga to learn the difference between, say, the different areas of an office.
The end result of sampling light, sound, and pressure data will be that Saga’s machine-learning models can fill in more details of a user’s life, says Hickl. “[When] I go home today and spend 12 hours there, to Saga that looks like a wall of nothing,” he says, noting that Saga could use sound or light cues to infer when during that time at home he was, say, watching TV, playing with his kids, or eating dinner.
Andrew Campbell, who leads research into smartphone sensing at Dartmouth College, says that adding more detailed, automatic life-logging features is crucial for Saga or any similar app to have a widespread impact. “Automatic sensing relieves the user of the burden of inputting lots of data,” he says. “Automatic and continuous sensing apps that minimize user interaction are likely to win out.”
Campbell says that automatic logging coupled with machine learning should allow apps to learn more about users’ health and welfare, too. He recently started analyzing data from a trial in which 60 students used a life-logging app that Campbell developed called Biorhythm. It uses various data collection tricks, including listening for nearby voices to determine when a student is in a conversation. “We can see many interesting patterns related to class performance, personality, stress, sociability, and health,” says Campbell. “This could translate into any workplace performance situation, such as a startup, hospital, large company, or the home.”
Campbell’s project may shape how he runs his courses, but it doesn’t have to make money. ARO, funded by Microsoft cofounder Paul Allen, ultimately needs to make life-logging pay. Hickl says that he has already begun to rent out some of ARO’s technology to other companies that want to be able to identify their users’ location or activities. Aggregate data from Saga users should also be valuable, he says.
“Now we’re getting a critical mass of users in some areas and we’re able to do some trend-spotting,” he says. “The U.S. national soccer team was in Seattle, and we were able to see where activity was heating up around the city.” Hickl says the data from that event could help city authorities or businesses plan for future soccer events in Seattle or elsewhere. He adds that Saga could provide similar insights into many other otherwise invisible patterns of daily life.
Personal comment:
Or: how to build up knowledge and minable data from low-end "sensors". In other words, how trivial inputs from low-cost sensors, combined with one another, can reveal deeper patterns in our everyday habits.
But who is ARO? Who founded it? Who are the "business angels" behind it, and what are they up to? What exactly does the technology do? Where are its legal headquarters located (and under which law)? Those are the first questions you should ask yourself before eventually giving your data to a private company... (I know, these are the usual suspects these days.) But the answers are pretty hard to find! The CEO is Mr Andy Hickl, based in Seattle, with 1096 followers on Twitter and a "Sorry, this page isn't available" on Facebook; you can start to dig from there and mine for him on Google...
-
We are in need of some sort of efficient Creative Commons equivalent for data, one that would actually be respected by companies. As well as open source equivalents for Facebook, Google, Dropbox, etc. (but also MS, Apple, etc.), located in countries whose laws support these efforts and where these "Creative Commons" profiles and data would be implemented. Then, at least, we would have some choice.
In Switzerland, we have a term to describe how the landscape has been progressively eaten up since the 1960s by small individual or holiday houses: "mitage du territoire" ("urban sprawl" seems to be the English equivalent, but "mitage" is related to "moths", to be precise, so rather a ""mothed" landscape" if I could say so, which says what it says), and we recently had the opportunity to vote against it, with success. I believe that the same thing is now happening to the personal and public spheres, to our lives: they are being sprawled, or "mothed", by private interests.
So, it is time to ask for the opportunity for everybody to "vote" against this too, and to have the choice between keeping ownership of your data, releasing it as public, or being paid for it (like a share in the company or its product)!
Monday, June 03. 2013
Note: an interesting post by Léopold Lambert about the body, presence, and activism, as a tribute to all the "occupy" movements of parks, streets, squares, etc., and in particular the recent ones in Turkey: bodily presence in public physical spaces, complemented by social media communication.
Via The Funambulist
-----
by Léopold Lambert
A Body of Gezi Park. 31 May 2013. From Yücel Tunca via Nar Photos.
For the last five days, the small park of Gezi near Taksim square in Istanbul has been occupied by dozens of thousands of people protesting, at first, against the urban project in development for this site that involves a shopping mall. Such a project that transforms a public space into an instrument of capitalism is part of a long series of others that has been changing Istanbul’s urban landscape and politics in the last decade. Very quickly however, the protest generalized itself and reached other cities of Turkey (Ankara, Izmir and more) in an attempt to globally constitute a strong resistance against the conservative and religious Turkish government and its Prime Minister, Recep Tayyip Erdoğan. The latter used to be Istanbul’s mayor and still has strong interests in its development. The police violently attacked the protesters, injuring severely some of them, but reinforcing the movement’s determination and legitimacy.
It is interesting to observe that this news spread much more rapidly at the international level than at the national one, since the Turkish press – just like the American one, including the New York Times, at the beginning of the Occupy movement – did not report on it, in a clear submission to the political status quo. In New York, hundreds of occupiers went back to Zuccotti Park to show their international solidarity with the Turkish movement of the same name.
For the last two years, many “professional politicians” in power have learned what it is to be afraid of the multitude. All answered with brutality (from Cairo to Santiago, via Benghazi, Damascus, Athens, Montreal, New York and many more), some stepped down, some kept their status, some others are still ordering massacres against their own people, but all of them seem to have feared the power of the crowds, gathered by their common will to resist against totalitarianism and capitalism. Something needs to be understood here: despite all the media attempts to “surf” on these political waves with a common reading of the use of social media as a new form of political act – to a certain extent, it is not completely wrong – the thing that truly shook the status quo is the gathering of bodies in public space. Of course, some gatherings of bodies are less political than others – those around sporting events, for example – and therefore a certain performativity needs to be involved in this process; however, there is something inherently political in this act of forming a group of bodies in the public realm. As I have often written, especially to explain the sense of this notion of occupying, our body can only be in one place at a time and, because of its materiality, no other body can be in that very same place at the same time. This involves a certain necessity, as our body is always spatialized, but at the very same time it also involves the radical choice of this space at the exclusion of every other in the world. At each moment of our life, we therefore have to re-accomplish the necessary yet radical choice of the localization of our body.
When thousands of bodies choose to be localized together in the streets or on a square, in such a way that they are not participating in the economy and might even have to confront violent physical encounters with the various forces of suppression, rather than choosing the comfort of the private realm, a strong political gesture is being created.
It would be too easy, however, to applaud every political gesture of this kind. The numerous recent demonstrations in France by Catholic extremists and other right-wing activist movements against the legislation authorizing gay marriage – now in force – prove it well. In that case, the demonstrating bodies were the bodies representing the norm: white, Christian, heterosexual. Those bodies do not really suffer from the way society is organized, as they are the very bodies around which society organizes itself. The streets of Istanbul, on the other hand, are filled with people whose bodies are getting more and more constrained by the dominant conservative religious ideology – and by dominant, I mean not so much a question of majority as one of relationships of power.
As always, architecture is not innocent here. These bodies are gathering in the public realm, but more precisely outside: in the streets, on the squares, in the parks. Architecture, through its internality, always sets a limit on the number of bodies it can host (the maximum occupancy, as the urban code defines it); the outdoor world does not, really. Choosing for our body to be outside is to potentially contribute to a crowd that theoretically won’t be limited in its number by physical borders, hence the fear of politicians seeing the movement spread. Architecture inherently participates in the striation of space; nevertheless, it can attempt to create a substantial porosity between the space it contains and the public space that surrounds it, in such a way that political bodies can appropriate it.
For an excellent reflective digest about Occupy Gezi and these last five days in Istanbul, read this article on Jadaliyya.
Wednesday, May 08. 2013
Via Slash Gear via Computed·Blg
-----
We’ve been hearing a lot about Google’s self-driving car lately, and we all probably want to know exactly how the search giant is able to build such a thing and have it drive itself without hitting anything or anyone. A new photo has surfaced that demonstrates what Google’s self-driving vehicles see while they’re out on the town, and it looks rather frightening.
The image was tweeted by Idealab founder Bill Gross, along with a claim that the self-driving car collects almost 1GB of data every second (yes, every second). This data includes imagery of the car’s surroundings, used to navigate roads effectively and safely. The image shows that the car sees its surroundings through an infrared-like camera sensor, and it can even pick out people walking on the sidewalk.
Of course, 1GB of data every second isn’t too surprising when you consider that the car has to get a 360-degree image of its surroundings at all times. The image we see above even distinguishes different objects by color and shape. For instance, pedestrians are in bright green, cars are shaped like boxes, and the road is in dark blue.
However, we’re not sure where this photo came from, so it could simply be a rendering of someone’s idea of what Google’s self-driving car sees. Either way, Google says that we could see self-driving cars make their way to public roads in the next five years or so, which actually isn’t that far off, and Tesla Motors CEO Elon Musk is even interested in developing self-driving cars as well. However, they certainly don’t come without their problems, and we’re guessing that the first batch of self-driving cars probably won’t be in 100% tip-top shape.
Friday, May 25. 2012
Via MIT Technology Review
-----
Siri may not be the smartest AI in the world, but it's the most socially adept.
By Will Knight
Me: "Should I go to bed, Siri?"
Siri: "I think you should sleep on it."
It's hard not to admire a smart-aleck reply like that. Siri—the "intelligent personal assistant" built into Apple's iPhone 4S—often displays this kind of attitude, especially when asked a question that pokes fun at its artificial intelligence. But the answer is not some snarky programmers' joke. It's a crucial part of why Siri works so well.
The popularity of Siri shows that a digital assistant needs more than just intelligence to succeed; it also needs tact, charm, and surprisingly, wit. Errors cause frustration and annoyance with any computer interface. The risk is amplified dramatically with one that poses as a conversational personal assistant, a fact that has undone some socially stunted virtual assistants in the past. So for Siri, being likable and occasionally kooky may be just as important as dazzling with feats of machine intelligence.
Siri has its origins in a research project begun in 2003 and funded by the U.S. military's Defense Advanced Research Projects Agency (DARPA). The effort was led by SRI International, which in 2007 spun off a company that released the original version of Siri as an iPhone app in February 2010 (the technology was named among Technology Review's 10 Emerging Technologies in 2009). This earlier Siri could do fewer things than the one that later came built into the iPhone 4S. It was able to access a handful of online services for making restaurant reservations, buying movie tickets, and booking taxis, but it was error-prone and never made a big hit with users. Apple bought the startup behind Siri for an undisclosed sum just two months after the app made its debut.
The Siri that appeared a year and a half later works astonishingly well. It listens to spoken commands (in English, French, German, and Japanese) and responds with either an appropriate action or an answer spoken in a calm, suitably robotic female voice. Ask Siri to wake you up at 8:00 a.m. and it will set the phone's alarm clock accordingly. Tell Siri to send a text message to a friend and it will dutifully take dictation before firing off your missive. Say "Where can I find a burrito, Siri?" and Siri will serve up a list of well-reviewed nearby Mexican restaurants, found by querying the phone's location sensor and performing a Web and map search. Siri also has countless facts and figures at its fingertips, thanks to the online "answer engine" Wolfram Alpha, which has access to many databases. Ask "What's the radius of Jupiter?" and Siri will casually inform you that it's 42,982 miles.
Siri's charismatic quality is entirely lacking in other natural-language interfaces. Several companies sell virtual customer service agents capable of chatting with customers online in typed text. One example is Eva, created by the Spanish company Indisys. Eva can chat comfortably unless the conversation begins to stray from the areas it's been trained to talk about. If it does, then Eva will rather rudely attempt to push you back toward those topics.
Siri also has some closer competitors in the form of apps available for iPhones and Android devices. Evi, made by True Knowledge; Dragon Go, from the voice-recognition company Nuance; and Iris, made by the Indian software company Dexetra, are all variations on the theme of a voice-controlled personal assistant, and they can often match Siri's ability to understand and carry out simple tasks, or to retrieve information. But they are much less socially adept. When I asked Iris if it thought I should go to sleep, "Perhaps you could use the rest" was its flat, humorless response.
Impressive though Siri is, however, the AI involved is not all that sophisticated. Boris Katz, a principal research scientist at MIT's Computer Science and Artificial Intelligence Lab, who's been building machines that parse human language for decades, suspects that Siri doesn't put much effort into analyzing what a person is asking. Instead of figuring out how the words in a sentence work together to convey meaning, he believes, Siri often just recognizes a few keywords and matches them with a limited number of preprogrammed responses. "They taught it a few things, and the system expects those things," he says. "They're very clever about what people normally ask."
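The keyword-plus-canned-response approach Katz suspects can be sketched as follows; the patterns and replies are invented for illustration and are not Apple's actual code.

```python
import random
import re

# Each key is a set of trigger keywords; each value lists interchangeable
# canned replies, so the assistant can vary its answers as Siri does.
RESPONSES = {
    ("meaning", "life"): ["42.", "I can't answer that now."],
    ("go", "bed"): ["I think you should sleep on it."],
}

def reply(utterance):
    """Match a few keywords and pick a preprogrammed response, with no
    attempt to analyze how the words in the sentence work together."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    for keywords, answers in RESPONSES.items():
        if all(k in words for k in keywords):
            return random.choice(answers)
    return "I don't understand."

print(reply("Should I go to bed, Siri?"))  # I think you should sleep on it.
```

Note that word order is ignored entirely: "Should bed I go to?" would get the same answer, which is exactly the shallowness Katz is pointing at.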
In contrast, conventional artificial-intelligence research has strived to parse more complex meaning in conversations. In 1985, Katz began building a system called START to answer questions by processing sentence structure. That system answers typed questions by analyzing how the words are arranged, to interpret the meaning of what's being asked. This enables START to answer questions phrased in complex ways or with some degree of ambiguity.
In 2006—a year before SRI spun off its startup—Katz and colleagues demonstrated a software assistant based on START that could be accessed by typing queries into a mobile phone. The concept is remarkably similar to Siri, but this part of the START project never progressed any further. It remained less important than Katz's pursuit of his real objective—to create a machine that can better match the human ability to use language.
START is just a tiny offshoot of the research into artificial intelligence that began some 50 years earlier as an attempt to understand the functioning of the human mind and to create something analogous in machines. That effort has produced many truly remarkable technologies, capable of performing computational tasks that are impossibly complicated for humans. But artificial-intelligence research has failed to re-create many aspects of human intellect, including language and communication. As Katz explains, a simple conversation between two people can tap into the full depth of a person's life experiences, and this remains impossible to mimic in a machine. So even as AI systems have become better at accessing, processing, and presenting information, human communication has continued to elude them.
Despite being less capable than START at dealing with the complexities of language, Siri shows that a machine can pull off just enough tricks to fool users into feeling as if they're having something approximately like a real conversation. To understand how difficult it is to get even simple text-based communication right, you need look no further than the infamous intelligent assistant introduced by Microsoft back in 1997. This annoying virtual paper clip, called Clippy, would pop up whenever a user created a document, offering assistance with a message such as the infuriating line "It looks like you're writing a letter. Would you like help?" Microsoft expected users to love Clippy. Bill Gates thought fans would design Clippy T-shirts, mugs, and websites. So the company was stunned, and confused, when users hated Clippy, creating T-shirts, mugs, and websites dedicated to disparaging it. The response was so bad that Microsoft killed Clippy off in 2007.
Before it did, Microsoft hired Stanford professor Clifford Nass, an expert on human-computer interaction, to investigate why the program had inspired so much unpleasantness. Nass, who is the author of The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships, has spent years studying similar phenomena, and his work suggests a fairly simple cause: people instinctively apply the rules of human social interactions to dealings with computers, cell phones, robots, in-car navigation systems, and similar machines. Nass realized that Clippy broke just about every norm of acceptable social behavior. It made the same mistakes again and again, and constantly pestered users who wanted to be left alone. "Clippy's problem was it said 'I'll do everything' and then proceeded to disappoint," says Nass. Just as a person who repeats the same answer again and again makes us feel insulted, Nass says, so does a computer interface—even if we know full well we're dealing with a machine.
Clippy showed that attempting more humanlike communication can backfire spectacularly if the subtleties of social behavior aren't understood and respected. Nass says Apple did everything possible to make Siri likable. Siri doesn't impose itself on the user at all. The application runs in the background on the iPhone, leaping to attention only when the user holds down the "home" button or puts the phone to his or her ear and starts speaking. It also avoids making the same mistake twice, trying different answers when the user repeats a question. Even the tone of Siri's voice was carefully chosen to be inoffensive, Nass believes.
Apple also limited the tasks Siri can perform and the answers it can give, most probably to avoid disappointment. If you ask Siri to post something to Twitter, for example, it'll sheepishly admit that it doesn't know how. But since the alternative could be accidentally broadcasting garbled tweets, this strategy is understandable.
The accuracy of Siri's voice recognition also helps avoid disappointment. The system does sometimes mishear words, often with amusing results. "I'm sorry, Will, I don't understand 'I need pajamas'" was a curious response to a question that had nothing to do with pajamas. But mostly the voice system works remarkably well. It has no problem with my English accent or with many complex words and phrases, and this overall accuracy makes the odd mistake that much more acceptable.
A key challenge for Apple was that soon after meeting Siri, a person may experience a powerful urge to trip up this virtual know-it-all: to ask it the meaning of life, whether it believes in God, or whether it knows R2D2. Apple chose to handle this phenomenon in an inventive way: by making sure Siri gets the joke and plays along. Thus it has a clever answer for just about any curveball thrown at it and even varies its responses, a trick that makes it seem eerily human at times.
This banter also helps lessen the blow when Siri misunderstands something or is stumped by a surprisingly simple question. Once, when I asked who won the Super Bowl, it proudly converted one Korean won into dollars for me. I knew this was just an algorithmic error in a distant bank of computer servers, but I also felt the urge to interpret it as Siri being zany.
Nass says the way Siri handles humor is inspired. Research has revealed, he notes, that humor makes people seem smarter and more likable. "Intermittent, innocent humor has been shown, for both people and computers, to be effective," Nass says. "It's very positive, even for the most boring, staid computer interface."
But Katz, as someone who has been striving for decades to give machines the ability to use language, hopes eventually to see something much more sophisticated than Siri emerge: a machine capable of holding real conversations with people. Such machines could provide fundamental insights into the nature of human intelligence, he says, and they might provide a more natural way to teach machines how to be smarter.
That might continue to be the dream of AI researchers. For the rest of us, though, the arrival of a virtual assistant that is actually useful is just as fundamental a breakthrough. In Katz's office at MIT, I showed him some of the amusing answers Siri comes up with when provoked. He chuckled and remarked at the cleverness of the engineers who designed Siri, but he also spoke as an AI researcher using meanings and words that Siri would undoubtedly struggle with. "There's nothing wrong with having gimmicks," he said, "but it would be nice if it could actually analyze deeply what you said. The conversations with the user will be that much richer."
Katz is right that a more revolutionary intelligent personal assistant—one that's capable of performing many more complicated tasks—will need more advanced AI. But this also underplays an important innovation behind Siri. After testing the app a while longer, Katz confessed that he admires entrepreneurs who know how to turn advances in computer science into something that ordinary people will use every day. "I wish I knew how people do that," he admits.
For the answer, perhaps he just needs to keep talking to Siri.
Will Knight is Technology Review's online editor.
Copyright Technology Review 2012.
Friday, May 11. 2012
Via Creative Applications
-----
Feel Me is a project by Marco Triverio that explores the gap between synchronous and asynchronous communication on our mobile devices, in an attempt to “connect differently” and enrich digital communications. Whereas we draw lines between phone conversations and SMS messages, Feel Me looks for the space in between, one that would allow you to be intimate in real time, non-verbally, using touch.
Based on the finding that communication with a special person is not about content going back and forth but rather about perceiving the presence of the other person on the other side, Feel Me opens a real-time interactive channel.
Feel Me first appears to be a text messaging application. When two people are both looking at the conversation they are having, touches on the screen of one side are shown on the other side as small dots. Touching the same spot triggers a small reaction, such as a vibration or a sound, acknowledging that both parties are there at the same time. Feel Me creates a playful link with the person on the other side, opening a channel for a non-verbal and interactive connection.
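The core interaction can be sketched hypothetically: each side streams touch coordinates, and fingers landing on (nearly) the same spot trigger the haptic acknowledgment. The 30-pixel tolerance below is an invented value, not from the project.

```python
import math

TOUCH_RADIUS = 30  # pixels of tolerance for "the same spot"; assumed value

def touches_match(touch_a, touch_b):
    """True when both users' fingers are on (nearly) the same screen spot."""
    return math.dist(touch_a, touch_b) <= TOUCH_RADIUS

# Close enough: trigger the vibration/sound acknowledgment.
print(touches_match((100, 200), (110, 215)))  # True
# Far apart: just render the other person's touch as a small dot.
print(touches_match((100, 200), (300, 40)))   # False
```

The interesting design choice this models is that the channel carries no content at all, only co-presence: the system compares positions and answers yes or no.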
“Feel Me” was awarded honors at CIID. Marco is currently working as an interaction designer at IDEO.
See also concept development videos below.
Project Page
Monday, March 28. 2011
Via Vague Terrain
-----
by Kevin Hamilton
Richard Sumner: Often when we meet people for the first time, some physical characteristic strikes us. Now what is the first thing you notice in a person?
Bunny Watson: Whether the person is male or female.
I followed Watson's debut on Jeopardy about as much as the next guy - the story was unavoidable there for a while. I read Richard Powers' essay on the pre-paywall version of the New York Times, and watched the flashy documentary about Flash designer Josh Davis, responsible for the avatar seen on screen.
I assumed like others that the AI software was named for Thomas Watson, IBM's founder, or perhaps even for the sidekicks to Alexander Graham Bell or Sherlock Holmes. (Though each of the latter options seemed a mismatch.)
Having finally watched the 1957 film Desk Set, starring Hepburn and Tracy, I think I have found Watson's true origins – in Hepburn's character Bunny Watson.
In the film (adapted from a play), Watson has just returned from a demonstration of the new IBM Electronic Brain (announced by Thomas J. Watson?), to find that her office at a large national television network has been occupied by an IBM "methods engineer" named Richard Sumner (played by Spencer Tracy).
Sumner, who in addition to being a management science expert is an MIT-trained computer engineer, is engaged in a month-long project of studying Watson's office and staff – the Reference Section of the company. Watson and the three women she supervises are the human Google for the company – their phones constantly ring with obscure questions - some of which are so familiar to the women that they can answer without effort, others of which require access to files and books.
Sumner's job, known to us and only suspected and feared by the other main characters, is to design a computer installation for the office. As the company wants some big publicity for this event, Sumner is to keep his mission a secret, leading to greater suspicion on the part of Watson and her team of an impending disaster – would a computer replace their labor?
The film's narrative is anchored by two significant tests. At the beginning, Watson is tested by Sumner, and determined to be a superb computing agent. She is able to count, tabulate, store and recall with uncanny precision, and using counter-rational or supra-rational algorithms. Later, during the story's second big test, the finally installed computer fields some initial queries in its position as reference librarian, and fails.
EMERAC fails because of poor context awareness, something that the mere typist assigned to inputting data doesn't know to compensate for. In the end, EMERAC is only successful - and therefore of value to humanity - when operated by Watson herself, who is able to enter the right information to make up for the computer's poor contextual knowledge.
So the conclusion takes us to a happy marriage of computer and operator, in which both are necessary to keeping things running smoothly and efficiently, in the context of a growing world of "big data." (The final problem, and the one we see EMERAC answer correctly, is the question "What is the weight of the Earth?")
EMERAC is thus more like Wolfram Alpha than the contemporary Watson. The new Watson, named for an operator rather than for a computer, is presented to television viewers as an operator of the Jeopardy interface. (The game is, after all, a button-pushing contest.)
In the new Watson, a man - at least in popular understanding - has replaced a woman at the switch. But perhaps a new configuration of labor has emerged anyway. Consider the change from the former, in which Sumner engineers and maintains the machine in real time, while Bunny operates it, to the newer version, in which multiple sites across multiple temporalities are responsible for the resulting computing event.
Alex Trebek is in the role of the telephone from Desk Set, merely passing along the queries originating from elsewhere. The Watson AI, dressed in Davis' cartoony dataviz rather than Charles LeMaire's fashions, fields the questions and answers them as a sort of merged operator and machine. Behind the scenes and long before the event, a small army of researchers programmed the AI and fed it data. In Desk Set, this latter job is also visible, through the work of Bunny's staff, who help deliver all the content for the machine to digest.
So with the Jeopardy Watson stunt, we see primarily two changes – a person where a phone used to be, and a machine where there used to be a machine-plus-operator. The sum total of laborers has remained unchanged, though we are less one woman, and plus one man. This cybernetic brain needs no operator, but it does need a user – and it certainly needs an audience.
(1) The whole story takes place at Rockefeller Center and bears many stylistic resemblances to the current NBC sitcom 30 Rock – including a page named Kenneth.
This post was originally published on Critical Commons.
Friday, February 05. 2010
Virtual memorials are nothing new — people have been paying their respects to departed loved ones on Facebook and Myspace for years. But a Facebook page set up for Henio Zytomirski, a 6-year-old Polish boy who was killed during the Holocaust, is truly revolutionizing the way we recount history and remember the dead. His profile is, in essence, a virtual museum.
Last summer, a group of people in Lublin, Poland, and Israel — including Henio’s cousin Neta Zytomirski Avidar — created a Facebook profile for the boy, who was sent to the Majdanek death camp in 1942. According to the AP, the idea grew out of a group called Grodzka Gate-NN Teater, which uses the arts to remember victims of the Holocaust. Henio was chosen because there were so many photos and letters available to draw from, which makes his profile a truly rich reading experience.
The profile functions as kind of a piecemeal storybook, with Polish status updates in Henio’s voice as well as photos and other updates in the third person that tell his tale. Henio’s own voice is simple and touching, as you can see in the selection below. (Rough Translation: “I am seven years old. I have a mom and dad. I have a favorite place. Not everyone has a mom and dad, but everyone has their favorite place. Today I decided that I will never leave Lublin. I will stay here forever. In my favorite place. With Mom and Dad. In Lublin.”)
According to the AP, not everyone is happy with the project — the news agency cites Adam Kopciowski, a historian at Lublin’s Marie Curie-Sklodowska University who specializes in Jewish studies, who thinks that writing in the dead boy’s voice is ethically unsound and amounts to “abuse toward a child that has been dead for the past 70 years.” Others have also raised the fact that the page — much like Doppelganger Week — violates Facebook’s TOS.
Still, Henio’s cousin makes very clear in a note on the profile that the young boy’s voice is meant to be purely speculative, and that he is to function as a symbol:
“We try to reconstruct his life in the ghetto from survivors’ testimonies, from documents, from knowing the history of Lublin during the Nazi occupation. From all of these we try to guess what might have been his testimony.
Henio is also a representing figure, a symbolic figure, an icon. His figure represents the destruction of the ancient Jewish community of Lublin.
His figure brings to Facebook the story of the Jewish community under the Nazi occupation regime and of its ruin.”
And judging by his 3,000+ fans, scores of thankful wall posts and avalanche of virtual gifts, people have become enamored of the long-lost boy.
Aside from being a touching memorial to a tragically departed boy, Henio’s profile is also a fascinating use of social media as an educational tool. Some of us have probably visited the United States Holocaust Memorial Museum in Washington, D.C. Upon entering, you receive a passport depicting someone who experienced the Holocaust, and throughout your tour through the museum, you learn his or her fate. Henio’s page brings this experience to another level, allowing you to interact with the boy, and to learn about his life in a way that integrates fully into your own social media experience.
This profile only goes to show how sites like Facebook are no longer silly time wasters or places to troll for your next collegiate hookup; they provide us with news, entertainment, advertisements and, now — as more and more people are seeing it as both a news portal and source — education. I recently became a friend of Henio’s. Will you?
-----
Via Mashable
Friday, January 22. 2010
Last evening I participated remotely, from my home in France, in a pre-event in Amsterdam of ElectroSmog International Festival for Sustainable Immobility.
I didn't use the fancy gadget in the photo above. My set-up yesterday was a bit, but not a lot, better-organized than the remote recording session (below) I did for a BBC radio programme last summer.
I said my bit to deBalie via skype, and followed the rest of proceedings, which were chaired by Eric Kluitenberg, on deBalie's livestreaming feed.
The deBalie session was not, I know, a major event in the greater context of events concerning sustainability, media, and design. But I'm proud, nonetheless: I have not yet set foot in an aeroplane in 2010, and this event was a meaningful first step. It followed a new year's resolution to radically reduce my work-related travel.
In preparing for yesterday's modest exercise, I was amazed to discover that I have been writing about the substitution of telepresence for mobility for seventeen years. Writing, not doing, I know: By no means all my texts and talks are here and here and here and here and here and here and here and here and here and here and here and here and here and here and here and here.
Although deBalie's streaming video feed was clear (thanks to their industrial-quality cameras; three-times normal bandwidth; something called an h264 video codec; and Gerbrand), and Eric was a clear and well-organized compere, the experience was as unrelaxing, experientially, as always.
I spent half a day fiddling with lights and backdrops at my end. I had to miss lunch in order to test skype. And I had to work hard, during the event itself, to keep track of what was happening in Amsterdam. A broken connection, internet-side, just as the final Q+A started, was an abrupt but unsurprising conclusion.
Content-wise, the session was a blast from the past - in good ways and bad.
A guy from IBM demo'd a hideous virtual "creative office" populated by avatars. The avatar representing the IBM-er in Belgium failed to speak or move for five minutes; its human owner had apparently left his desk to look for a beer. This was fair enough - a national beer strike in Belgium has only recently ended - but the jerky, implausible look-and-feel of IBM's virtual office was less enticing than the pre-Sims demo given by Will Wright at Doors of Perception back in 1998.
(It wasn't much better, either, than the time I did a video conference with Korea in which twelve corporate persons - not from IBM - sat in a row facing the camera. I was able to scan the camera along the line, jerkily, from my end. But because my fellow videoconferencers were dressed in identical blue suits, white shirts, and dark ties, and because most of them seemed to be called Mr Kim, I soon gave up).
(But last night's IBM demo was superior to the videoconference between a summer school in Lisbon, and the White House, that I experienced last summer. Then, the link was enabled by Cisco Systems' ultra high-end platform. We were all excited because our interviewee was said to have an office just down the hall from the Oval Office. We all assumed that communicating with the centre of world power on the world's fanciest videoconferencing platform would be fab. But the link, once opened, yielded sound and pictures worse than the ones sent back by the first lunar lander. After ten minutes of torture, someone in Lisbon put their hand up and said: "can't we use skype?" - so we did).
But there were delights, last evening, too. Costas Bissas from DistanceLab told us, from a location somewhere in the wilds of Scotland, about a cow called Grace who has been fitted with a webcam.
It took me back to the time Bill Gaver and Tony Dunne attached web-enabled microphones to chickens in Peccioli.
I told Costas I would pay good money to see Grace charging a bunch of tourists, but he said that is not their business model.
As last night's discussion continued, I had an epiphany: it is not my job to keep track of all these tele-tools and platforms - still less, to set them up and make them work when I need them.
I thought back to the early years of the telephone: for decades after the telephone was first publicly deployed, one would pick up the receiver - and a room full of operators would make the connection for you.
This is what we need now. We need the equivalent of a roadie for telepresence events.
Rock stars don't have to fiddle about setting up amps and lighting and the stage before they perform - so why should I, or any other right thinking citizen who has a life to lead?
e-Roadies are the solution I have been searching for for seventeen years.
I haven't worked out where to find them, nor how to train them - still less, a business model to pay for them. But I am surely on the right track, because e-Roadies are a human solution.
Posted by John Thackara at January 22, 2010 10:35 AM
-----
Via Doors of Perception
Friday, December 04. 2009
Two persons in the same place, as represented on the Foursquare interface. A depiction of co-presence mediated by technology.
Co-presence, as described by Zhao, can refer to the sense of being together with other people in a remote or a shared virtual environment. To refer back to Goffman, it’s a form of human co-location in which individuals become “accessible, available, and subject to one another“.
The advent of location-based services has led to a new class of situation in which people can be both physically copresent (what Zhao calls “Corporeal Copresence”) and located in electronic proximity (what Zhao calls “Corporeal Telecopresence”). This is what happens with the Foursquare interface. The categories are thus not mutually exclusive.
Why do I blog this? Curiosity about what this kind of constraint can lead to, in terms of location-based services in a physically co-present context.
-----
Via Pasta & Vinegar
Personal comment:
Not that it is a new concept, just to underline it one more time!