Friday, May 25. 2012
Via MIT Technology Review
-----
Siri may not be the smartest AI in the world, but it's the most socially adept.
By Will Knight
Me: "Should I go to bed, Siri?"
Siri: "I think you should sleep on it."
It's hard not to admire a smart-aleck reply like that. Siri—the "intelligent personal assistant" built into Apple's iPhone 4S—often displays this kind of attitude, especially when asked a question that pokes fun at its artificial intelligence. But the answer is not some snarky programmers' joke. It's a crucial part of why Siri works so well.
The popularity of Siri shows that a digital assistant needs more than just intelligence to succeed; it also needs tact, charm, and, surprisingly, wit. Errors cause frustration and annoyance with any computer interface. The risk is amplified dramatically with one that poses as a conversational personal assistant, a fact that has undone some socially stunted virtual assistants in the past. So for Siri, being likable and occasionally kooky may be just as important as dazzling with feats of machine intelligence.
Siri has its origins in a research project begun in 2003 and funded by the U.S. military's Defense Advanced Research Projects Agency (DARPA). The effort was led by SRI International, which in 2007 spun off a company that released the original version of Siri as an iPhone app in February 2010 (the technology was named among Technology Review's 10 Emerging Technologies in 2009). This earlier Siri could do fewer things than the one that later came built into the iPhone 4S. It was able to access a handful of online services for making restaurant reservations, buying movie tickets, and booking taxis, but it was error-prone and never became a big hit with users. Apple bought the startup behind Siri for an undisclosed sum just two months after the app made its debut.
The Siri that appeared a year and a half later works astonishingly well. It listens to spoken commands (in English, French, German, and Japanese) and responds with either an appropriate action or an answer spoken in a calm, suitably robotic female voice. Ask Siri to wake you up at 8:00 a.m. and it will set the phone's alarm clock accordingly. Tell Siri to send a text message to a friend and it will dutifully take dictation before firing off your missive. Say "Where can I find a burrito, Siri?" and Siri will serve up a list of well-reviewed nearby Mexican restaurants, found by querying the phone's location sensor and performing a Web and map search. Siri also has countless facts and figures at its fingertips, thanks to the online "answer engine" Wolfram Alpha, which has access to many databases. Ask "What's the radius of Jupiter?" and Siri will casually inform you that it's 42,982 miles.
Siri's charismatic quality is entirely lacking in other natural-language interfaces. Several companies sell virtual customer service agents capable of chatting with customers online in typed text. One example is Eva, created by the Spanish company Indisys. Eva can chat comfortably unless the conversation begins to stray from the areas it's been trained to talk about. If it does, then Eva will rather rudely attempt to push you back toward those topics.
Siri also has some closer competitors in the form of apps available for iPhones and Android devices. Evi, made by True Knowledge; Dragon Go, from the voice-recognition company Nuance; and Iris, made by the Indian software company Dexetra, are all variations on the theme of a voice-controlled personal assistant, and they can often match Siri's ability to understand and carry out simple tasks, or to retrieve information. But they are much less socially adept. When I asked Iris if it thought I should go to sleep, "Perhaps you could use the rest" was its flat, humorless response.
Impressive though Siri is, however, the AI involved is not all that sophisticated. Boris Katz, a principal research scientist at MIT's Computer Science and Artificial Intelligence Lab, who's been building machines that parse human language for decades, suspects that Siri doesn't put much effort into analyzing what a person is asking. Instead of figuring out how the words in a sentence work together to convey meaning, he believes, Siri often just recognizes a few keywords and matches them with a limited number of preprogrammed responses. "They taught it a few things, and the system expects those things," he says. "They're very clever about what people normally ask."
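Katz's hypothesis can be illustrated with a toy sketch. Everything below is invented for illustration, not Apple's actual data or code: instead of parsing sentence structure, the system spots a few trigger keywords and returns one of several canned, slightly varied replies.

```python
import random

# Illustrative keyword-spotting responder in the spirit Katz describes.
# Each intent is a set of trigger words mapped to preprogrammed replies;
# varying the reply makes the system feel less mechanical.
INTENTS = [
    ({"wake", "alarm"}, ["Setting your alarm."]),
    ({"bed", "sleep"}, ["I think you should sleep on it.",
                        "Perhaps you could use the rest."]),
    ({"burrito", "mexican"}, ["Here are some Mexican restaurants near you."]),
]

def respond(utterance: str) -> str:
    words = set(utterance.lower().replace("?", "").split())
    for keywords, replies in INTENTS:
        if keywords & words:  # any trigger word present is enough
            return random.choice(replies)
    return "I don't understand."

print(respond("Should I go to bed?"))
```

No grammar is analyzed at all, yet for the questions people "normally ask," the responses can feel surprisingly conversational, which is exactly Katz's point.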
In contrast, conventional artificial-intelligence research has strived to parse more complex meaning in conversations. In 1985, Katz began building a system called START to answer questions by processing sentence structure. That system answers typed questions by analyzing how the words are arranged, to interpret the meaning of what's being asked. This enables START to answer questions phrased in complex ways or with some degree of ambiguity.
In 2006—a year before SRI spun off its startup—Katz and colleagues demonstrated a software assistant based on START that could be accessed by typing queries into a mobile phone. The concept is remarkably similar to Siri, but this part of the START project never progressed any further. It remained less important than Katz's pursuit of his real objective—to create a machine that can better match the human ability to use language.
START is just a tiny offshoot of the research into artificial intelligence that began some 50 years earlier as an attempt to understand the functioning of the human mind and to create something analogous in machines. That effort has produced many truly remarkable technologies, capable of performing computational tasks that are impossibly complicated for humans. But artificial-intelligence research has failed to re-create many aspects of human intellect, including language and communication. As Katz explains, a simple conversation between two people can tap into the full depth of a person's life experiences, and this remains impossible to mimic in a machine. So even as AI systems have become better at accessing, processing, and presenting information, human communication has continued to elude them.
Despite being less capable than START at dealing with the complexities of language, Siri shows that a machine can pull off just enough tricks to fool users into feeling as if they're having something approximately like a real conversation. To understand how difficult it is to get even simple text-based communication right, you need look no further than the infamous intelligent assistant introduced by Microsoft back in 1997. This annoying virtual paper clip, called Clippy, would pop up whenever a user created a document, offering assistance with a message such as the infuriating line "It looks like you're writing a letter. Would you like help?" Microsoft expected users to love Clippy. Bill Gates thought fans would design Clippy T-shirts, mugs, and websites. So the company was stunned, and confused, when users hated Clippy, creating T-shirts, mugs, and websites dedicated to disparaging it. The response was so bad that Microsoft killed Clippy off in 2007.
Before it did, Microsoft hired Stanford professor Clifford Nass, an expert on human-computer interaction, to investigate why the program had inspired so much unpleasantness. Nass, who is the author of The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships, has spent years studying similar phenomena, and his work suggests a fairly simple cause: people instinctively apply the rules of human social interactions to dealings with computers, cell phones, robots, in-car navigation systems, and similar machines. Nass realized that Clippy broke just about every norm of acceptable social behavior. It made the same mistakes again and again, and constantly pestered users who wanted to be left alone. "Clippy's problem was it said 'I'll do everything' and then proceeded to disappoint," says Nass. Just as a person who repeats the same answer again and again makes us feel insulted, Nass says, so does a computer interface—even if we know full well we're dealing with a machine.
Clippy showed that attempting more humanlike communication can backfire spectacularly if the subtleties of social behavior aren't understood and respected. Nass says Apple did everything possible to make Siri likable. Siri doesn't impose itself on the user at all. The application runs in the background on the iPhone, leaping to attention only when the user holds down the "home" button or puts the phone to his or her ear and starts speaking. It also avoids making the same mistake twice, trying different answers when the user repeats a question. Even the tone of Siri's voice was carefully chosen to be inoffensive, Nass believes.
Apple also limited the tasks Siri can perform and the answers it can give, most probably to avoid disappointment. If you ask Siri to post something to Twitter, for example, it'll sheepishly admit that it doesn't know how. But since the alternative could be accidentally broadcasting garbled tweets, this strategy is understandable.
The accuracy of Siri's voice recognition also helps avoid disappointment. The system does sometimes mishear words, often with amusing results. "I'm sorry, Will, I don't understand 'I need pajamas'" was a curious response to a question that had nothing to do with pajamas. But mostly the voice system works remarkably well. It has no problem with my English accent or with many complex words and phrases, and this overall accuracy makes the odd mistake that much more acceptable.
A key challenge for Apple was that soon after meeting Siri, a person may experience a powerful urge to trip up this virtual know-it-all: to ask it the meaning of life, whether it believes in God, or whether it knows R2D2. Apple chose to handle this phenomenon in an inventive way: by making sure Siri gets the joke and plays along. Thus it has a clever answer for just about any curveball thrown at it and even varies its responses, a trick that makes it seem eerily human at times.
This banter also helps lessen the blow when Siri misunderstands something or is stumped by a surprisingly simple question. Once, when I asked who won the Super Bowl, it proudly converted one Korean won into dollars for me. I knew this was just an algorithmic error in a distant bank of computer servers, but I also felt the urge to interpret it as Siri being zany.
Nass says the way Siri handles humor is inspired. Research has revealed, he notes, that humor makes people seem smarter and more likable. "Intermittent, innocent humor has been shown, for both people and computers, to be effective," Nass says. "It's very positive, even for the most boring, staid computer interface."
But Katz, as someone who has been striving for decades to give machines the ability to use language, hopes eventually to see something much more sophisticated than Siri emerge: a machine capable of holding real conversations with people. Such machines could provide fundamental insights into the nature of human intelligence, he says, and they might provide a more natural way to teach machines how to be smarter.
That might continue to be the dream of AI researchers. For the rest of us, though, the arrival of a virtual assistant that is actually useful is just as fundamental a breakthrough. In Katz's office at MIT, I showed him some of the amusing answers Siri comes up with when provoked. He chuckled and remarked at the cleverness of the engineers who designed Siri, but he also spoke as an AI researcher using meanings and words that Siri would undoubtedly struggle with. "There's nothing wrong with having gimmicks," he said, "but it would be nice if it could actually analyze deeply what you said. The conversations with the user will be that much richer."
Katz is right that a more revolutionary intelligent personal assistant—one that's capable of performing many more complicated tasks—will need more advanced AI. But this also underplays an important innovation behind Siri. After testing the app a while longer, Katz confessed that he admires entrepreneurs who know how to turn advances in computer science into something that ordinary people will use every day. "I wish I knew how people do that," he admits.
For the answer, perhaps he just needs to keep talking to Siri.
Will Knight is Technology Review's online editor.
Copyright Technology Review 2012.
Thursday, April 19. 2012
-----
Archinect and Woodbury School of Architecture are proud to present:
Publish Or... bracket [GOES SOFT]
Thursday, April 19
6:00 p.m.
Sonic landscape by Health and Beauty.
WUHO Gallery
6518 Hollywood Boulevard
Los Angeles, CA 90028 (map)
Come say hello, mingle, and check out selected entries from Bracket [goes soft], including work by Woodbury School of Architecture faculty member Ewan Branda.
Limited-edition zine-style [goes soft] take-aways. First come, first served.
Bracket [goes soft] examines the use and implications of soft today—from the scale of material innovation to territorial networks. While the projects in Bracket 2 are diverse in deployment and in the issues they engage, they share several key characteristics—proposing systems, networks and technologies that are responsive, adaptable, scalable, non-linear, and multivalent. Certain projects reveal how soft systems rely on engagement with their larger environment, collecting and sensing environmental and atmospheric information and, through feedback, adapting the system to augment performance. Other projects examine how soft systems can function as interfaces with the environment—whether mitigating or harnessing it—operating at the scale of a wall, a building, or a landscape.
Moreover, a particular strand of projects presented in Bracket 2 are tactical and strategic in nature, enabling them to operate, often covertly, within existing organizational structures, subverting rules and limitations opportunistically to support new ecologies—whether natural, economic or political. In other work, intelligence lies in the organization and format of the system, which accommodates transformation by rejigging components of the system itself. Other speculations are rendered resilient to disturbances by adapting to extrinsic as well as intrinsic factors, enabling them to anticipate, recover and transform in unexpected situations. Instead of mitigation, contingency in these soft systems is typically opportunistic.
Lastly, select projects expose how the networking of smaller units or interventions, diffused across a larger territory, can generate, collect, or respond at a vast scale. Agile, these tentacular networks can diffuse or retract as resources or needs change.
The editorial board and jury for Bracket 2 includes Benjamin Bratton, Julia Czerniak, Jeffrey Inaba, Geoff Manaugh, Philippe Rahm, Charles Renfro, as well as co-editors Lola Sheppard and Neeraj Bhatia.
Bracket 2 is published by Actar and designed by Thumb.
Personal comment:
fabric | ch publishes its project Arctic Opening (pdf) in the second volume of Bracket. The subject of this edition of Bracket is "soft": how the projects presented "share several key characteristics—proposing systems, networks and technologies that are responsive, adaptable, scalable, non-linear, and multivalent."
Monday, June 20. 2011
by Christian Babski
Via SlashGear via Computed·Blg
-----
Microsoft's Kinect for Windows SDK will be released in beta form this week, according to Microsoft Spain president María Garaña. The toolkit allows developers to use the motion-tracking hardware with PCs rather than the Xbox 360. Among the initially supported features, WinRumors reports, will be skeletal tracking for one or two people, along with use of the four-microphone array.
That array is coupled with acoustic noise and echo cancellation, and can pinpoint which person in the field of view is speaking. Microsoft will also link it with the existing Windows speech recognition API, opening up the possibility of two users individually controlling a PC with their voice, and the system automatically recognizing which commands come from which person.
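The speaker-attribution idea can be sketched schematically. This is illustrative Python, not the actual Kinect SDK (whose API differs): given the sound-source angle estimated by the microphone array and the angular positions of the tracked skeletons, the command is attributed to whichever tracked person is closest to where the sound came from.

```python
# Illustrative only -- not real Kinect SDK calls. Combine the array's
# estimated sound-source direction with skeletal tracking to decide
# which tracked person issued a voice command.
def attribute_command(source_angle_deg: float, skeletons: dict) -> str:
    """skeletons maps person_id -> angle (degrees) relative to the sensor."""
    return min(skeletons, key=lambda pid: abs(skeletons[pid] - source_angle_deg))

# Two tracked users at -20 and +35 degrees; the array localizes speech at +30.
tracked = {"alice": -20.0, "bob": 35.0}
print(attribute_command(30.0, tracked))  # bob
```

With the commands attributed per speaker, routing them through separate speech-recognition sessions would give each user individual control of the PC, as the article describes.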
Finally, there’s XYZ depth perception to allow the Kinect camera – and the connected PC – to track how far away a user is. Microsoft’s gaming platform uses all this for motion-controlled titles such as sports, bowling and other games; back at E3 2011 the company confirmed that Star Wars would be coming to Kinect, as well as Mass Effect 3 and other titles.
For the PC, while gaming is likely to be one strand of Kinect’s use, Microsoft has also talked about its potential for other applications. Remote control of an HTPC – without having to navigate either a complex remote or wireless keyboard – is one suggestion, along with control of presentations and other media. With the Kinect for Windows SDK, third-party developers will also be able to bake support into their apps.
The Microsoft announcement will be held online via the company's Channel 9, at 9:30 a.m. PST on Thursday, June 16.
Thursday, April 21. 2011
Via MIT Technology Review
-----
An experimental system would tighten the limits on information provided to websites.
By Erica Naone
Today, many websites ask users to take a devil's deal: share personal information in exchange for receiving useful personalized services. New research from Microsoft, which will be presented at the IEEE Symposium on Security and Privacy in May, suggests the development of a Web browser and associated protocols that could strengthen the user's hand in this exchange. Called RePriv, the system mines a user's behavior via a Web browser but controls how the resulting information is released to websites that want to offer personalized services, such as a shopping site that automatically knows users' interests.
"The browser knows more about the user's behavior than any individual site," says Ben Livshits, a researcher at Microsoft who was involved with the work. He and colleagues realized that the browser could therefore offer a better way to track user behavior, while it also protects the information that is collected, because users won't have to give away as much of their data to every site they visit.
The RePriv browser tracks a user's behavior to identify a list of his or her top interests, as well as the level of attention devoted to each. When the user visits a site that wants to offer personalization, a pop-up window will describe the type of information the site is asking for and give the user the option of allowing the exchange or not. Whatever the user decides, the site doesn't get specific information about what the user has been doing—instead, it sees the interest information RePriv has collected.
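The exchange described above can be sketched as follows. The class names and data shapes here are assumptions for illustration, not Microsoft's implementation: browsing history stays local, the browser distills it into coarse interest levels, and a site receives only the categories it asked about, only if the user consents.

```python
# Schematic sketch of the RePriv idea (illustrative, not Microsoft's code).
class RePrivBrowser:
    def __init__(self):
        self.history = []  # raw browsing data; never leaves the browser

    def visit(self, url: str, category: str):
        self.history.append((url, category))

    def interest_profile(self) -> dict:
        # Attention share per category, mined from local history.
        totals = {}
        for _, cat in self.history:
            totals[cat] = totals.get(cat, 0) + 1
        n = len(self.history)
        return {cat: count / n for cat, count in totals.items()}

    def release(self, requested_categories, user_consents: bool) -> dict:
        if not user_consents:
            return {}
        profile = self.interest_profile()
        # The site sees interest levels for what it asked about -- not URLs.
        return {c: profile.get(c, 0.0) for c in requested_categories}

b = RePrivBrowser()
b.visit("nytimes.com/technology/a", "technology")
b.visit("nytimes.com/technology/b", "technology")
b.visit("espn.com", "sports")
print(b.release(["technology", "cooking"], user_consents=True))
```

Note what the site never learns: which pages were visited, or that the user has any interests beyond the categories it requested.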
Livshits explains that a news site could use RePriv to personalize a user's view of the front page. The researchers built a demonstration based on the New York Times website. It reorders the home page to reflect the user's top interests, also taking into account data collected from social sites such as Digg that suggests which stories are most popular within different categories.
Livshits admits that RePriv still gives sites some data about users. But he maintains that the user remains aware and in control. He adds that with cookies and other existing tracking techniques, sites already collect far more user data than RePriv supplies.
The researchers also developed a way for third parties to extend RePriv's capabilities. They built a demonstration browser extension that tracks a user's interactions with Netflix to collect more detailed data about that person's movie preferences. The extension could be used by a site such as Fandango to personalize the movie information it presents—again, with user permission.
"There is a clear tension between privacy and personalized technologies, including recommendations and targeted ads," says Elie Bursztein, a researcher at the Stanford Security Laboratory, who is developing an extension for the Chrome Web browser that enables more private browsing. "Putting the user in control by moving personalization into the browser offers a new way forward," he says.
"In the medium term, RePriv could provide an attractive interface for service providers that will dissuade them from taking more abusive approaches to customization," says Ari Juels, chief scientist and director of RSA Laboratories, a corporate research center.
Juels says RePriv is generally well engineered and well thought out, but he worries that the tool goes against "the general migration of data and functionality to the cloud." Many services, such as Facebook, now store information in the cloud, and RePriv wouldn't be able to get at data there—an omission that could hobble the system, he points out.
Juels is also concerned that most people would be permissive about the information they allow RePriv to release, and he believes many sites would exploit this. And he points out that websites with a substantial competitive advantage in the huge consumer-preference databases they maintain would likely resist such technology. "RePriv levels the playing field," he says. "This may be good for privacy, but it will leave service providers hungry." Therefore, he thinks, big players will be reluctant to cooperate with a system like this.
Livshits argues that some companies could use these characteristics of RePriv to their advantage. He says the system could appeal to new services, which struggle to give users a personalized experience the first time they visit a site. And larger sites might welcome the opportunity to get user data from across a person's browsing experience, rather than only from when the user visits their site. Livshits believes they might be willing to use the system and protect user privacy in exchange.
Copyright Technology Review 2011.
Thursday, March 31. 2011
Via MIT Technology Review
-----
By Kate Greene
The Open Networking Foundation wants to let programmers take control of computer networks.
Off switch: This visualization shows network traffic when traffic loads are low and switches (the large dots) can be turned off to save power. Credit: OpenFlow Project
Most data networks could be faster, more energy efficient, and more secure. But network hardware—switches, routers, and other devices—is essentially locked down, meaning network operators can't change the way they function. Software called OpenFlow, developed at Stanford University and the University of California, Berkeley, has opened some network hardware, allowing researchers to reprogram devices to perform new tricks.
Now 23 companies, including Google, Facebook, Cisco, and Verizon, have formed the Open Networking Foundation (ONF) with the intention of making open and programmable networks mainstream. The foundation aims to put OpenFlow and similar software into more hardware, establish standards that let different devices communicate, and let programmers write software for networks as they would for computers or smart phones.
"I think this is a true opportunity to take the Internet to a new level where applications are connected directly to the network," says Paul McNab, vice president of data center switching and services at Cisco.
Computer networks may not be as tangible as phones or computers, but they're crucial: cable television, Wi-Fi, mobile phones, Internet hosting, Web search, corporate e-mail, and banking all rely on the smooth operation of such networks. Applications that run on the type of programmable networks that the ONF envisions could stream HD video more smoothly, provide more reliable cellular service, reduce energy consumption in data centers, or even remotely clean computers of viruses.
The problem with today's networks, explains Nick McKeown, a professor of electrical engineering and computer sciences at Stanford who helped develop OpenFlow, is that data flows through them inefficiently. As data travels through a standard network, its path is determined by the switches it passes through, says McKeown. "It's a little bit like a navigation system [in a car] trying to figure out what the map looks like at the same time it's trying to find you directions," McKeown explains.
With a programmable network, he says, software can collect information about the network as a whole, so data travels more efficiently. A more complete view of a network, explains Scott Shenker, professor of electrical engineering and computer science at the University of California, Berkeley, is a product of two things: the first is OpenFlow firmware (software embedded in hardware) that taps into the switches and routers to read the state of the hardware and to direct traffic; the second is a network operating system that creates a network map and chooses the most efficient route.
OpenFlow and a network operating system "provide a consistent view of the network and do that at once for many applications," says McKeown. "It becomes trivial to find new paths."
Some OpenFlow research projects require just a couple hundred lines of code to completely change the data traffic patterns in a network—with dramatic results. In one project, McKeown says, researchers reduced a data center's energy consumption by 60 percent simply by rerouting network traffic within the center and turning off switches when they weren't in use.
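The controller-side computation behind results like that can be sketched in miniature. This is a toy topology and plain Python, not real OpenFlow code: with a complete map of the network, the controller routes every flow centrally, then any switch left carrying no traffic is a candidate to be powered down.

```python
import heapq

def shortest_path(graph: dict, src: str, dst: str) -> list:
    # Dijkstra over the controller's complete network map.
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

graph = {  # four switches with unit link costs
    "s1": {"s2": 1, "s3": 1},
    "s2": {"s1": 1, "s4": 1},
    "s3": {"s1": 1, "s4": 1},
    "s4": {"s2": 1, "s3": 1},
}
flows = [("s1", "s4"), ("s1", "s2")]
used = set()
for src, dst in flows:
    used.update(shortest_path(graph, src, dst))
idle = set(graph) - used  # switches carrying no flow can sleep
print("power off:", idle)
```

Because the map lives in one place, consolidating both flows onto the s1-s2-s4 side of the network, and switching off the unused s3, takes a few lines rather than a distributed protocol change, which is the "trivial to find new paths" property McKeown describes.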
This sort of research has caught the attention of big companies, and is one reason why the ONF was formed. Google is interested in speeding up the networks that connect its data centers. These data centers generally communicate through specified paths, but if a route fails, traffic needs to be rerouted, says Urs Hoelzle, senior vice president of operations at Google. Using standard routing instructions, this process can take 20 minutes. If Google had more control over how the data flowed, it could reroute within seconds, Hoelzle says.
Cisco, a company that builds the hardware that routes much of the data on the Internet, sees ONF as a way to help customers build better Internet services. Facebook, for example, relies on Cisco hardware to serve up status updates, messages, pictures, and video to hundreds of millions of people worldwide. "You can imagine the flood of data," says McNab.
Future ONF standards could let people program a network to get different kinds of performance when needed, says McNab. Building that sort of functionality into Cisco hardware could make it more appealing to Internet services that need to be fast.
The first goal of the ONF is to take over the specifications of OpenFlow, says McKeown. As a research project, OpenFlow has found success on more than a dozen campuses, but it needs to be modified so it can work well at various companies. The next step is to develop easy-to-use interfaces that let people program networks just as they would program a computer or smart phone. "This is a very big step for the ONF," he says, because it could increase the adoption of standards and speed up innovation for network applications. He says the process could take two years.
In the meantime, companies including Google, Cisco, and others will test open networking protocols on their internal networks—in essence, they'll be testing out a completely new kind of Internet.
Copyright Technology Review 2011.
Wednesday, March 09. 2011
Situationist is an iPhone app that attempts to make your everyday life more experimental and unpredictable. It is inspired by the Situationist International of the 1950s, who advocated experiences of life alternative to those admitted by the capitalist order, in pursuit of the fulfillment of primitive human desires and a superior passional quality. The app takes on the "situation" element of the movement, attempting to create random rendezvous and interactions with strangers to induce the unpredictable.
Using the iPhone and its geolocation features, the app alerts members to each other's proximity and gets them to interact in random "situations". These situations vary from the friendly "Hug me for 5 seconds exactly" or "Compliment me on my haircut" to the subversive, e.g. "Help me rouse everyone around us into revolutionary fervour and storm the nearest TV station". Members simply upload their photo and pick the situations they want to happen to them from a shortlist, in the knowledge that they might then occur anywhere, and at any time.
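The proximity matching such an app relies on can be sketched as follows. This is purely illustrative (the actual Situationist implementation is not public): compare the great-circle distance between members' reported coordinates and flag any pair close enough for a rendezvous.

```python
import math

def distance_m(lat1, lon1, lat2, lon2) -> float:
    # Haversine great-circle distance in meters.
    r = 6371000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(members: dict, radius_m: float = 200) -> list:
    """members maps name -> (lat, lon); return pairs within the radius."""
    names = sorted(members)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if distance_m(*members[a], *members[b]) <= radius_m]

members = {
    "ann": (51.5007, -0.1246),  # ~50 m from ben
    "ben": (51.5010, -0.1240),
    "cam": (51.5200, -0.1000),  # kilometers away
}
print(nearby(members))  # [('ann', 'ben')]
```

A real app would run this server-side against periodically reported locations and push a notification, with one of the members' chosen "situations," to each matched pair.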
Situationist was created by Ben Carey and Henrik Delehag aka Benrik.
Benrik’s mission is to remodel the current world into something more to their liking, adding idea by idea to the sum total of radical thinking and inspiration. They work across all forms of cultural production, looking for cracks, openings and other loopholes.
Thursday, October 28. 2010
via Cyber Badger Research Blog
-----
We [ Martin Dodge & Rob Kitchin ] are just checking the page proofs for our book Code/Space that is nearing publication after a somewhat slow production process through much of this year.
The book has an ISBN and is now listed on the MIT Press website, although we are waiting to see what cover design is going to be applied.
Code/Space is structured in four sections and has eleven chapters:
- I Introduction
- 1 Introducing Code/Space
- 2 The Nature of Software
- II The Difference Software Makes
- 3 Remaking Everyday Objects
- 4 The Transduction of Space
- 5 Automated Management
- 6 Software, Empowerment, and Creativity
- III The Transduction of Everyday Spatialities
- 7 Air Travel
- 8 Home
- 9 Consumption
- IV Future Code/Space
- 10 The Promise of Everyware
- 11 A Manifesto for Software Studies
- Brief Glossary of Concepts
- References
- Index
More details on our ongoing research on code/space and various published papers are listed on this webpage.
Friday, October 08. 2010
Via TFTS
Whilst few (and especially Apple) would dispute that HTML5 is the future of the web (Apple, you may recall, sees HTML5 as an outright replacement for Adobe's Flash, though Adobe is itself openly embracing the HTML5 standard), it seems HTML5 is still not ready for full web deployment: the World Wide Web Consortium (W3C) has confirmed that it is running into interoperability issues, meaning the standard is not yet "ready for production".
“The problem we’re facing right now is there is already a lot of excitement for HTML5, but it’s a little too early to deploy it because we’re running into interoperability issues,” says W3C’s Philippe Le Hegaret. “The real problem is can we make [HTML5] work across browsers and at the moment, that is not the case.”
This standpoint has been further backed up by industry analyst Al Hilwa of IDC, who highlights that, to date, mainstream browsers (notwithstanding betas) are still not yet where they need to be for when the new standard hits. "HTML 5 is at various stages of implementation right now through the Web browsers. If you look at the various browsers, most of the aggressive implementations are in the beta versions," Hilwa observes. "IE9 (Internet Explorer 9), for example, is not expected to go production until close to mid-next year. That is the point when most enterprises will begin to consider adopting this new generation of browsers."
Interestingly, as an aside, whilst Le Hegaret sees HTML5 as a "game changer" and doesn't doubt that HTML5 will impact the use of Adobe's Flash on websites once the new standard is wholly adopted (HTML5 features integrated support for video and 2D canvas), he still sees a place for both Flash and other comparable technologies (Microsoft's Silverlight, for example). Apple's stance is, as alluded to previously, somewhat more hardline on this issue; if you need insight into Apple's (in particular Jobs's) position, you can read more here, or scroll down to the related posts section for yet more reading on the subject.
"We're not going to retire Flash anytime soon," says Le Hegaret, who adds that "you will see less and less websites using Flash" as HTML5 becomes the standard for website development.
So, quite when can we expect HTML5, which initially began development in 2004, to hit the open web and become the de facto standard for web development? It seems we are still looking at a two-to-three-year timeframe, though Le Hegaret has confirmed that the standard should be "feature-complete" by the middle of next year.
Via SlashGear
Adobe’s cross-platform AIR application system has shown up for download for Android devices, giving developers using the framework a fifth platform on which their code can run. The new release – which can be found in the Android Market – means that AIR apps that run on Mac, Windows, Linux and iOS systems will now also be functional on Android devices.
Apps themselves will be distributed via the Android Market as usual, and as long as the user has the AIR runtime installed they’ll load just like regular apps do. An Android 2.2 Froyo device is required, and the apps themselves need to be formatted to suit a mobile device.