Amazon is tipped to be preparing its own Android marketplace, challenging the official Android Market with its own developer offering and hoping to lure in coders with the possibility of being featured on the retailer’s well-trafficked site. Meanwhile there’s also talk of an Amazon tablet, produced alongside rather than replacing the Kindle, and itself running Android.
Developers will be charged $99 to take part, and will receive either 70 percent of the purchase price or 20 percent of the list price (intended, apparently, to stop coders from selling their apps cheaper elsewhere). In return they’ll be expected to update the Amazon app store versions of their software at the same time as they do for the Android Market and other stores, and they’ll have to accept the retailer’s DRM.
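One plausible reading of that payout rule is that the developer gets whichever amount is greater, which is what would blunt the incentive to underprice the app elsewhere. A small sketch of that reading (the function name and the greater-of interpretation are assumptions, not Amazon's published terms):

```python
def developer_payout(purchase_price, list_price):
    """Hypothetical payout rule: the developer receives 70% of the
    actual purchase price or 20% of the list price, whichever is
    greater (an assumed reading of the reported terms)."""
    return max(0.70 * purchase_price, 0.20 * list_price)

# An app listed at $1.00 but discounted to $0.10 would still pay
# out against the list price, discouraging undercutting.
full_price_payout = developer_payout(1.00, 1.00)   # 0.70
discounted_payout = developer_payout(0.10, 1.00)   # 0.20
```

Under this reading, slashing the sale price never drops the payout below a fifth of the list price.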
The store will be US only, at least to begin with, and Amazon keeps executive control over how apps are priced; if they don’t like your numbers, they can change them or even pull the app altogether. Details on the tablet, meanwhile, are pretty much a mystery, though TechCrunch’s source has apparently got a reasonable history of accurate tips.
A Yahoo Research tool mines news archives for meaning--illuminating past, present, and even future events.
By Tom Simonite
Time traveler: Time Explorer shows coverage relevant to a search term over time.
Credit: Yahoo Research
Showing news stories on a timeline has been tried before. But Time Explorer, a prototype news search engine created in Yahoo's Barcelona research lab, generates timelines that stretch into the future as well as the past.
Time Explorer's results page is dominated by an interactive timeline illustrating how the volume of articles for a particular search term has changed over time. The most relevant articles appear on the timeline, showing when they were published. If the user moves the timeline into the future, articles appear positioned at any point in time the text might have referred to.
This provides a new way to discover articles, and also a way to check up on past predictions. The timeline for 2010 becomes a way to discover a 2004 Op-Ed suggesting that by now, North Korea would have constructed some 200 nuclear warheads, or a 2007 article accurately predicting difficult policy decisions for Democrats over the expiration of George Bush's tax cuts.
News organizations are increasingly turning to new ways of presenting their content, including through enhanced forms of search. A Pew research study in 2008 found that 83 percent of people looking for news online use a search engine to find it.
Time Explorer can spot both absolute references to future times, such as "November 2010," and work forward from an article's publication date to figure out relative timings like "an election next month." It also extracts names, locations, and organizations mentioned in articles. These are shown in a box to the right of the results; they can be used to add a person or other entity to the timeline, and to fine-tune results to home in on combinations of particular people or places.
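The two kinds of temporal reference described above, absolute ones like "November 2010" and relative ones like "next month" resolved against the publication date, can be sketched in a few lines. This is an illustrative toy, not Yahoo's actual extraction pipeline, and the function names are invented for the example:

```python
import re
from datetime import date

MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june", "july",
     "august", "september", "october", "november", "december"])}

def absolute_refs(text):
    """Find absolute references such as 'November 2010' and return
    them as dates (pinned to the first of the month)."""
    pattern = r"\b(%s)\s+(\d{4})\b" % "|".join(MONTHS)
    return [date(int(m.group(2)), MONTHS[m.group(1).lower()], 1)
            for m in re.finditer(pattern, text, re.I)]

def relative_ref(text, published):
    """Resolve a simple relative reference like 'next month' against
    the article's publication date; returns None if none is found."""
    if re.search(r"\bnext month\b", text, re.I):
        years, month = divmod(published.month, 12)
        return date(published.year + years, month + 1, 1)
    return None
```

A real system would handle far more expressions ("early next year", "within a decade"), but the principle, anchoring relative times to the byline date, is the same.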
"You can see for wars or any other event not only the people that are important, but when they became important," says Michael Matthews, a member of the Yahoo research team. "The evolution of news over time is not something you can do very easily with tools that are out there today."
Time Explorer was built using a collection of 1.8 million articles, stretching from 1987 to 2007, released by the New York Times to stimulate research into new ways of exploring news coverage. Time Explorer was presented, along with other ideas for using the same dataset, at a session of the Human Computer Interaction and Information Retrieval (HCIR) workshop in New Brunswick, NJ, over the weekend. Time Explorer won the most votes from attendees for best use of the Times articles.
Other tools presented at HCIR attempted to assess the authority of people mentioned in an article, determine phrases related to a search term, and rapidly pull together a page summarizing the latest news on a particular topic, for example a celebrity or country.
"For most news search engines, recency is a significant factor for relevance," says Daniel Tunkelang, a tech lead at Google's New York office who chaired the challenge session. "Time Explorer brings an exploratory perspective to the time dimension, letting users see the evolution of a topic over time."
"The slick visualization allows users to discover unexpected relationships between entities at particular points in time--for example, between Slobodan Milosevic and Saddam Hussein," says Tunkelang. Refining a search for the term "Yugoslavia" with the two leaders reveals how, at first, Hussein appears as a point of comparison in coverage of the Serbian leader, but later the two leaders were directly involved, with stories reporting arms deals between them.
Although Time Explorer currently only works with old news, it could also be used to explore new coverage, and to put it in context, says Matthews. "It would be tough to update in real time, but it could certainly be done daily, and I think that would be useful for sure."
He says the service would be best deployed as a tool that works off of the topics in a breaking story. A person reading a news report about, say, Medicaid would find it useful to see the history of coverage on the topic, as well as the predictions made about its future, says Matthews. "It's like a related-articles feature, but focused in the future." He and colleagues are working on adding more up-to-date news sources, as well as content from blogs and other sites to Time Explorer's scope.
The Times has digitized and made searchable its content going back to 1851, yet today's search technologies and interfaces are not up to the task of making such large collections explorable, says Evan Sandhaus, a member of the New York Times Research and Development Labs who oversaw the release of the article archive in late 2008.
"We can say, 'show me all the articles about Barack Obama,' but we don't have a database that can tell us when he was born, or how many books he wrote," says Sandhaus, who adds that tools developed to process the meaning of news articles could have wider uses. "That resource will not only help the research community move the needle for our company but for any company with a large-scale data-management problem."
With most organizations harboring millions of text documents, from e-mails to reports, smarter tools to handle them would likely be popular, Matthews says. "In theory, the underlying algorithms should work on anything, perhaps with a little tweaking."
Free software pioneer Richard Stallman spoke with us recently about the principles of free and open source programs, and what he had to say is as relevant and revolutionary as when he first started working in this field 30 years ago.
Our community has been talking a lot lately about what it means to be open, about what makes software open, about what makes companies open. No matter what talk of “openness” you hear in the media, no major web company — not Facebook, not Google, not Adobe and certainly not Apple — is creating truly free and open applications. Some may make gestures toward this ideology with APIs or “open source” projects, but ultimately, the company controls the software and the users’ data.
At the end of the day, if you want freedom and privacy, the only way to attain those goals is to abstain from proprietary software, including media players, social networks, operating systems, document storage, email services and any other program that is licensed, patented and locked down by a corporation. If you prefer convenience — well, best to stop complaining about your loss of freedom and/or privacy.
Like many heroes of the digital era, Richard Stallman is largely unsung by the general populace. Yet when it comes to user privacy and technological freedom, he’s probably one of the most committed individuals in the world.
By freedom, he means four things:
The software should be free to run, for any purpose.
The software should be free to modify.
The software should be free to share with others.
The software should be free to change and redistribute copies of the changed software.
Stallman started the Free Software Foundation. He even worked to make an operating system (GNU/Linux) that could be entirely free. And he is deeply opposed to proprietary software, software whose restrictive licenses fly in the face of everything he calls freedom.
If you’ve ever downloaded music illegally, if you’ve ever complained about closed platforms, if you’ve ever gotten a serial number online for software you didn’t buy, if you’re worried about social networks controlling your data, you need to hear what Stallman has to say.
We got the chance to interview Stallman extensively at WordCamp San Francisco, and we’ll be posting segments of that interview each week. Stay tuned for insights on music sharing, Apple versus Adobe and more.
Note: Stallman asked that we use Ogg Theora, an open format, for encoding this video. To download the original video, go to its Wikimedia page. This video is published under a Creative Commons-No Derivatives license.
Personal comment:
There's a debate now on Mashable about what is open versus what is proprietary in technology, especially now that "open" has become a marketing term full of lies.
An interesting debate to follow, in particular for us in the perspective of two projects we are currently working on: one where we'll argue for an open approach to technology in public spaces (all of them, including outer space! I-Weather as Deep Space Public Lighting), and one where we'll take a critical approach to surveillance (technologies) in public space (a future "Paranoid Shelter" project we have been working on for a while), followed perhaps a bit later by a collaboration ("Globale Paranoïa") with French writer and essayist Eric Sadin.
It's really a whole big debate, and quite complex, just as the question of freedom is complex. For our part, we are perhaps interested in a less complex question at first: that of public space (public does not mean free, so to speak; it is shared in different ways and allows for, or offers, this and that, but not everything).
The smartphone market is very hot right now, with handsets selling well and many different companies competing. The open source Android OS is doing very well against the proprietary iPhone OS and Windows Mobile. The most widely used smartphone OS in the world, however, is Symbian, and the Symbian Foundation announced today that its open source migration is complete.
The Symbian OS has been developed for more than ten years and has shipped on more than 330 million devices. The entire source code for the OS is now open source and available to anyone who wants to download it at no charge.
The code can now be used and modified by anyone for any purpose, from mobile phones to other types of gear. The move was made to position Symbian for growth and faster time to market. I wonder if we will see the Symbian OS start to pop up on consumer electronic devices such as tablets, as Android is doing. The use of the software is governed by the Eclipse Public License and other open source licenses.
With BIM, project collaboration became possible. But online collaboration over CAD drawings hasn’t been easy so far, with the available applications working very slowly or only in limited ways.
But a new project coming out of Autodesk Labs promises to make online CAD collaboration feasible: Autodesk Butterfly.
As you can see in the above video, sharing works in a very simple way over email, allowing people to work on a drawing at the same time. I gave Autodesk Butterfly a try at their technology preview website (no need to sign in, just hit Try Now) and the Flash interface loaded fast. It is very intuitive to use if you have used AutoCAD, and the version control system allows you to go back to previous states of the drawing.
An iPhone version would be great, as it would allow you to do on-site review and annotations. Anyway, the preview looks very strong, and maybe we will have a full version of this tool available soon.
A whole new era of architecture collaboration and online plan sharing? By extrapolation: community design, architects-social networks and open source plans?
Pachube is a web service available at http://www.pachube.com that enables you to store, share & discover realtime sensor, energy and environment data from objects, devices & buildings around the world. Pachube is a convenient, secure & scalable platform that helps you connect to & build the 'internet of things'.
As a generalized realtime data brokerage platform, the key aim is to facilitate interaction between remote environments, both physical and virtual. Apart from enabling direct connections between any two environments, it can also be used to facilitate many-to-many connections: just like a physical "patch bay" (or telephone switchboard) Pachube enables any participating project to "plug-in" to any other participating project in real time so that, for example, buildings, interactive environments, networked energy meters, virtual worlds and mobile sensor devices can all "talk" and "respond" to each other.
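The "patch bay" idea described above, any participating feed routed to any number of listening environments in real time, can be sketched as a minimal in-memory broker. This is an illustration of the routing concept only; Pachube itself works as a hosted web service, and all names here are invented for the example:

```python
from collections import defaultdict

class PatchBay:
    """Toy many-to-many router: any feed can be patched into any
    number of subscribers, like a telephone switchboard."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def patch(self, feed_id, callback):
        """Plug a subscriber into a feed."""
        self._subscribers[feed_id].append(callback)

    def publish(self, feed_id, value):
        """Push a new datapoint to everyone patched into the feed."""
        for callback in self._subscribers[feed_id]:
            callback(value)

# A building's energy meter feeding both a log and a reactive display:
bay = PatchBay()
log = []
bay.patch("energy-meter-42", log.append)
bay.patch("energy-meter-42", lambda kwh: print("dim the lights at", kwh))
bay.publish("energy-meter-42", 3.7)
```

The many-to-many property falls out for free: publishing once fans the value out to every patched environment, whether that is a virtual world, a networked meter, or a mobile sensor.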
The Pachube community and environment are growing. Can it become a sort of equivalent of Processing (to which it connects, and by which it is inspired) for "intelligent" environments and the "internet of things"?
Nokia is reportedly working with Korean developers on a face recognition application for some of its upcoming phones.
The application is designed to identify the faces of people in photos taken with your mobile phone. All the user needs to do is identify a face in his/her album just once. After that, the application performs a search and identifies all similar-looking faces stored on the phone. Apart from face recognition, it will also offer time- and location-based features.
The application will make its debut with phones running Series 60 OS and having at least a 3-megapixel camera.
---
Like Nokia last week, Qualcomm has been talking up the future of phones, and the company gave us a sneak peek of what to expect a short while down the road: face recognition technology tied to social network search technology, so you can find out what a stranger just tweeted simply by pointing your handset at them. Is Facebook stalking about to get a whole lot worse?
We’ve seen in-picture face recognition start to appear in mobile phones recently, with Sony Ericsson promising the tech will make it into the Xperia X10 early next year. But Qualcomm reckons it’s going to get much more integrated and advanced.
Qualcomm Snapdragon tablet concept revealed
Gilbert admitted that the possibility raised serious privacy issues – you could theoretically pull up a person’s home address through automatic whois requests – but ethics aside, it’s an interesting next step for augmented reality apps, which layer data over the surroundings and have started to take off in a big way over the last year. As phones get faster and more powerful, what’s to stop people integrating this form of search?
---
Apple iPhoto
---
Face.com is opening its photo-tagging system, based on facial-recognition technology, to Facebook members.
Face.com
Face.com’s Photo Tagger app uses facial-recognition technology to help Facebook members tag photos of their friends.
Photo Tagger, which launched to a limited group of users in July, scans a user’s photo albums on the social-networking site, then lets him tag faces it identifies. It groups multiple shots of each person, making it easy to tag large albums, and users can also adjust or remove incorrectly tagged pictures.
Once a member has been identified, the app prompts him or her to approve the tag — a crucial privacy step, since he or she may not want to be labeled in a photo. It also works with a member’s current photo-privacy settings on Facebook.
For users with lots of friends and photos, Photo Tagger helps them spread the word and ensure that their contacts see relevant shots on their news feeds, said Gil Hirsch, Face.com’s chief executive.
“If you don’t tag an album, people don’t know about it,” he said. “What we’re doing is basically supporting the existing mechanism and augmenting it.”
The Tel Aviv-based company’s facial-recognition technology specializes in pictures in which the subject isn’t looking at the camera, as well as low-resolution or out-of-focus images, “what we refer to as everyday photos,” Mr. Hirsch said. “All these different things that make photos real.”
Photo Tagger is free, though he said Face.com is considering fee-based services that it could provide over the system. He declined to say what they might be.
Face.com is also introducing a new Photo Tagger feature, dubbed Face Alerts, along with the launch. It allows members to be notified through Facebook or email when new public photos are uploaded of them or their friends. “It’s a Google Alerts for faces,” Mr. Hirsch said, and a way for members to gain more control over where their image appears.
The app has taken off even in its alpha phase — he said more than 35,000 people have tried it. TechCrunch called it a “time vampire” because of its addictive nature, though VentureBeat noted that it might even work too well, and that it could lead to more people tweaking their privacy settings to avoid the limelight.
Mr. Hirsch said the company is sensitive to privacy concerns. “While for some folks face-recognition technology feels a little creepy, we’ve had no complaints or direct issues with the applications,” he added. “In fact, we’ve had quite a few privacy-aware users who tried out our apps, and their feedback was very positive, and with the new Face Alerts feature felt it actually helps them gain control of their online image.”
Personal comment:
A set of articles concerning the same facial detection and recognition schemes. The technology seems to be available or coming soon on social networks, regular computer applications and mobile phones, with bridges from one platform to another.
Pushed to its dark side, this technology will make it possible for any one of us to obtain the name, the address, or any other kind of data about a person we merely see in the street, just by taking a simple picture.
As the technology seems to be planned for huge deployment, people will be able to file/record their own personal/biometric data on all kinds of devices and social networks before legal authorities have any chance to react.
Researchers plan to offer more than just directions with innovations in software and hardware.
By Kristina Grifantini
Augmented games: In this game, developed by researchers at Columbia University, a player holds a flat board and sees three-dimensional objects projected onto it through a head-worn display. The player tilts the game board to control a virtual ball.
Credit: Ohan Oda and Steve Feiner, Columbia University
Augmented reality (AR), which involves superimposing virtual objects and information on top of the real world, may be coming to a phone near you. As mobile phones become packed with more sensors, better video capabilities, and faster processing power, many experts predict that AR will become increasingly common. But in a panel discussion today at EmTech@MIT in Cambridge, MA, panelists will admit that several obstacles still remain and that the "killer app" for augmented reality has yet to emerge.
Several AR apps have already been released for cell phones with positioning sensors. For example, PresseLite's Metro Paris app and Acrossair's Nearest Tube both provide iPhone users with augmented directions to nearby subway stops. AR apps are also available for phones powered by Google's Android platform. Layar, developed by SPRXmobile, based in the Netherlands, overlays information from Twitter, Flickr, and Wikipedia on real-world locations, while Wikitude, from Austria-based Mobilizy, displays tourist information collected from Wikipedia.
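Location-based AR apps like those above work by combining the phone's GPS fix and compass heading: compute the bearing from the user to each point of interest, then draw only those that fall inside the camera's field of view. A minimal sketch of that geometry (the function names and the 60° field of view are assumptions for the example):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the phone's position to a
    point of interest, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def in_view(poi_bearing, heading, fov=60):
    """True if a POI falls inside the camera's horizontal field of
    view, given the compass heading the phone is pointing at."""
    diff = (poi_bearing - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2
```

The noisy part in practice is not this arithmetic but the sensors feeding it, which is exactly the limitation the panelists point to below.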
Some researchers believe that AR represents a fundamentally new way to organize and interact with information. "In the future, we see augmented reality as a component of any kind of digital media interaction," says Mobilizy's CEO, Alexander Igelsboeck, who will speak at the EmTech@MIT session.
This week Mobilizy released a new language for AR called Augmented Reality Mark-up Language (ARML). With ARML, Mobilizy hopes to make it easier for programmers to create location-based content for AR applications. The company envisions ARML as equivalent to HTML for the Web, and Igelsboeck emphasizes the importance of open content and standardization for AR to take off. "We want to open those standards to be available for developer communities that can create innovative applications around this augmented experience," he says.
But many challenges still remain. For instance, the positioning technology currently available in cell phones falls short for sophisticated AR applications. The GPSs built into smart phones "were really not designed for AR," says panelist Steven Feiner, a professor of computer science at Columbia University. "They were designed for simpler applications."
Feiner, who has worked on AR for over a decade, notes that early examples of AR required wearing a computer backpack and using cumbersome head-mounted displays. "[But] the tracking that we used [in 2001] was much, much better," he says.
Feiner is focusing on less-mainstream applications for AR--he has developed one program that shows levels of carbon monoxide in Manhattan (see image above), and another that shows virtual labels for engineers--for example, a floating tag that says, "Remove this bolt using a 1/4 inch socket wrench". He adds that better object recognition and posture tracking, as well as a way to deal with direct sunlight, will help AR become more practical.
Another potential obstacle for AR is social acceptance. While people already text or check e-mail while they walk, looking through a phone can be awkward. Feiner suggests that well-designed goggles could help. "There's a very high bar of what people are willing to wear on their heads," he says.
Pollution visualized: Another application developed at Columbia shows carbon monoxide levels projected over New York City. The height of each ball reflects concentrations of the pollutant.
Credit: Sean White and Steve Feiner, Columbia University
Last spring, a group at the MIT Media Lab demoed an interface that avoids the need to look at a display altogether. Graduate student Pranav Mistry, a 2009 TR35 winner, developed SixthSense, a device that combines a webcam and a projector worn around the neck, along with colored markers on the fingers, to recognize a user's gestures and project information onto surfaces. (See a TR video of SixthSense in action here.)
"Your world can be augmented without you having to change your behavior and do anything extra [like] taking out your cell phone and starting an application," says MIT professor Pattie Maes, who heads the SixthSense project. Maes's group is also exploring technical applications for AR. "If my car stops working, I might open the hood and an expert might remotely see what I see and [then] project information in front of the engine, saying things like, 'Open this valve,'" explains Maes.
Nokia's Mobile Augmented Reality Applications and Mixed Reality Experiences projects aim to use a combination of hardware in AR applications. Ville-Veikko Mattila, a principal researcher at Nokia Research Center, believes that combining visual and audio information could be most practical. "I think it's clear that people won't be walking and holding a device upright. Therefore, the use of audio may be more intuitive," he says.
Mattila adds that AR could potentially combine social information and location-based services to give user-tailored recommendations. For example, an application could show what your friends think of a particular restaurant, instead of providing a guidebook's reviews.
"There's a lot of hype obviously," Feiner says. But ultimately he agrees that AR may be able to help people with their daily lives. "Like being able to get somewhere, find information, or recognize a face of a person you know, but can't remember the name of," he says.
Music fans can now play the Moon like a DJ spins a record, using a new online programme that generates melodies from its rugged surface.
Moonbell, which is free to use, exploits precise topographical data supplied by a Japanese satellite to create endless loops based on rises and falls in the Moon's terrain.
Users can either play a full orbit or select the "free scratch" mode, which allows them to map their own routes across the Moon's surface.
Like a record player, Moonbell translates the bumps and ridges it detects into musical notes.
The resulting compositions can be interpreted by any combination of more than 138 instruments, but explorers hoping to produce an orchestral masterpiece may be disappointed.
All of the Telegraph's attempts on the software sounded dispiritingly similar – more like indulgent free jazz than Pink Floyd's The Dark Side of the Moon.
The software works by interpreting information provided by the Japan Aerospace Exploration Agency's Kaguya satellite, which used a laser altimeter to generate detailed maps of the Moon until its planned crash in June this year.
The music produced by Moonbell synthesises three types of topographical data. The melody is generated by the actual ups and downs in the Moon's surface, while the "mid tones" are related to the elevation of the immediately surrounding area and the bass line is determined by an even broader section of elevation.
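The layered mapping described above, local relief for the melody and broader averages for the mid tones and bass, can be sketched by quantising elevation samples onto MIDI pitches at three levels of smoothing. The pitch ranges and window sizes here are illustrative guesses, not Moonbell's actual parameters:

```python
def moving_average(samples, window):
    """Broader elevation context (for mid tones and bass): average
    each sample with its neighbours over a wider window."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window): i + window + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def to_notes(samples, low=48, high=84):
    """Scale elevations (metres) linearly onto MIDI note numbers,
    so higher terrain plays higher pitches."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1
    return [low + round((s - lo) / span * (high - low)) for s in samples]

track = [120, 340, 90, 560, 410, 220]       # raw altimeter samples
melody = to_notes(track)                    # actual ups and downs
mid = to_notes(moving_average(track, 2))    # surrounding elevation
bass = to_notes(moving_average(track, 4), low=36, high=48)
```

Playing the three note lists together gives the record-player effect: the melody tracks every bump and ridge, while the smoothed lines move slowly underneath.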
This is not the first online tool to make use of Kaguya since its launch in 2007 – Google Earth's 3D Moon option is also based on the information sent back by the satellite.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.