Tuesday, August 02. 2016
By fabric | ch
As this blog still lacks a decent search engine and we don't use a "tag cloud" ... this post may help you navigate the updated content on | rblg (as of 07.2016), via all its tags!
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG:
(visible just below if you're browsing the blog's page, or here for RSS readers)
Posted by Patrick Keller in fabric | ch at 16:58
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, citizen, city, climate, clips, code, cognition, collaboration, commodification, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, experience, experimentation, fabric | ch, farming, fashion, fiction, films, food, form, franchised, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, kinetic, knowledge, landscape, language, law, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, monography, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, resources, responsive, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, 
services, sharing, shopping, signage, smart, social, society, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, textile, theory, thinkers, thinking, time, tools, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, voice, vr, war, weather, web, wireless, writing
Monday, June 13. 2016
Note: after a few weeks of posts about the Universal Income, here comes the "Universal data accumulator for devices, sensors, programs, humans & more" by Wolfram (best known for the Wolfram Alpha computational engine and Mathematica, on which most of their other services seem to be built).
Funnily, we picked a very similar name for a very similar data service that we set up for ourselves and our friends last year, during an exhibition at H3K: Datadroppers (!), with a different set of references in mind (Drop City? --from which we borrowed the colors-- "Turn on, tune in, drop out"?). Even if our service is logically much more grassroots and less developed, it is therefore quite light to use as well.
We developed this project around data dropping/picking with another architectural project in mind that I'll speak about in the coming days: Public Platform of Future-Past. It was clearly and closely linked.
"Universal" is therefore back in the loop as a keyword... (I would rather adopt a different word for myself and the work we are doing, though: "Diversal" --a word I've been using for two years now and naively thought I had "invented", but no...)
"The Wolfram Data Drop is an open service that makes it easy to accumulate data of any kind, from anywhere—setting it up for immediate computation, visualization, analysis, querying, or other operations." - which looks more oriented towards data analysis than use in third party designs and projects.
"Datadroppers is a public and open data commune, it is a tool dedicated to data collection and sharing that tries to remain as simple, minimal and easy to use as possible." A direct and lightweight data tool for designers, belonging to designers (fabric | ch) who use it for their own projects...
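For the curious, "dropping" a reading into such a service is typically just an HTTP call. Here is a minimal sketch modelled on the general shape of Wolfram's documented Data Drop "Add" endpoint; the bin ID and readings are placeholders, and Datadroppers' own API may differ:

```python
# Hedged sketch: pushing a key/value reading to a Data-Drop-style
# HTTP "Add" endpoint. "EXAMPLE_BIN" and the readings are placeholders,
# not real credentials.
from urllib.parse import urlencode

def build_add_url(base, bin_id, readings):
    """Build the GET URL that appends `readings` (a dict) to a databin."""
    params = {"bin": bin_id, **readings}
    return f"{base}/Add?{urlencode(params)}"

url = build_add_url("https://datadrop.wolframcloud.com/api/v1.0",
                    "EXAMPLE_BIN", {"temperature": 21.5, "room": "studio"})
print(url)
```

Once a bin accumulates readings, the same service exposes them for querying or visualization; the appeal for designers is precisely that the whole round trip is this light.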
Tuesday, June 07. 2016
Note: I've posted several articles about automation recently. This was an occasion to continue collecting some thoughts on the topic (automation, then), as well as on the larger social implications it might trigger.
But it was also a "collection" that took place at a special moment in Switzerland, when we had to vote on the "Revenu de Base Inconditionnel" (Unconditional Basic Income). I mentioned it in a previous post ("On Algorithmic Communism"), in particular the relation that is often made between this idea (Basic Income / Universal Income) and the probable evolution of work in the decades to come (less work for "humans" vs. more for "robots").
Well, the campaign and votation triggered very interesting debates among the population, but in the end, and predictably, the idea was largely rejected (~25% of voters accepted it, though some small geographical areas --mainly urban-- did accept it at more than 50%. Some were not far off; for example the capital, Bern, voted 40% in favour of the RBI).
This was a very new and probably too (?) early question for the Swiss population, but it will undoubtedly become a growing debate in the decades to come, a question with many important associated stakes.
Press talking about the RBI, image from RTS website.
Friday, May 27. 2016
Note: "(...) For example, technologists might be held responsible if they use poor quality data to train AI systems, or fossilize prejudices based on race, age, or gender into the algorithms they design."
Mind your data, and the data you'll use to "fossilize", so to speak (as long as you already know what's in your data)... The question is no longer "if" you're collecting data, but "which" data you'll use to feed your AIs, and "how". Now that we clearly see that large corporations plan to use more and more of these kinds of techs to also drive "domestic" applications (and by extension, as we already know, "personal" applications of all sorts), it will be important to understand the stakes behind them, as they will become part of our social and design context.
An important problem that I can see for designers and architects is that if you don't agree with the principles --commercial, social, ethical and almost conceptual-- implied by these technologies (i.e. any "homekit"-like platform controlled by bots), you won't find many, if any, counter-propositions/techs to work with (all large-diffusion products will support iOS, Android and the likes). It is almost a dictatorship of products hidden behind a "participate" paradigm. Either you'll be in and accept the conditions (you might use an API provided with the service --FB, Twitter, IFTTT, Apple, Google, Wolfram, Siemens, MS, etc.-- but then feed the central company nonetheless), or out... or possibly develop your own solution(s), which will probably be a pain for your client to use, because it/they will clearly be side products that are hard to maintain, update, etc.
"Some" open source projects driven by "some" communities could be/become (should be) alternative solutions of course, but for now these are good for prototyping and teaching, not for consistent "domestic" applications... And when they do possibly get there, they will likely be bought. So we'll have "difficulties" as (interaction) designers, so to say: you'll work for your client(s)... and the corp. that provides the services you use!
The Obama administration is vowing not to get left behind in the rush to artificial intelligence, but determining how to regulate it isn’t easy.
By Mark Harris
Should the government regulate artificial intelligence? That was the central question of the first White House workshop on the legal and governance implications of AI, held in Seattle on Tuesday.
“We are observing issues around AI and machine learning popping up all over the government,” said Ed Felten, White House deputy chief technology officer. “We are nowhere near the point of broadly regulating AI … but the challenge is how to ensure AI remains safe, controllable, and predictable as it gets smarter.”
One of the key aims of the workshop, said one of its organizers, University of Washington law professor Ryan Calo, was to help the public understand where the technology is now and where it’s headed. “The idea is not for the government to step in and regulate AI but rather to use its many other levers, like coördination among the agencies and procurement power,” he said. Attendees included technology entrepreneurs, academics, and members of the public.
In a keynote speech, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, noted that we are still in the Dark Ages of machine learning, with AI systems that generally only work well on well-structured problems like board games and highway driving. He championed a collaborative approach where AI can help humans to become safer and more efficient. “Hospital errors are the third-leading cause of death in the U.S.,” he said. “AI can help here. Every year, people are dying because we’re not using AI properly in hospitals.”
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, left, speaks with attendees at the White House workshop on artificial intelligence.
Nevertheless, Etzioni considers it far too early to talk about regulating AI: “Deep learning is still 99 percent human work and human ingenuity. ‘My robot did it’ is not an excuse. We have to take responsibility for what our robots, AI, and algorithms do.”
A panel on “artificial wisdom” focused on when these human-AI interactions go wrong, such as the case of an algorithm designed to predict future criminal offenders that appears to be racially biased. “The problem is not about the AI agents themselves, it’s about humans using technological tools to oppress other humans in finance, criminal justice, and education,” said Jack Balkin of Yale Law School.
Several academics supported the idea of an “information fiduciary”: giving people who collect big data and use AI the legal duties of good faith and trustworthiness. For example, technologists might be held responsible if they use poor quality data to train AI systems, or fossilize prejudices based on race, age, or gender into the algorithms they design.
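One concrete, minimal version of that duty of care can be sketched in code: before training, screen the training labels for a glaring disparity across a sensitive attribute. The records and the 0.8 threshold (a "four-fifths rule"-style screen) below are illustrative, not a complete fairness audit:

```python
# Hedged illustration of one check behind the "poor quality data"
# concern: compare the favourable-label rate across a sensitive
# attribute before training on the data. The toy records are invented.
records = [
    # (group, label) -- label 1 = favourable outcome in the training data
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(rows, group):
    """Fraction of favourable labels for one group."""
    labels = [label for g, label in rows if g == group]
    return sum(labels) / len(labels)

rate_a = positive_rate(records, "A")
rate_b = positive_rate(records, "B")

# flag the dataset if one group's rate falls far below the other's
disparity = min(rate_a, rate_b) / max(rate_a, rate_b)
flagged = disparity < 0.8
```

A model trained on flagged data would simply learn the disparity as a pattern, which is exactly the "fossilizing" the panelists warn about.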
As government institutions increasingly rely on AI systems for decision making, those institutions will need personnel who understand the limitations and biases inherent in data and AI technology, noted Kate Crawford, a social scientist at Microsoft Research. She suggested that students be taught ethics alongside programming skills.
Bryant Walker Smith from the University of South Carolina proposed regulatory flexibility for rapidly evolving technologies, such as driverless cars. “Individual companies should make a public case for the safety of their autonomous vehicles,” he said. “They should establish measures and then monitor them over the lifetime of their systems. We need a diversity of approaches to inform public debate.”
This was the first of four workshops planned for the coming months. Two will address AI for social good and issues around safety and control, while the last will dig deeper into the technology’s social and economic implications. Felten also announced that the White House would shortly issue a request for information to give the general public an opportunity to weigh in on the future of AI.
The elephant in the room, of course, was November’s presidential election. In a blog post earlier this month, Felten unveiled a new National Science and Technology Council Subcommittee on Machine Learning and Artificial Intelligence, focused on using AI to improve government services “between now and the end of the Administration.”
Tuesday, May 24. 2016
Note: even people developing automation will be automated, so to say...
Do you want to change this existing (and predictable) future? Then this would be the right time to come up with counter-proposals...
But I'm quite surprised by the absence of nuanced analysis in the Wired article, btw (am I? beyond "make the world a better place", I mean): indeed, this is a scientific achievement, but then what? No stakes? No social issues? It seems this is simply the way things should go... (and some people know pretty well how "The Way Things Go", always wrong ;)), to the point that "No, Asimo isn’t quite as advanced—or as frightening—as Skynet." Good to know!
By Cade Metz
Deep neural networks are remaking the Internet. Able to learn very human tasks by analyzing vast amounts of digital data, these artificially intelligent systems are injecting online services with a power that just wasn’t viable in years past. They’re identifying faces in photos and recognizing commands spoken into smartphones and translating conversations from one language to another. They’re even helping Google choose its search results. All this we know. But what’s less discussed is how the giants of the Internet go about building these rather remarkable engines of AI.
Part of it is that companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art forward, and paying for these top minds is a lot like paying for an NFL quarterback. That’s a bottleneck in the continued progress of artificial intelligence. And it’s not the only one. Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don’t work, running each one across dozens and potentially hundreds of machines.
“It’s almost like being the coach rather than the player,” says Demis Hassabis, co-founder of DeepMind, the Google outfit behind the history-making AI that beat the world’s best Go player. “You’re coaxing these things, rather than directly telling them what to do.”
That’s why many of these companies are now trying to automate this trial and error—or at least part of it. If you automate some of the heavy lifting, the thinking goes, you can more rapidly push the latest machine learning into the hands of rank-and-file engineers—and you can give the top minds more time to focus on bigger ideas and tougher problems. This, in turn, will accelerate the progress of AI inside the Internet apps and services that you and I use every day.
In other words, for computers to get smarter faster, computers themselves must handle even more of the grunt work. The giants of the Internet are building computing systems that can test countless machine learning algorithms on behalf of their engineers, that can cycle through so many possibilities on their own. Better yet, these companies are building AI algorithms that can help build AI algorithms. No joke. Inside Facebook, engineers have designed what they like to call an “automated machine learning engineer,” an artificially intelligent system that helps create artificially intelligent systems. It’s a long way from perfection. But the goal is to create new AI models using as little human grunt work as possible.
Feeling the Flow
After Facebook’s $104 billion IPO in 2012, Hussein Mehanna and other engineers on the Facebook ads team felt an added pressure to improve the company’s ad targeting, to more precisely match ads to the hundreds of millions of people using its social network. This meant building deep neural networks and other machine learning algorithms that could make better use of the vast amounts of data Facebook collects on the characteristics and behavior of those hundreds of millions of people.
According to Mehanna, Facebook engineers had no problem generating ideas for new AI, but testing these ideas was another matter. So he and his team built a tool called Flow. “We wanted to build a machine-learning assembly line that all engineers at Facebook could use,” Mehanna says. Flow is designed to help engineers build, test, and execute machine learning algorithms on a massive scale, and this includes practically any form of machine learning—a broad technology that covers all services capable of learning tasks largely on their own.
Basically, engineers could readily test an endless stream of ideas across the company’s sprawling network of computer data centers. They could run all sorts of algorithmic possibilities—involving not just deep learning but other forms of AI, from logistic regression to boosted decision trees—and the results could feed still more ideas. “The more ideas you try, the better,” Mehanna says. “The more data you try, the better.” It also meant that engineers could readily reuse algorithms that others had built, tweaking these algorithms and applying them to other tasks.
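The "endless stream of ideas" pattern described above is easy to picture: fan candidate configurations out to workers, score them, and keep the best to seed the next round. The sketch below is a toy version of that loop, not Facebook's Flow itself; the scoring function stands in for a real train-and-evaluate job:

```python
# Illustrative sketch of a Flow-style loop: evaluate many candidate
# model configurations in parallel, keep the best few for refinement.
from concurrent.futures import ThreadPoolExecutor

def evaluate(config):
    """Stand-in for 'train this model on a cluster node and report a
    validation score' -- here just a made-up function of the config."""
    depth, width = config
    return 1.0 / (1 + abs(depth - 3)) + width / 100.0

# the batch of ideas to try this round
candidates = [(d, w) for d in range(1, 6) for w in (16, 32, 64)]

with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(evaluate, candidates))  # order matches inputs

# keep the top 3 configs to seed the next round of ideas
best = sorted(zip(candidates, scores), key=lambda cs: -cs[1])[:3]
```

In a real system each `evaluate` call would be a long-running distributed training job; the orchestration logic, though, is essentially this.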
Soon, Mehanna and his team expanded Flow for use across the entire company. Inside other teams, it could help generate algorithms that could choose the links for your Facebook News Feed, recognize faces in photos posted to the social network, or generate audio captions for photos so that the blind can understand what’s in them. It could even help the company determine what parts of the world still need access to the Internet.
With Flow, Mehanna says, Facebook trains and tests about 300,000 machine learning models each month. Whereas it once rolled a new AI model onto its social network every 60 days or so, it can now release several new models each week.
The Next Frontier
The idea is far bigger than Facebook. It’s common practice across the world of deep learning. Last year, Twitter acquired a startup, WhetLab, that specializes in this kind of thing, and recently, Microsoft described how its researchers use a system to test a sea of possible AI models. Microsoft researcher Jian Sun calls it “human-assisted search.”
Mehanna and Facebook want to accelerate this. The company plans to eventually open source Flow, sharing it with the world at large, and according to Mehanna, outfits like LinkedIn, Uber, and Twitter are already interested in using it. Mehanna and team have also built a tool called AutoML that can remove even more of the burden from human engineers. Running atop Flow, AutoML can automatically “clean” the data needed to train neural networks and other machine learning algorithms—prepare it for testing without any human intervention—and Mehanna envisions a version that could even gather the data on its own. But more intriguingly, AutoML uses artificial intelligence to help build artificial intelligence.
As Mehanna says, Facebook trains and tests about 300,000 machine learning models each month. AutoML can then use the results of these tests to train another machine learning model that optimizes the training of machine learning models. Yes, that can be a hard thing to wrap your head around. Mehanna compares it to Inception. But it works. The system can automatically choose algorithms and parameters that are likely to work. “It can almost predict the result before the training,” Mehanna says.
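The "predict the result before the training" idea can be sketched with something as crude as a nearest-neighbour meta-model over past runs. This is purely illustrative, not Facebook's actual AutoML; the past results below are invented:

```python
# Toy meta-model: use results of past training runs to rank new
# hyperparameter candidates before any training happens.
past_runs = [
    # (learning_rate, num_layers) -> observed validation accuracy
    ((0.1, 2), 0.71),
    ((0.01, 2), 0.78),
    ((0.01, 4), 0.83),
    ((0.001, 4), 0.80),
]

def predict_accuracy(candidate, runs):
    """1-nearest-neighbour prediction: an untried config is assumed to
    score like the most similar previously tried one."""
    def dist(a, b):
        # relative difference in learning rate + difference in layers
        return abs(a[0] - b[0]) / max(a[0], b[0]) + abs(a[1] - b[1])
    nearest = min(runs, key=lambda r: dist(candidate, r[0]))
    return nearest[1]

candidates = [(0.01, 3), (0.1, 4), (0.001, 2)]
# pick the candidate the meta-model expects to perform best
best = max(candidates, key=lambda c: predict_accuracy(c, past_runs))
```

The production version would of course use a far richer model over hundreds of thousands of runs, but the leverage is the same: past experiments become training data for choosing future ones.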
Inside the Facebook ads team, engineers even built that automated machine learning engineer, and this too has spread to the rest of the company. It’s called Asimo, and according to Facebook, there are cases where it can automatically generate enhanced and improved incarnations of existing models—models that human engineers can then instantly deploy to the net. “It cannot yet invent a new AI algorithm,” Mehanna says. “But who knows, down the road…”
It’s an intriguing idea—indeed, one that has captivated science fiction writers for decades: an intelligent machine that builds itself. No, Asimo isn’t quite as advanced—or as frightening—as Skynet. But it’s a step toward a world where so many others, not just the field’s sharpest minds, will build new AI. Some of those others won’t even be human.
Wednesday, April 27. 2016
Note: we blogged last week about automation and, funnily, the Jacquard process was mentioned as one of the early stages of automation and computing during an exhibition in Wien. The "Métiers Jacquard" were an inspiration to Ch. Babbage when he designed his Analytical Engine, one of the early mechanical autonomous and programmable computers (in the sense of a calculator). We should also not forget that, originally, "computers" were real persons doing calculations -- often women (in particular during the world wars), who then became the first operators of automatic computers (see the ENIAC girls) -- until the middle of the 20th century.
So to say, digital computers have already replaced "human computers", automating and by far quickening their activities... The first purpose of the computer as we know it was automation. It is part of its DNA.
Now, as a wink to this history but also as a possible "return of the repressed", Google literally enters the textile business and brings computing (back) to fabrics! So it is not by chance that they've picked up this name obviously, "Jacquard".
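What makes the Jacquard loom proto-computational is that each punched card row is a fixed-width binary instruction selecting which warp threads lift. A toy sketch of that idea (the "cards" here are invented):

```python
# Each card row is a binary instruction: 1 = hole = lift this warp
# thread. "Executing" the card stack renders one woven row per card.
cards = [
    "10101010",
    "01010101",
    "11001100",
]

def weave(cards):
    """Render one row per card: '#' where the thread lifts, '.' elsewhere."""
    return ["".join("#" if hole == "1" else "." for hole in row)
            for row in cards]

for row in weave(cards):
    print(row)
```

Swap the card stack and the same machine produces a different pattern: the program/data separation, a century before electronic computers.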
More about it on MIT Technology Review.
Thursday, April 21. 2016
Note: the idea of automation is very present again recently. And it is more and more put together with the related idea of a society without work, or insufficient work for everyone --which is already the case in the liberal way of thinking btw--, as most of it would be taken by autonomous machines, AIs, etc.
Many people are warning about this (Bill Gates among them, talking precisely about "software substitution"), some think about a "universal income" as a possible response, some say we shouldn't accept this and should use our consumer power to reject such products (we spoke passionately about it with my good old friend Eric Sadin last week during a meal at the Palais de Tokyo in Paris, while drinking --almost automatically as well-- some good wine), some say it is almost too late and we should plan and have visions for what is coming upon us...
Now comes also an exhibition about the same subject at Kunsthalle Wien that tries to articulate the questions: "Technical devices that were originally designed to serve and assist us and are now getting smarter and harder to control and comprehend. Does their growing autonomy mean that the machines will one day overpower us? Or will they remain our subservient little helpers, our gateway to greater knowledge and sovereignty?"
Installation view The Promise of Total Automation. Image Kunsthalle Wien
The word ‘automation’ is appearing in places that would have seemed unlikely to most people less than a decade ago: journalism, art, design or law. Robots and algorithms are becoming increasingly convincing at doing things just like humans. And sometimes even better than humans.
The Promise of Total Automation, an exhibition recently opened at Kunsthalle Wien in Vienna, looks at our troubled relationship with machines. Technical devices that were originally designed to serve and assist us and are now getting smarter and harder to control and comprehend. Does their growing autonomy mean that the machines will one day overpower us? Or will they remain our subservient little helpers, our gateway to greater knowledge and sovereignty?
The Promise of Total Automation is an intelligent, inquisitive and engrossing exhibition. Its investigation into the tensions and dilemmas of the human/machine relationship explores themes that range from artificial intelligence to industrial aesthetics, from bio-politics to conspiracy theories, from e-waste to resistance to innovation, from the archaeology of digital communication to utopias that won’t die.
The show is dense in information and invitations to ponder, so don’t forget to pick up one of the free information booklets at the entrance of the show. You’re going to need it!
A not-so-quick walk around the show:
James Benning, Stemple Pass, 2012
James Benning‘s film Stemple Pass is made of four static shots, each from the same angle and each 30 minutes long, showing a cabin in the middle of a forest in spring, fall, winter and summer. The modest building is a replica of the hideout of anti-technology terrorist Ted Kaczynski. The soundtrack alternates between the ambient sound of the forest and Benning reading from the Unabomber’s journals, encrypted documents and manifesto.
Kaczynski’s texts hover between his love for nature and his intention to destroy and murder. Between his daily life in the woods and his fears that technology is going to turn into an instrument that enables the powerful elite to take control over society. What is shocking is not so much the violence of his words because you expect them. It’s when he gets it right that you get upset. When he expresses his distrust of the merciless rise of technology, his doubts regarding the promises of innovation and it somehow makes sense to you.
Konrad Klapheck, Der Chef, 1965. Photo: © Museum Kunstpalast – ARTOTHEK
Konrad Klapheck’s paintings ‘portray’ devices that were becoming mainstream in 1960s households: vacuum cleaners, typewriters, sewing machines, telephones, etc. In his works, the objects are abstracted from any context, glorified and personified. In the typewriter series, he even assigns roles to the objects. They are Herrscher (ruler), Diktator, Gesetzgeber (lawgiver) or Chef (boss). These titles allude to the important role that these instruments have taken in administrative and economic systems.
Tyler Coburn, Sabots, 2016, courtesy of the artist, photo: David Avazzadeh
This unassuming small pair of 3D-printed clogs alludes to the worker struggles of the Industrial Revolution. The title of the piece, Sabots, means clogs in French; the word sabotage allegedly comes from it. The story goes that when French farmers left the countryside to work in factories, they kept wearing their peasant clogs. These shoes were not suited to factory work and, as a consequence, the word ‘saboter’ came to mean ‘to work clumsily or incompetently’ or ‘to make a mess of things.’ Another apocryphal story says that disgruntled workers blamed the clogs when they damaged or tampered with machinery. Yet another version has the workers throwing their clogs into the machines to destroy them.
In the early 20th century, labor unions such as the Industrial Workers of the World (IWW) advocated withdrawal of efficiency as a means of self-defense against unfair working conditions. They called it sabotage.
Tyler Coburn contributed another work to the show. Waste Management looks like a pair of natural stones but the rocks are actually made out of electronic waste, more precisely the glass from old computer monitors and fiber powder from printed circuit boards that were mixed with epoxy and then molded in an electronic recycling factory in Taiwan. The country is not only a leader in the export of electronics, but also in the development of e-waste processing technologies that turn electronic trash into architectural bricks, gold potassium cyanide, precious metals—and even artworks such as these rocks. Coburn bought them there as a ready made. They evoke the Chinese scholar’s rocks. By the early Song dynasty (960–1279), the Chinese started collecting small ornamental rocks, especially the rocks that had been sculpted naturally by processes of erosion.
Accompanying these objects is a printed broadsheet which narrates the circulation and transformation of a CRT monitor into the stone artworks. The story follows from the “it-narrative” or novel of circulation, a sub-genre of 18th-century literature, in which currencies and commodities narrated their circulation within a then-emerging global economy.
Osborne & Felsenstein, Personal Computer Osborne 1a and Monitor NEC, 1981, Loan Vienna Technical Museum, photo: David Avazzadeh
Adam Osborne and Lee Felsenstein, Personal Computer Osborne 1a, 1981, Courtesy Technisches Museum, Wien
Several artifacts ground the exhibition in the technological and cultural history of automation: A mechanical Jacquard loom, often regarded as a key step in the history of computing hardware because of the way it used punched cards to control operations. A mysterious-looking arithmometer, the first digital mechanical calculator reliable enough to be used at the office to automate mathematical calculations. A Morse code telegraph, the first invention to effectively exploit electromagnetism for long-distance communication and thus a pioneer of digital communication. A cybernetic model from 1956 (see further below) and the first ‘portable’ computer.
Released in 1981 by Osborne Computer Corporation, the Osborne 1 was the first commercially successful portable microcomputer. It weighed 10.7 kg (23.5 lb), cost $1,795 USD, had a tiny screen (5-inch/13 cm) and no battery.
At the peak of demand, Osborne was shipping over 10,000 units a month. However, Osborne Computer Corporation shot itself in the foot when it prematurely announced the release of its next-generation models. The news put a stop to sales of the current unit, contributing to throwing the company into bankruptcy. This has come to be known as the Osborne effect.
Kybernetisches Modell Eier: Die Maus im Labyrinth (Cybernetics Model Eier: The Mouse in the Maze), 1956. Image Kunsthalle Wien
Around 1960, scientists started to build cybernetic machines in order to study artificial intelligence. One of these machines was a maze-solving mouse built by Claude E. Shannon to study the labyrinthian path that a call made using telephone switching systems should take to reach its destination. The device contained a maze that could be arranged to create various paths. The system followed the idea of Ariadne’s thread, the mouse marking each field with the path information, like the Greek mythological figure did when she helped Theseus find his way out of the Minotaur’s labyrinth. Richard Eier later re-built the maze-solving mouse and improved Shannon’s method by replacing the thread with two two-bit memory units.
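The per-cell memory trick is worth spelling out: two bits are enough to store one of four exit directions, so an exploring run can leave a trail that a second run simply follows. Below is a sketch of that idea under an invented maze layout, not a reconstruction of Shannon's or Eier's actual machines:

```python
# First run: depth-first exploration, storing per visited cell the
# direction taken onward (the "two-bit memory"). Second run: replay.
DIRS = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}

# 4x4 grid; WALLS[cell] lists directions blocked when leaving that
# cell (one-way walls, a simplification for the sketch)
WALLS = {(1, 0): ["S"], (1, 1): ["E"], (2, 2): ["N"], (0, 2): ["E"]}

def explore(start, goal, size=4):
    """Explore with DFS and return the per-cell direction memory."""
    memory, seen = {}, {start}

    def dfs(cell):
        if cell == goal:
            return True
        for name, (dx, dy) in DIRS.items():
            nxt = (cell[0] + dx, cell[1] + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in seen
                    and name not in WALLS.get(cell, [])):
                seen.add(nxt)
                if dfs(nxt):
                    memory[cell] = name  # two bits: one of N/E/S/W
                    return True
        return False

    dfs(start)
    return memory

def replay(start, goal, memory):
    """Second run: just follow the stored directions to the goal."""
    path, cell = [start], start
    while cell != goal:
        dx, dy = DIRS[memory[cell]]
        cell = (cell[0] + dx, cell[1] + dy)
        path.append(cell)
    return path

memory = explore((0, 0), (3, 3))
path = replay((0, 0), (3, 3), memory)
```

The point of the relay-based original was exactly this separation: a slow, wandering first traversal leaves behind just enough state for an instant second one.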
Régis Mayot, JEANNE & CIE, 2015. Image Kunsthalle Wien
In 2011, the CIAV (the international center for studio glass in Meisenthal, France) invited Régis Mayot to work in their studios. The designer decided to explore the moulds themselves, rather than the objects produced with them. Through a sand-moulding process, he revealed the mechanical beauty of some of these historical tools, producing prints of a selection of moulds that craftsmen then blew in glass.
Jeanne et Cie (named after one of the moulds chosen by the designer) highlights how the aesthetics of objects are the result of the industrial instruments and processes that enter into their manufacturing.
Bureau d’études, ME, 2013, © Léonore Bonaccini and Xavier Fourt
Bureau d’Etudes, Electromagnetic Propaganda, 2010
The exhibition also presented a selection of Bureau d´Études‘ intricate and compelling cartographies that visualize covert connections between actors and interests in contemporary political, social and economic systems. Because knowledge is power, the maps are meant as instruments that can be used as part of social movements. The ones displayed at Kunsthalle Wien included the maps of Electro-Magnetic Propaganda, Government of the Agro-Industrial System and the 8th Sphere.
I fell in love with Mark Leckey‘s video. So much that I’ll have to dedicate another post to his work. One day.
David Jourdan’s poster alludes to an ad in which newspaper Der Standard announced that its digital format was ‘almost as good as paper.’
More images from the exhibition:
Magali Reus, Leaves, 2015
Thomas Bayrle, Kleiner koreanischer Wiper
Juan Downey, Nostalgic Item, 1967, Estate of Juan Downey courtesy of Marilys B. Downey, photo: David Avazzadeh
Judith Fegerl, still, 2013, © Judith Fegerl, Courtesy Galerie Hubert Winter, Wien
Installation view The Promise of Total Automation. Image Kunsthalle Wien
Installation view. Image Kunsthalle Wien
Installation view. Image Kunsthalle Wien
More images on my flickr album.
Also in the exhibition: Prototype II (after US patent no 6545444 B2) or the quest for free energy.
The Promise of Total Automation was curated by Anne Faucheret. The exhibition is open until 29 May at Kunsthalle Wien in Vienna. Don’t miss it if you’re in the area.
Wednesday, February 24. 2016
Note: I'll have the pleasure of being interviewed --in French-- this Friday 26.02 at 8pm by journalist Frédéric Pfyffer of Radio Télévision Suisse Romande, as part of the programme Histoire Vivante, which this week is devoted to the subject of "Big Data".
The interview, recorded at the end of last week, covers how artists and designers approach this question of data today --but also, to some extent, how they did in the past-- perhaps as a counterpoint or complement to scientific approaches. In my case, this concerns both my independent practice (fabric | ch, where many completed and ongoing projects rely on data) and my academic work (an ongoing interdisciplinary research project around "clouds"... among other things).
Note also that at the end of this week of themed broadcasts, TSR will air (Sunday 28.02) the documentary Citizenfour, which recounts the whole story of Edward Snowden and journalist Glenn Greenwald.
All five broadcasts will also be available as podcasts at the same address after this week's airing.
A week of Histoire Vivante devoted to the history of scientific research in light of the emergence of the internet and big data.
On Sunday 28 February 2016, you can watch on RTS Deux: CitizenFour, a documentary by Laura Poitras (Germany-USA/2014):
"Citizenfour is the pseudonym Edward Snowden used to contact the director of this documentary when he decided to reveal the NSA's surveillance methods. Accompanied by an investigative journalist, she joined him in a hotel room in Hong Kong. What follows is a closed-room drama worthy of the best thrillers."
Monday, February 22. 2016
A computer trying to order a pizza (and having a hard time), back in 1974 | #computing #txt2speech #history
Note: can a computer "fake" a human? (Hmmm, sounds a bit like Mr. Turing's test, doesn't it?) Or at least be credible enough that the person on the other end of the phone doesn't hang up? Because it sounds pretty clear in this video that, at the time, it couldn't fake a human, and that this was more about voice than "intelligence". This is a funny/uncanny experiment involving D. Sherman at Michigan State University, dating back to 1974, and certainly one of the first public trials (or rather social experiments) of a text-to-speech voice synthesizer.
Beyond the technical performance, it is the social experiment, with its intertwined and odd nature, that is probably even more interesting. You can hear in the voice of the person on the other end of the phone (at the pizza place --Domino's Pizza--) that he really doesn't know what to make of it, and that the voice sounds like nothing he has heard before. A few tries were needed before somebody took the call "seriously".
By John Eulenberg (publication on Youtube)
Every year, the researchers, students, and technology users who make up the community of the Michigan State University Artificial Language Laboratory celebrate the anniversary of the first use of a speech prosthesis in history: its use by a man with a communication disorder to order a pizza over the telephone with a voice synthesizer. This high-tech sociolinguistic experiment was conducted at the Lab on the evening of December 4, 1974. Donald Sherman, who has Moebius Syndrome and had never ordered a pizza over the phone before, used a system designed by John Eulenberg and J. J. Jackson incorporating a Votrax voice synthesizer, a product of the Federal Screw Works Co. of Troy, Michigan. The inventor of the Votrax voice synthesizer was Richard Gagnon from Birmingham, MI.
Monday, December 21. 2015
Gramazio Kohler celebrates 10 years of research in digital fabrication at ETHZ in a video | #digitalfabrication #researchbydesign
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
fabric | ch uses this website as an archive and a pool of references and resources. It is shared with everyone interested in the same topics, in the hope that they too will find valuable references and content here.
| rblg on Twitter