The artist Liz West continues to invent original and psychedelic installations, this time as part of the Bristol Biennial. Her project Our Colour is composed of filters that allow the light to change, and is a good way to study the reactions of the human brain when confronted with certain luminous atmospheres. After travelling through all the shades, each person usually ends up enjoying his or her favorite one.
Note: photography is not the only field affected by digitization, of course... residency and citizenship are as well, though in a different way than one might expect. "Crypto-residency" coming soon to help you invest your cryptocurrencies in a "crypto-land"?
Estonia aims to bring 10 million people to its digital shores.
With 1.3 million citizens, Estonia is one of the smallest countries in Europe, but its ambition is to become one of the largest countries in the world. Not one of the largest geographically or even by number of citizens, however. Largest in e-residents, a category of digital affiliation that it hopes will attract people, especially entrepreneurs.
Started two years ago, e-residency gives citizens of any nation the opportunity to set up Estonian bank accounts and businesses that use a verified digital signature and are operated remotely, online. The program is an outgrowth of a digitization of government services that the country launched 15 years ago in a bid to save money on the staffing of government offices. Today Estonians use their mandatory digital identity to do everything from tracking their medical care to paying their taxes.
Now the country is marketing e-residency as a path by which any business owner can set up and run a business in the European Union, benefiting from low business costs, digital bureaucratic infrastructure, and in certain cases, from the country’s low tax rates.
“If you want to run a fully functional company in the EU, in a good business climate, from anyplace in the world, all you need is an e-residency and a computer,” says Estonian prime minister Taavi Rõivas.
Tallinn, capital city of Estonia
Things that don’t come with e-residency include a passport and citizenship. Nor do e-residents automatically owe taxes to the country, though digital companies that incorporate there and obtain a physical address can benefit from the country’s low tax rate. The chance to run a business out of Estonia has proven popular enough that almost 700 new businesses have been set up by the nearly 1,000 new e-residents, according to statistics from the government.
The government hopes to have 10 million e-residents by 2025, though others think that goal is a stretch.
Estonian officials describe e-residency as an early step toward a mobile future, one in which countries will compete for the best people. And they are not the only ones pursuing this idea. Payment company Stripe recently launched a program called Atlas that it hopes will boost the number of companies using its services to accept payments. It helps global Internet businesses incorporate in the state of Delaware, open a bank account, and get tax and legal guidance.
Juan Pablo Vazquez Sampere, a professor at Madrid’s IE Business School, sees the Estonia program as enabling global entrepreneurs to operate in Europe at a fraction of the cost of living in the region.
Last year, Arvind Kumar, an electrical engineer who lives just outside Mumbai, left his 30-year career in the steel industry to start Kaytek Solutions OÜ, which creates models to improve manufacturing quality and efficiency. Last September Kumar flew to Tallinn, the capital of Estonia, and spent half a day setting up a bank account and a virtual office. In addition to the price of the trip, initial setup costs were around $3,300 (€3,000), and he has ongoing expenses of about $480 (€440) a year. The Indian system of setting up a new business is “tedious” by contrast, says Kumar—time-consuming, difficult, and expensive.
Cost was also a factor for Vojkan Tasic, chairman of a high-end car service company called Limos4, in his decision to pick Estonia as a new home for the company. Started in his home country of Serbia six years ago, Limos4 has been paying credit-card processing fees of 7 percent. Limos4 operates in 20 large European cities as well as Dubai and Istanbul, and counts Saudi Arabian and Swedish royalty and U.S. and European celebrities among its clients.
After considering Delaware and Ireland, Tasic chose Estonia, where he can settle his credit-card transactions through PayPal subsidiary Braintree for 2.9 percent and where there is no tax on corporate profits so long as they remain invested in the business. Since getting his e-residency and moving the company to Estonia, profits are up 20 percent, Tasic says. Annual revenue is around $2 million.
For Estonia, the financial benefit comes from the fees e-residents pay to the government and from the tax revenue generated by local support services such as accountants and law firms.
To Tasic, who runs background checks on all his drivers, one of the best things about the e-residency is the fact that the Estonian police investigate every applicant. Since Kumar set up his company, Estonia has begun allowing e-residents to set up their bank accounts online, but there remains a level of security, because to pick up their residency card, applicants must go in person to one of Estonia’s 39 embassies around the world and prove their identity.
Some have raised concerns that e-residency might attract shady characters who could shield themselves from prosecution and possible punishment by doing business in Estonia while residing outside its jurisdiction. But with no serious cases of fraud or illicit activity to date, it is unclear whether this is a serious concern, says Karsten Staehr, a professor of international and public finance at Tallinn University of Technology.
As with any digital system, security is a major concern. Estonia, which sits just to the west of Russia and south of the Gulf of Finland, recently announced plans to back up much of its data, including banking credentials, birth records, and critical government information, in the United Kingdom.
In 2007, after moving a Soviet war memorial from Tallinn's city center, the country suffered a sustained denial-of-service cyberattack linked to Russia. It has also run a distributed system for some time, with data centers in its embassies around the world.
“I am convinced they are doing a good job,” says Tasic, who holds a PhD in information services. “But with increased attention, the attacks will increase, so let’s see what the future is.”
Note: in continuity with my previous post/documentation concerning the project Platform of Future-Past (fabric | ch's recent winning competition proposal), I'm publishing several additional images and explanations about the second phase of the Platform project, for which we were commissioned by the Canton de Vaud (SiPAL).
The first part of this article gives complementary explanations about the project, but I also take the opportunity to post related works and research we've done in parallel about particular implications of the platform proposal. This will hopefully bring a clearer understanding of the way we try to combine experimentations-exhibitions, the creation of "tools" and the design of larger proposals in our open work process.
Notably, these related works concerned the approach to data, the breaking down of the environment into computable elements, and the inevitable questions raised by their use as part of a public architecture project.
The information pavilion was potentially a slow, analog and digital "shape/experience shifter", as it was planned to be built in several successive steps over the years and possibly "reconfigure" itself to sense and look at its transforming surroundings.
The pavilion therefore retained an unfinished flavour as part of its DNA, almost sketch-like, inspired by old kinds of meshed constructions (bamboo scaffolding). This principle of construction was used to help it "shift" if and when necessary.
In a general sense, the pavilion answered the conventional public program of an observation deck overlooking a construction site. It also served the accompanying purpose of documenting the ongoing building process. By doing so, we turned the "monitoring dimension" (production of data) of such a program into a base element of our proposal. That's where a former experimental installation helped us: Heterochrony.
As may be noticed, the word "Public" was added to the title of the project between the two phases, making it Public Platform of Future-Past (PPoFP)... an addition we believed important. This is because the PPoFP was envisioned to monitor and use environmental data concerning the direct surroundings of the information pavilion (but NO DATA about uses/users). Data that we declared public in this case, while the treatment of the monitored data would itself become part of the project, "architectural" (more on this below).
For these monitored data to stay public, like the space of the pavilion itself, which belongs to the public domain and physically extends it, we had to ensure that the data wouldn't be used by a third-party private service. We needed to keep an eye on the algorithms that would process the spatial data. Or better, write them ourselves according to our design goals (more on this below).
That's where architecture meets code and data (again), obviously...
The Public Platform of Future-Past is a structure (an information and sightseeing pavilion), a Platform that overlooks an existing Public site while basically taking it as it is, in a similar way to an archeological platform over an excavation site.
The asphalt ground floor remains virtually untouched, with traces of former uses kept as they are, some quite old (a train platform linked to an early twentieth-century locomotive hall), some less so (painted parking spaces). The surrounding environment will move and change considerably over the years as new constructions go up. The pavilion will monitor and document these changes. Hence the last part of its name: "Future-Past".
By nonetheless touching the site in a few points, the pavilion slightly reorganizes the area and creates spaces for a small new outdoor cafe and a bike parking area. This enhanced ground-floor program can work by itself, separated from the upper floors.
Several areas are linked to monitoring activities (input devices) and/or displays (in red, top), which concern points of interest and views from the platform or elsewhere. These areas consist of localized devices on the platform itself (5 locations), satellite devices embedded directly in the three construction sites, or even devices --rather output ones-- in distant cities of the larger political area concerned by the new constructions (three museums, two new large public squares, a new railway station and a new metro). Inspired by the prior similar installation in a public park during a festival --Heterochrony (bottom image)--, the raw data can be of different natures: visual, audio, integers from sensors (%, °C, ppm, dB, lm, mb, etc.), ...
Input and output devices remain low-cost and simple in their expression: several input devices/sensors are placed outside the pavilion, within the structural elements, and point toward areas of interest (construction sites or more specific parts of them). Directly related to these sensors and sightseeing spots, but on the inside, are output devices with their recognizable blue screens. These are mainly voice interfaces: voice outputs driven by a bot according to architectural "scores", or algorithmic rules (middle image). Once the rules are designed, the "architectural system" runs on its own. That's why we've also named the system based on automated bots "Ar.I." It could stand for "Architectural Intelligence", as it is entirely part of the architectural project.
The coding of the "Ar.I." and its use of data could easily become something more experimental, transformative and performative over the life of the PPoFP.
Observers (users) and their natural "curiosity" play a central role: the preliminary observations and monitoring are indeed those produced in an analog way by the observers themselves (eyes and ears), at each of the five points of interest and through their wanderings. Extending this natural interest is a simple cord in front of each "output device" that they can pull, which then triggers a set of new measurements by all the related sensors on the outside. This new set of data enters the database and becomes readable by the "Ar.I."
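The cord-pull interaction described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the project's actual implementation: all the names (sensor list, record fields, the `pull_cord` function) are assumptions made for the example, and the sensor values are simulated constants.

```python
import time

DATABASE = []  # stands in for the pavilion's shared data store

# One simulated sensor group aimed at a single point of interest.
# Units echo those mentioned above (°C, %, ppm, dB); values are fixed
# here, where real hardware would be polled.
SENSORS = {
    "temp_c": lambda: 18.5,        # °C
    "humidity_pct": lambda: 61.0,  # %
    "co2_ppm": lambda: 412.0,      # ppm
    "sound_db": lambda: 54.0,      # dB
}

def pull_cord(point_of_interest: int) -> dict:
    """A visitor pulls the cord: take one synchronized reading from
    every sensor related to that point and append the set to the
    database, where the bot can later read it."""
    record = {
        "point": point_of_interest,
        "timestamp": time.time(),
        "readings": {name: read() for name, read in SENSORS.items()},
    }
    DATABASE.append(record)
    return record

record = pull_cord(point_of_interest=3)
print(record["readings"]["co2_ppm"])  # 412.0
```

The point of the sketch is the sequence: an analog gesture (the pull) produces a digital event, and only then does a measurement set enter the record, keeping the visitor rather than a continuous feed at the origin of the data.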
The whole part of the project regarding interaction and data treatment was the subject of a dedicated short study (a document about it can be accessed here --in French only--). Its main design implication is that the "Ar.I." takes part in the "filtering" process between the "outside" and the "inside", contributing to the creation of a variable but specific "inside atmosphere" ("artificial artificial", as the outside has been artificial as well since the Anthropocene, hasn't it?). By doing so, the "Ar.I." bot fully plays its part in the architecture's main program: triggering the perception of an inside, proposing patterns of occupation.
"Ar.I." computes spatial elements and mixes times. It can organize configurations for the pavilion (data, displays, recorded sounds, lighting, clocks), setting it to a past, a present, or even an estimated future disposition. "Ar.I." is mainly a set of open rules and a vocal interface, with the exception of the common access and conference space, which is equipped with visual displays as well. "Ar.I." simply spells out data at times, while at others, more intriguingly, it starts giving "spatial advice" about the environmental data configuration.
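The kind of "open rule" that lets the bot set the pavilion to a past, present, or estimated future disposition can be sketched as follows. Again a hypothetical illustration under stated assumptions: the rule (a naive linear extrapolation for the "future" case), the single `co2_ppm` variable and the `configure` function are all invented for the example, not taken from the project.

```python
def configure(records: list, mode: str) -> dict:
    """Pick a pavilion configuration for a chosen time frame from the
    monitored history: earliest reading for 'past', latest for
    'present', a naive linear extrapolation for 'future'."""
    values = [r["co2_ppm"] for r in records]
    if mode == "past":
        level = values[0]
    elif mode == "present":
        level = values[-1]
    else:  # "future": extend the average step between readings
        level = values[-1] + (values[-1] - values[0]) / max(len(values) - 1, 1)
    return {
        "mode": mode,
        "co2_ppm": level,
        # what the voice interface would speak at the output device
        "voice": f"CO2 around {level:.0f} ppm ({mode} disposition)",
    }

history = [{"co2_ppm": 400.0}, {"co2_ppm": 410.0}, {"co2_ppm": 420.0}]
print(configure(history, "future")["co2_ppm"])  # 430.0
```

Because the rules are open and separate from the interface, they could be rewritten over the life of the pavilion, which is what makes the system "experimental, transformative and performative" rather than fixed.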
In parallel to Public Platform of Future-Past, and in the frame of various research and experimental projects, scientists and designers at fabric | ch have been working to set up their own platform for declaring and retrieving data (more about this project, Datadroppers, here). A simple platform, but one adequate to our needs, on which we can develop as desired and where we know what happens to the data. To further guarantee the nature of the project, a "data commune" was created out of it, and we plan to release the code on Github.
In this context, we are also turning our own office into a test tube for various monitoring systems, so that we can assess the reliability and handling of different setups. It is also the occasion to further "hack" some basic domestic equipment and turn it into sensors, and to try new functions, in this case with the help of our 3d printer (middle image). Again, this experimental activity has been turned into a side project, Studio Station (ongoing, with Pierre-Xavier Puissant), while keeping the general background goal of "concept-proofing" the different elements of the main project.
A common room (conference room) in the pavilion hosts and displays the various data: five small screen devices, five voice interfaces (one per area of interest) and a semi-transparent data screen. Inspired again by what was experimented with and realized back in 2012 during Heterochrony (top image).
----- ----- -----
PPoFP, several images. Day and night configurations & a few comments
Public Platform of Future-Past, axonometric views day/night.
An elevated walkway overlooks the almost archeological site (past-present-future). The circulations and views define and articulate the architecture and the five main "points of interest". These main points concentrate spatial events, infrastructures and monitoring technologies. Layer by layer, the surroundings are filtered by various means and become enclosed spaces.
Walks, views over transforming sites, ...
Data treatment, bots, voice and minimal visual outputs.
Night views, circulations, points of view.
Night views, ground.
Random yet controllable lights at night. Underlined areas of interest, points of "spatial densities".
Note: we've been working recently at fabric | ch on a project that we couldn't publish or talk about for contractual reasons... It concerned a relatively large information pavilion we had to create for three new museums in Switzerland (in Lausanne) and a renewed public space (railway station square). This pavilion was supposed to last for a decade, or a bit longer. The process was challenging, the work was good (we believed), but it finally didn't get built...
Sounds sad but common isn't it?
...
We'll see where these many "..." will lead us, but in the meantime and as a matter of documentation, let's stick to the interesting part and publish a first report about this project.
It consisted of an evolution of a prior spatial installation entitled Heterochrony (pdf). A second post will follow soon with the developments of this competition proposal. Both posts will show how we try to combine small-scale experiments (exhibitions) with more permanent ones (architecture) in our work. It also marks our desire at fabric | ch to confront our ideas and research with architectural programs more regularly.
On the jury paper, under "prize" -- as we didn't get paid for the 1st prize itself -- was written: "Réalisation" (realization).
Just below in the same letter, "according to point 1.5 of the competition", no realization will be attributed... How ironic! We did work further on an extended study though.
A few words about the project taken from its presentation:
" (...) This platform with physically moving parts could almost be considered an archaeological footbridge or an unknown scientific device, reconfigurable and shiftable, overlooking and revealing some past industrial remains, allowing the present to be documented and the future to be glimpsed.
The pavilion, or rather pavilions, equipped with numerous sensor systems, could equally be considered an "architecture of documentation" and interaction, in the sense that extensive data will be collected to inform, in an open and fluid manner, about the continuous changes on the sites of construction and transformation. Taken from the various "points of interest" on the platform, these data will feed applications ("architectural intelligence"?), media objects, spatial and lighting behaviors. The ensemble will play with the idea of combining various time frames, mixing the existing, the imagined and the evanescent. (...) "
Note: "(...) For example, technologists might be held responsible if they use poor quality data to train AI systems, or fossilize prejudices based on race, age, or gender into the algorithms they design."
Mind your data and the ones you'll use to "fossilize", so to speak (assuming you already know what's in your data)... It is then no longer about "if" you're collecting data, but "which" data you'll use to feed your AIs, and "how". Now that we clearly see that large corporations plan to use more and more of these kinds of technologies to drive "domestic" applications as well (and by extension, as we already know, "personal" applications of all sorts), it will be important to understand the stakes behind them, as they will become part of our social and design context.
An important problem that I can see for designers and architects is that if you don't agree with the principles --commercial, social, ethical and almost conceptual-- implied by these technologies (i.e. any "homekit"-like platform controlled by bots), you won't find many, if any, counter-propositions/technologies to work with (all mass-market products will support iOS, Android and the like). It is almost a dictatorship of products hidden behind a "participate" paradigm. Either you're in and accept the conditions (you might use an API provided with the service --FB, Twitter, IFTTT, Apple, Google, Wolfram, Siemens, MS, etc.-- but then feed the central company nonetheless), or you're out... or you could develop your own solution(s), which will probably be a pain for your client to use, because they will clearly be side products, hard to maintain, update, etc.
"Some" open source projects driven by "some" communities could be or become (should become) alternative solutions, of course, but for now they are good for prototyping and teaching, not for robust "domestic" applications... And if they ever do get there, they will likely be bought. So we'll have "difficulties" as (interaction) designers, so to speak: you'll work for your client(s) ... and for the corporation that provides the services you'll use!
Should the government regulate artificial intelligence? That was the central question of the first White House workshop on the legal and governance implications of AI, held in Seattle on Tuesday.
“We are observing issues around AI and machine learning popping up all over the government,” said Ed Felten, White House deputy chief technology officer. “We are nowhere near the point of broadly regulating AI … but the challenge is how to ensure AI remains safe, controllable, and predictable as it gets smarter.”
One of the key aims of the workshop, said one of its organizers, University of Washington law professor Ryan Calo, was to help the public understand where the technology is now and where it’s headed. “The idea is not for the government to step in and regulate AI but rather to use its many other levers, like coördination among the agencies and procurement power,” he said. Attendees included technology entrepreneurs, academics, and members of the public.
In a keynote speech, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, noted that we are still in the Dark Ages of machine learning, with AI systems that generally only work well on well-structured problems like board games and highway driving. He championed a collaborative approach where AI can help humans to become safer and more efficient. “Hospital errors are the third-leading cause of death in the U.S.,” he said. “AI can help here. Every year, people are dying because we’re not using AI properly in hospitals.”
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, left, speaks with attendees at the White House workshop on artificial intelligence.
Nevertheless, Etzioni considers it far too early to talk about regulating AI: “Deep learning is still 99 percent human work and human ingenuity. ‘My robot did it’ is not an excuse. We have to take responsibility for what our robots, AI, and algorithms do.”
A panel on “artificial wisdom” focused on when these human-AI interactions go wrong, such as the case of an algorithm designed to predict future criminal offenders that appears to be racially biased. “The problem is not about the AI agents themselves, it’s about humans using technological tools to oppress other humans in finance, criminal justice, and education,” said Jack Balkin of Yale Law School.
Several academics supported the idea of an “information fiduciary”: giving people who collect big data and use AI the legal duties of good faith and trustworthiness. For example, technologists might be held responsible if they use poor quality data to train AI systems, or fossilize prejudices based on race, age, or gender into the algorithms they design.
As government institutions increasingly rely on AI systems for decision making, those institutions will need personnel who understand the limitations and biases inherent in data and AI technology, noted Kate Crawford, a social scientist at Microsoft Research. She suggested that students be taught ethics alongside programming skills.
Bryant Walker Smith from the University of South Carolina proposed regulatory flexibility for rapidly evolving technologies, such as driverless cars. “Individual companies should make a public case for the safety of their autonomous vehicles,” he said. “They should establish measures and then monitor them over the lifetime of their systems. We need a diversity of approaches to inform public debate.”
This was the first of four workshops planned for the coming months. Two will address AI for social good and issues around safety and control, while the last will dig deeper into the technology’s social and economic implications. Felten also announced that the White House would shortly issue a request for information to give the general public an opportunity to weigh in on the future of AI.
The elephant in the room, of course, was November’s presidential election. In a blog post earlier this month, Felten unveiled a new National Science and Technology Council Subcommittee on Machine Learning and Artificial Intelligence, focused on using AI to improve government services “between now and the end of the Administration.”
Note: can a computer "fake" a human? (Hmmm, sounds a bit like Mr. Turing's test, doesn't it?) Or at least be credible enough --because it sounds pretty clear in this video that, at that time, it could not fake a human, and that it was more about voice than "intelligence"-- that the person on the other side of the phone doesn't hang up? This is a funny/uncanny experiment involving D. Sherman at Michigan State University, dating back to 1974, and certainly one of the first public trials (or rather social experiments) of a text-to-speech voice synthesizer.
Beyond the technical performance, it is the social experiment, with its intertwined and odd nature, that is probably even more interesting. You can feel in the voice of the person on the other side of the phone (at the pizza place --Domino's Pizza--) that he really doesn't know what to make of it, and that the voice sounds like nothing he has heard before. A few trials were necessary before somebody took it "seriously".
Every year, the researchers, students, and technology users who make up the community of the Michigan State University Artificial Language Laboratory celebrate the anniversary of the first use of a speech prosthesis in history: a man with a communication disorder ordering a pizza over the telephone using a voice synthesizer. This high-tech sociolinguistic experiment was conducted at the Lab on the evening of December 4, 1974. Donald Sherman, who has Moebius Syndrome and had never ordered a pizza over the phone before, used a system designed by John Eulenberg and J. J. Jackson incorporating a Votrax voice synthesizer, a product of the Federal Screw Works Co. of Troy, Michigan. The inventor of the Votrax voice synthesizer was Richard Gagnon from Birmingham, MI.
The event was covered at the time by the local East Lansing cable news reporter and by a reporter from the State News. About seven years later, in 1981, a BBC production team produced a documentary about the work of the Artificial Language Laboratory and included a scene of a man with cerebral palsy, Michael Williams, ordering a pizza with a newer version of the Lab's speech system. This second pizza order became a part of the documentary, which was broadcast throughout the U.S. as part of the "Nova" science series and internationally as part of the BBC's "Horizon" series.
In January, 1982, the Nova show on the Artificial Language Lab was shown for the first time. The Artificial Language Lab held a premiere party in the Communication Arts and Sciences Building for all the persons who appeared in the program plus all faculty members of the College of Communication Arts and Sciences and their families. The Domino's company generously provided free pizzas for all the guests.
The following December, Domino's again provided pizzas for a party, again held at the Communication Arts building, to commemorate the first ordering of a pizza eight years earlier. The Convocation was held thereafter every year through 1988, each year receiving pizzas through the generous gift of Domino's.
A Communication Enhancement Convocation was held in 1999, celebrating the 25th anniversary of the first pizza order. In addition to Domino's contribution of pizzas, the Canada Dry Bottling Co. of Lansing provided drinks. The Convocations resumed in 2010 through 2012, when Dr. John Eulenberg advanced to Professor Emeritus status.
At each event, in addition to faculty and students, the convocation guests included local dignitaries from the MSU board of trustees and from the Michigan state legislature. Stevie Wonder, whose first talking computer and first singing computer were designed at the Artificial Language Lab, made telephone appearances and spoke with the youngsters using Artificial Language Lab technology through their school district special education programs. MSU icons such as the football team, Sparty, and cheerleaders made appearances as well.
Now, through YouTube, we can relive this historical moment and take a thoughtful look back at 40 years of progress in the delivery of augmentative communication technology to persons with disabilities.
Note: we are --like many others, I guess-- very interested here at the studio (fabric | ch) in the work of the Caribbean writer Édouard Glissant. Concepts like "archipelagic thinking", "rhizomic identity", "Tout-Monde" (imperfectly translatable as "Whole-World") and of course "creolization" are powerful yet poetic and positive tools for understanding our interleaved world and possibly envisioning ways of action.
I recently followed a link posted by Nicolas Nova which led me to a YouTube channel (managed by Laure Braeckman) that gathers different sources/talks by E. Glissant, in which he speaks about the different concepts that structure his thinking.
Below is the link to this resource, which might be useful whenever you'd like to discover these ideas or come back to them.
Note: I'll move this afternoon to Grandhotel Giessbach (sounds like a Wes Anderson movie) to present later tonight the temporary results of the research I'm jointly leading with Nicolas Nova for ECAL & HEAD - Genève, in partnership with EPFL-ECAL Lab & EPFL: Inhabiting and Interfacing the Cloud(s). Looking forward to meeting the Swiss design research community (mainly) at the hotel...
Christophe Guignard and myself will have the pleasure to present the temporary results of the design research Inhabiting & Interfacing the Cloud(s) next Thursday (28.01.2016) at the Swiss Design Network conference.
The conference will take place at Grandhotel Giessbach above Lake Brienz, where we'll focus on the research process, fully articulated around the practice of design (with the participation of students in the case of I&IC) and the process of the project.
This will apparently happen between "dinner" and "bar", as we'll present a "Fireside Talk" at 9pm. Can't wait to do and see that...
The full program and proceedings (pdf) of the conference can be accessed HERE.
As for previous events, we'll try to make a short "follow up" on this documentary blog after the event.
Christophe Guignard will introduce the participants to the stakes and progress of our ongoing experimental work. There will be high-profile and inspiring speakers such as Lev Manovich, John Thackara, Andreas Broeckmann, etc.
Christophe Guignard will make a short “follow up” about the conference on this blog once he’ll be back from Riga.
Note: a book as a follow-up to the exhibition for which fabric | ch designed the scenography last May at the Haus der elektronischen Künste in Basel (project White Oblique, downloadable pdf on our website). I was involved in the exhibition in two ways, as the content of the design research I'm jointly leading with Nicolas Nova for ECAL and HEAD, Inhabiting and Interfacing the Cloud(s), was also exhibited. I also have the pleasure of publishing a text in the book about the state and objectives of the ongoing research.
Note: we’re pleased to see that the publication related to the exhibition and symposium Poetics & Politics of Data, curated by Sabine Himmelsbach at the H3K in Basel, was released this summer. The publication, which shares its title with the exhibition, was first distributed in the context of the conference Data Traces. Big Data in the Context of Culture and Society, which also took place at H3K on the 3rd and 4th of July.
The book contains texts by Nicolas Nova (Me, My Cloud and I) and myself (Inhabiting and Interfacing the Cloud(s). An Ongoing Design Research), but also and mainly contributions by speakers of the conference (including the American theorist Lev Manovich, curator Sabine Himmelsbach and Claudia Mareis, professor and researcher at HGK Basel) and exhibiting artists (Moniker, Aram Bartholl, Rafael Lozano-Hemmer, Jennifer Lyn Morone, etc.)
The publication serves both as the catalogue of the exhibition and as the conference proceedings. Due to its close relation to our subject of research (the book speaks about data; we’re interested in the infrastructure –both physical and digital– that hosts them), we’re adding the book to our list of relevant books. The article A Short History of Clouds, by Orit Halpern, is obviously of direct significance to our work.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and reading.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive and a set of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.