É. Urcades

January 3 (Dia 1)

1

Several years ago, before joining Tlon full-time to work on next-generation, actually-personal computers, I was obsessed with the idea of starting a company that would promote new ideas around death culture through the design of new civic infrastructure.

Specifically, I was interested in a conceptual reversal: turning “death sites” (such as cemeteries) from environments of decay and stagnation into environments of new life, knowledge, and cultural transfer. I felt it was a shame that the richness of past lives and ideas was lost on a people constantly looking forward to the next thing, and thought death environments were an interesting material to shape.

This overall idea was heavily informed by my experience with Dia de los Muertos, the Mexican conception of a “day of the dead”, a period of time when the barrier between life and death becomes a little more porous.

I seriously believed places of remembrance could be transformed into cultural engines or avenues for novel meaning-making, turning sites of contemplation into sites of compounding enlightenment: a successor to the library, oriented towards highly local know-how and history.

My first instinct as a “product designer” was to learn how to make physical grave sites speak: how to somehow (respectfully, with consent) reanimate the dead and bring their knowledge back to the world of the living. Tombs as solid-state memory, save crystals, sites where you could speak to your ancestors.

While I generated a load of novel ideas at the time, wrote about them privately, and pitched a few (such as a wind-chime urn) to various folks, I felt there was no truly meaningful technological substrate to build upon, so I let the idea rest for a while.

A few months after first starting to seriously think about using technology to revitalize death sites, I found a client in Samara/Airbnb and started work on an iPhone app called Pilot. It was meant to serve as a “magic mirror”: it would document your life, your moods, major events (and travel, of course), and reconstruct this data into a form of reflection, eventually granting you the ability to look back on your life and learn interesting patterns embedded within it. Further, we were exploring ideas for weaving your life’s patterns into other people’s lives without an explicit social media layer.

It was interesting work, and part of a greater line of thinking originally established (to my knowledge) at Lapka, continued at Samara, later at LOT, and still alive today in a variety of other projects.

Pilot was caught up in the greater imperative of establishing Backyard as Samara’s primary work. After evaluating the various ways we could interpret and shape the data our app would work with, we found that the technological substrate (again) was not yet ripe enough to express our ambitions, and attention was mostly directed towards Backyard anyway.

While working at Samara, I began to think about parallel expressions of what it meant to “embed life into technological works” (and vice versa), and I had a lot of really interesting conversations with friends on the subject of (digital) animism. Most of these conversations happened at what was then called Soft Surplus.

Our conversations reached a point where we wanted to apply some of these diffuse ideas to embedding “social life” into a variety of objects in our shared warehouse studio space, such as recipes for meals within a fridge, or steps to use a machine within the machine itself. Our first ideas involved scanning QR codes and building small p2p web presences for things like plants.

The general idea was to technologically facilitate objects speaking for themselves. A plant augmented with this mechanism could broadcast to the local network how often it should be watered, or whether it was receiving enough sunlight. A track saw could send you a notification as you neared it, warning that it was a dangerous device and required paging through its manual before it would unlock for you. The trash can could note when it was becoming too stinky, or too full, and broadcast a request to the studio to empty it. And so on…
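For flavor, here is a minimal sketch of the kind of mechanism we were imagining, not anything we actually built: a plant hosting a tiny web presence on the local network and advertising itself over mDNS (using Python and the zeroconf library). The object name, the values, and the port are hypothetical placeholders; a QR code on the pot would simply point at the URL.

```python
# A hypothetical sketch: an ordinary object (a plant) serves its own
# self-description over HTTP and announces itself on the local network,
# so nearby devices can discover it and "ask" it about itself.

import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

from zeroconf import ServiceInfo, Zeroconf  # pip install zeroconf

# The object's self-description. These values are illustrative placeholders.
PLANT = {
    "name": "studio-monstera",
    "kind": "plant",
    "water_every_days": 7,
    "last_watered": "2020-01-01",
    "note": "Prefers indirect light near the east window.",
}

class ObjectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any GET returns the object's current self-description as JSON.
        body = json.dumps(PLANT).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def main():
    port = 8123
    # Simplification: assumes the hostname resolves to a LAN address.
    ip = socket.gethostbyname(socket.gethostname())

    # Advertise the object via mDNS so other devices on the same network
    # can find it without knowing its address in advance.
    info = ServiceInfo(
        "_http._tcp.local.",
        "studio-monstera._http._tcp.local.",
        addresses=[socket.inet_aton(ip)],
        port=port,
        properties={"kind": "plant"},
    )
    zc = Zeroconf()
    zc.register_service(info)

    try:
        HTTPServer(("", port), ObjectHandler).serve_forever()
    finally:
        zc.unregister_service(info)
        zc.close()

if __name__ == "__main__":
    main()
```

Anything else on the same network could then discover “studio-monstera” and ask how it is doing, which is about as close to an object speaking for itself as off-the-shelf tools allowed at the time.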

For a third time, we ran into technological and implementation barriers (and a general lack of time) that prevented us from achieving our goal, which was ideally to “look at an object and have it tell you about itself or formulate a point of view based on external circumstances”. It turns out this is quite hard to pull off without pervasive passthrough augmented reality or VR, and can only be hacked together in the most rudimentary manner.

It’s been well over six years since I first started thinking about this core idea: that the world itself is a rich medium for technological expression. Rather than constrain our thinking to explicit devices for computation, such as phones and laptops, I’d argue that there’s an insane amount of potential in unlocking “the general environment” as a site for computation. I believe a phone is the worst place to situate a social network, and would rather see social networks orbit everyday objects like ghosts.

What would it be like to wander the world knowing there are hidden worlds behind ordinary things? This is already the case, so why not facilitate the mechanism further?

Technology is the redirection of how information accumulates, and I have a recipe for making everyday items more dense with information.

I’d argue that at this point in time, a camera is all you need to turn something into a computer.

I’d argue that a flower can be a computer —

Dia 2

*
