Critiques
I found a couple of interesting critiques of this manifesto in an online community I’m in:
- The “Resonant Computing Manifesto” is not very good
- This one I found aligned with my feelings fairly closely. The AI components of this manifesto seem to be at odds with its premises in a practical world.
- Why I Didn’t Sign the Resonant Computing Manifesto: The Foundations Need Work
- This one used the lens of “the principles of design justice”, the idea of which seems to be providing actionable guidance and centering the people impacted by design decisions rather than the decision makers. I thought this was an interesting look into why this manifesto felt a little silly or toothless. CaptainCalliope, the author of this critique, knows a lot more than me on this subject.
Manifesto
Resonance in computing
I think this is a cool idea:
_There’s a feeling you get
in the presence of
beautiful buildings and bustling courtyards.
A sense that these spaces
are inviting you to slow down,
deepen your attention, and be
a bit more human.
What if our software could do the same?_
When your environment enlivens you instead of deadening you, we call this “resonance”. This idea comes from the work of the architect Christopher Alexander.
Apparently, the cool bits of this article are derived from:
- The Timeless Way of Building by Christopher Alexander
We should try to engage with resonant digital experiences. Online experiences are often “digital junk food”.
The article then argues that our software, in order to scale, has been sanding away the edge cases and standardizing everything. Apparently, this kind of deadening architecture is something Alexander pushed against.
this article lost me with a weird AI bent
The article loses me here. It claims that AI provides a way out of this: that a piece of technology can now adaptively shape itself in response to an individual user.
I guess my dissonance stems from the bad (and, imo, more likely) uses of AI. It will not be used so that I can fine-tune my digital world. It will be used by powerful corporations to suck up every last dreg of my attention.
I find myself thinking that AI software fine-tuning itself to my every need sounds really unappealing. I find the good clean tool that does exactly what it does and nothing more much more appealing. I want a suite of good clean tools.
The article goes on to make some claims that we can prevent the bad version of AI. I am skeptical.
the article’s “principles”
The article proposes five principles that software should embody in order to take a stand:
- private - we should own our data - I’m with this
- this is cool, but I think it’s very impractical
- I do not think any current AI product is at all on board with this
- AI products consume as much data as possible right now; the incentives here make zero sense to me
- dedicated - you should be able to trust that your data and your software are working just for you, not against you on behalf of a corporate owner
- Again, I’m pro this, but I don’t think this makes any sense at all haha
- I guess this applies in the sense of software I download and run - my CLI tools or whatever
- I do feel this way about them
- This seems pretty wildly impractical again for AI products though, which seem to be centralized by design
- plural - no one corporation should own a digital space
- Also cool
- This is also something I see as pretty unlikely
- But, this seems more reasonable to me than the other points
- Things like Wikipedia or Mastodon exist
- adaptable - software should be open ended for the needs of every user
- I think this is actually a cool idea
- I don’t really agree that AI is a very interesting tool for it, though
- I think something like the Folk Computer or Scrappy is way more interesting for this
- prosocial - tech should enable us becoming better neighbors and collaborators
- This is very nice
- I like this point
- Idk how you do anything meaningful with it lol