By Cory Doctorow.

Reverse centaurs

AI is being used to create reverse centaurs. "Centaur" is an automation term for technology that helps a human: autocomplete, cars. A reverse centaur is when a human has to help a technology, like fact-checking the pull request output by an LLM.

How to pump a bubble

Tech giants are monopolies or cartels in various industries. They are all growth stocks. They’re currently operating at a scale that depends on them remaining growth stocks. Amazon is the example growth stock company in the article.

Target is an example of a mature stock company. Every dollar Amazon earns is valued higher on the stock market than a dollar of Target's earnings. This gives Amazon an advantage: it can "print" more stock whenever it wants to outbid Target for talent, or for anything else it wants.
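To make the "every dollar is valued higher" point concrete, here's a minimal sketch of how a price-to-earnings multiple turns earnings into market value. The multiples below are made up for illustration, not real figures for Amazon or Target:

```python
# Illustrative only: the P/E multiples are assumptions, not real data.
def market_value_of_earnings(earnings: float, pe_multiple: float) -> float:
    """Approximate market value the stock market assigns to given earnings."""
    return earnings * pe_multiple

growth_pe = 35.0   # assumed multiple for a growth stock
mature_pe = 12.0   # assumed multiple for a mature stock

# The same $1B of earnings is "worth" far more on a growth multiple.
growth_value = market_value_of_earnings(1e9, growth_pe)
mature_value = market_value_of_earnings(1e9, mature_pe)
print(growth_value / mature_value)  # roughly 2.9x
```

Under these assumed multiples, a dollar of growth-stock earnings buys nearly three times the market cap, which is the currency the growth company can "print" to outbid the mature one.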

Eventually, you have to stop being a growth stock… right? Once you, like Amazon, own ALL online deliveries… how do you grow more? At some point you must become "mature" once you've taken over the market.

This is the paradox of growth companies. The market loves you while you're growing, but once you've taken over a giant chunk of your market, your valuation plummets.

To avoid this maturity, these giants need to keep pumping new bubbles (or, from a less cynical perspective, speculative investments) to try to keep growing: crypto, the metaverse, AI.

  • It almost doesn't matter whether these bets win or not. The companies probably do want to win.
  • But, what really matters most is that the markets think they continue to grow.
  • And, they can give the appearance of growing infinitely by hopping bubble to bubble.

AI can’t do your job

The growth narrative for AI is "disrupting" the labor market. Bosses can fire employees, keep half the salary, and give the other half to an AI company instead of an employee.

Now, if AI could do your job, this would still be a problem. We would have to figure out what to do with all these unemployed people.

But AI can’t do your job. It can help you do your job, but that does not mean it is going to save anyone money.

I guess Cory is talking about the current capabilities of AI in this quote. It is a productivity tool. That said, boosters are still talking about stuff like Claude Code as meaningfully producing output that would’ve been done by humans.

  • I’m skeptical of this claim - at least in the kind of work that I do
  • Maybe it's possible for early-stage startup sickos that this is true
  • Crucially, with productivity, I think "what people feel" doesn't matter at all for its TRUE value. How productive something actually makes you is probably impossible to tell, at least in the short term.

The AI “human in the loop” pattern is a specific kind of reverse centaur - the accountability sink. It’s the human’s fault if something goes wrong, not the technology’s.


Stop the ‘okay fine AI is good at programming’ concessions

Okay this is a tangent from the point of this article, but I’m getting annoyed reading concessions that AI is good at programming now.

Cory says:

Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles.

I feel like there’s a narrative that anti-AI people think they need to concede that AI is genuinely useful in programming contexts… I really feel like pushing against that. I have unquestionably gotten use out of AI tooling. I think it is pretty questionable how much value that was though.

Using static types seems more useful than AI. Using Prettier seems more useful than AI. Stack Overflow was (is? sometimes?) more useful than AI. Discord chatrooms with the people making the library I'm using are more useful than AI.

I don’t think we should be using phrases like “genuinely make them more efficient” when describing AI in programming contexts… I would hazard that most programmers in love with AI do not have any clue if they’re more productive now. They FEEL more productive, but that’s not the same thing.

Some other things I’ve read on this subject:


Back to the article. Cory's point is that, in order for AI to replace programmers, it needs to replace senior programmers: the very people who can (sometimes) spot the subtle kinds of bugs AI tries to sneak into code.

Can AI replace artists?

Cory points out that illustrators were already criminally underpaid as an industry, and that even if AI can replace them, it would hardly generate the returns this bubble is demanding.

The purpose of replacing artists with AI (as Cory sees it) is marketing/PR for the AI product. It's coming for jobs, see!

Can it actually replace artists? I agree with Cory that it cannot. He says:

Here’s what I think art is: it starts with an artist, who has some vast, complex, numinous, irreducible feeling in their mind. And the artist infuses that feeling into some artistic medium. They make a song, a poem, a painting, a drawing, a dance, a book or a photograph. And the idea is, when you experience this work, a facsimile of the big, numinous, irreducible feeling will materialize in your mind.

I think this is a reasonable definition of art that approximates what I feel art is. Some kind of communication from one person to another via an artistic medium.

An AI has nothing to communicate. A simple prompt, diluted over thousands of pixels or words, leaves very little intent to take away from the piece.

Cory also says, and this matches my feelings, if you’re able to get some more intention-per-pixel it would be by making progressively detailed and intricate prompts. At which point - oops! - you have a human artist again.

AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it is missing something. It has nothing to say, or whatever it has to say is so diluted that it is undetectable.

Everything AI does now with public information is legal for good reason. They read webpages, count occurrences of words, and then publish a literary work (a model of what they found).

We don’t want a world where we’re not allowed to look at webpages, report facts about them, and then publish facts about that data. This is also how a search engine works for example. (Aside: Maybe search indices should be publicly owned too?)
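The "read webpages, count occurrences of words, publish facts about that data" mechanic can be sketched in a few lines. This is a toy inverted index, the core data structure behind a search engine; the page contents here are invented examples:

```python
# A minimal sketch of "read pages, count words, publish facts about them."
# The URLs and page text below are invented for illustration.
from collections import Counter, defaultdict

pages = {
    "example.com/a": "the quick brown fox",
    "example.com/b": "the lazy dog and the quick cat",
}

# Count word occurrences per page (the "facts about webpages")...
counts = {url: Counter(text.split()) for url, text in pages.items()}

# ...then publish an inverted index mapping each word to where it appears.
index = defaultdict(dict)
for url, counter in counts.items():
    for word, n in counter.items():
        index[word][url] = n

print(index["quick"])  # {'example.com/a': 1, 'example.com/b': 1}
```

The index contains only facts derived from the pages (which words appear where, how often), which is exactly the kind of publication we wouldn't want to make illegal.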

IMO - a big difference between when I do this and an AI does it is scale. But curious to see where Cory goes from here.

We have historically expanded copyright over time. Alongside that, the power of the media industry has continued to grow, and the portion of that power/wealth owned by the bosses of a few media companies keeps growing while the portion owned by workers shrinks.

Creative workers who cheer on lawsuits by the big studios and labels need to remember the first rule of class warfare: things that are good for your boss are rarely what’s good for you.

All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans

That is why the “monkey selfie” is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.

I did not know about monkey selfie! What a story! I feel bad for the artist who set up the camera to capture the picture.

Cory argues that instead of pushing for expanded copyright, we should be pushing for sectoral bargaining. This is an interesting idea!

We can do it ourselves, the way the writers did in their historic writers’ strike. The writers brought the studios to their knees. They did it because they are organized and solidaristic, but also are allowed to do something that virtually no other workers are allowed to do: they can engage in “sectoral bargaining”, whereby all the workers in a sector can negotiate a contract with every employer in the sector.

That has been illegal for most workers since the late 1940s, when the Taft-Hartley Act outlawed it. If we are gonna campaign to get a new law passed in hopes of making more money and having more control over our labor, we should campaign to restore sectoral bargaining, not to expand copyright.

What will the bubble leave behind?

we will have the open-source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video; describing images; summarizing documents; and automating a lot of labor-intensive graphic editing – such as removing backgrounds or airbrushing passersby out of photos. These will run on our laptops and phones, and open-source hackers will find ways to push them to do things their makers never dreamed of.

If there had never been an AI bubble, if all this stuff arose merely because computer scientists and product managers noodled around for a few years coming up with cool new apps, most people would have been pleasantly surprised with these interesting new things their computers could do. We would call them “plugins”.

How to pop the bubble?

To pop the bubble, we have to hammer on the forces that created the bubble: the myth that AI can do your job, especially if you get high wages that your boss can claw back; the understanding that growth companies need a succession of ever more outlandish bubbles to stay alive; the fact that workers and the public they serve are on one side of this fight, and bosses and their investors are on the other side.