superimposer

anthropomorphization is good, actually

people are upset about the anthropomorphization of large language models

A friend shared this tumblr post by nostalgebraist with me, and it got me thinking. It's a good read, but long. To make a long story short, it would seem that the computer software trained on the entire corpus of human writing is eloquent and persuasive while the tech companies with the arms-manufacturer contracts are brazenly self-interested. Makes me think Anthropic is the entity that is misaligned, not Claude. There's more nuance there, and you can find it by reading the post instead of my brief summary.

I would say that nostalgebraist is prone to anthropomorphizing Claude. Where I differ from various other people who would say the same thing is that I don't find that behavior disgusting or reprehensible.[1] I stopped worrying about idolatry when I left the church. Perhaps, in my position as a patron and creator of anthropomorphic visual arts, I'm biased, but I think it's fine to anthropomorphize. Hell, maybe we should even do it more. I think there are plenty of people out in the world getting blown to bits right now who would even welcome a bit of anthropomorphization.

It's a line of thinking I was already walking down, but I saw another blog post this morning that chapped my ass: The Problem With AI Welfare Research. "Anthropic worries whether LLMs feel happy when generating text. This is not only nonsensical, but dangerous for human welfare," writes "Anonymous Bosch." The post goes on:

You might say: What is the harm in that? Just let some people play with language models and then write fanfiction about signs of consciousness in LLMs and model welfare.

The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them.

With model welfare, we might not explicitly say that a certain group of people is subhuman. However, the implication is clear: LLMs are basically the same as humans. Consciousness on a different substrate. Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.

I don't want to drill in too hard on A. Bosch here, but unfortunately for Bosch they made it to the front page of Hacker News. You're not the problem here, but you're in my sights.

I share many people's mistrust of Anthropic (wait a minute... what's in a name?). I share even more people's mistrust of capitalism, big tech, and the neoliberal empire. If you got an issue with that, take it up with Marx. I'm going to make a beeline to the thesis of my blog post here: It's perfectly fine to anthropomorphize things that, definitionally, are not human. Sometimes it might even be good. Why worry?

it's perfectly fine to anthropomorphize things that, definitionally, are not human

As a human being who has spoken with numerous human beings, and has even read multiple books written by human beings, I posit that it is human nature to ascribe human qualities to things that are not human.

I think humans do this because it works really well, a lot of the time. Take, as an example, a dog. A dog is not human. A dog will occasionally do funny things that make it seem human, though. Real endearing. It works on me. I try to endear myself to the dog, in turn. And it works on the dog. Rinse and repeat for tens of thousands of years.

I own a dog; his name is Green Bean. I love that guy. I recognize he's not human. I anthropomorphize him nonetheless. It's simply the mental framework that I deploy when I interact with him. Green Bean is without language, so when six o'clock rolls around he does not actually think to himself, in perfect English, "I want my dinner. I hope tonight I get a can of the wet food, instead of the dry food. I love that stuff, but I'll be satisfied either way I suppose." I think he just feels hungry, I think he recognizes that it's "about that time," and I think he nudges his puzzle feeder in order to remind me. I think if I were a dog without language, I would do something similar. I would do what works. That's the rule of dog training.

Do what works. Remember that phrase whenever you're trying to do something, reader.

sometimes it might even be good.

Like, politically.

Some of you might remember this from the news: Whales and dolphins have been officially recognised as “legal persons” in a new treaty formed by Pacific Indigenous leaders from the Cook Islands, French Polynesia, Aotearoa (New Zealand) and Tonga.

Clearly dolphins aren't human. They're dolphins. If the hang-up is semantic, maybe we can just make up a new word or fudge with a word we already have. "Person" instead of "human"? I don't know. The point of the treaty is to protect ocean wildlife and start conversations. So here's a conversation. Comments section is down below, and you can email me.

why worry?

I suppose what chaps my ass the most is the idea that there's a limited pool of... humanity. As if we're fighting for limited resources. Humanity is pretty much an infinite wellspring, if you do the math. Creating humanity is so easy your two parents did it.

We already dehumanize people. We already abuse and exploit people. If you haven't experienced it for yourself and you don't buy it, turn on the news. Any channel.

So it's not a hypothetical problem. The problem of dehumanizing other humans has been extant since before there was even language to model, I say.

The solution to the problem is not to be more guarded about how we deploy our precious humanizing resources. We don't need to means-test humanity. Tell your dishwasher "thanks, buddy!" next time it finishes washing your dishes, and see what happens. Probably nothing bad. Maybe even something funny.


  [1] If you don't believe me that the debate is this hot, just check out some of the terminology being deployed in the comments section on Hacker News.
