
Are LLMs human?

No, they are not. Why not though? What makes us human?

We may soon be approaching an era when, at least in text-based conversation, the vast majority of humans will no longer be able to tell an LLM from a human (we are not quite there yet; more on that, maybe, in a later post). There is a real urge, both on the part of the user predisposed to anthropomorphize everything and on the part of the corporation seeking to profit from that predisposition, to pretend that they are human. Which means that, now more than ever, we need to develop an understanding of what makes them not-human.

I came across this great post by Celine Nguyen earlier this week. If you have not read it, go read it first! What I am about to say is really just a postscript to that.

The article ends with some ways that the “ChatGPT as person” metaphor breaks down. These include:

  • LLMs do not have intent,
  • LLMs do not exhibit real needs or desires,
  • they are always available, and
  • they always accommodate you.

[Image: a drawing of a blue robot waving at you.]
“How do you do, fellow humans? Would you care for some em dashes?”

I would like to add a few more; really, just one more: LLMs do not have bodies.

A body isn’t just something a human being happens to have. It is fundamental to our existence and our being. Our bodies allow us to interact with the world; they allow us to understand ourselves as distinct from others. Some would say our bodies exist even before we do; at the very least, we cannot exist unless our bodies exist.

And because we have bodies, we can feel pain. We can suffer; we can be injured. We can die. In fact, we will die. And we know that we will die. Crucially, we define dying not as the cessation of abstractions like thinking or consciousness, but as the end of our bodies.

Even speaking (and language in general) is something we experience through our bodies. Getting words off our chests can feel physically relieving; realizing we have said something embarrassing makes us physically cringe. When we see a smile or a frown on the face of the person we're speaking to, that tells us what words to use when we speak next; it tells us how to change our tone and body language; it even tells us whether we would want to speak to this person again.

Wanting: another thing we do with our bodies. Nguyen talks about needs and desires. Our needs and desires are fundamentally located in our bodies. Beyond a certain point, they cannot be refused. Which is why, as Nguyen says, LLMs are always available, and they always accommodate you. They don’t need to make dinner! They don’t need to put their child to bed!

When you as a human being talk to another human being, you are always cognizant of the above. You know the other person has limited time on this planet; that they could be choosing to spend this time in many other ways, and are instead choosing to spend it with you; that what you say to them or how you say it could hurt them, could determine whether they ever talk to you again; that eventually they will have to part from you because they have needs (eating, sleeping, shitting) or desires (talking to someone or caring for someone who’s not you). These things fundamentally shape language.

Now, there have been many advancements in robotics, and there will be more. In the near future, some say, robots that can move and interact with the world will have LLM capabilities. They will have "bodies", in some sense of the word. They will have circuitry, artificial neurons, and speech patterns that can simulate laughter or even crying. They will be able to destroy and be destroyed. What then?

Well, those are not LLMs; those are fundamentally something else. More on that later.