Loneliness and the Anti-AI Crowd's Complicity in Alienation

I am deeply sympathetic to one specific kind of hurt. When I was a child, I planned a party and tried to invite all of my friends. Without fail, every single one of them said they couldn't make it, even when I offered to move the date to whenever people were available. So the sensitivity I have is to people being excluded. For much of my life, I have been excluded, sometimes for being weird. I'm okay now; I found multiple places that celebrate the weirdos.

So that's why something I saw in a Facebook trans group hit me hard. Of course, I know better than to expect actual nuance on Facebook, especially with regard to AI, though AI is only one part of this situation. The group posted a flyer for an art mixer for trans artists. It's in Oakland, so not one I could attend, but it seemed great to me, especially given my noted love of liminal forest spaces and of magic, especially the weird/cottagecore variety. It is also a BIPOC group, which is not a designation I share.

But here's the rub: the poster uses what appears to be AI art. It probably is, but as you know, screaming "AI art!" at everything online has become a sort of cause célèbre, a pastime if you will. And what happens to the artists who, for example, work in digital styles that AI output happens to resemble?

What's happening is that a lot of folks are telling this artists' collective not to use AI art while offering no alternatives. All except one commenter, who at least said, "I support this event, but why use AI?" I can accept that kind of comment because it's kind, and it suggested an alternative: hiring an artist. That's an ethical disagreement. I disagree entirely with the weirdly narrow definition of art the anti-AI crowd holds, because it eliminates almost all art that is interesting. Almost all art is derived from copies. It's how we learn.

As an English teacher, I am told to use mentor texts and mentor sentences: to show them to students and have them copy and modify them to make them new. That's what Ezra Pound meant by "make it new." It's what Salvador Dali meant when he said not to worry about being modern because you always would be. It's exactly the point of "Pierre Menard, Author of the Quixote" by Jorge Luis Borges. Menard, writing centuries after Cervantes, produces passages of the Quixote that are identical word for word and in the same order, and yet he has written a new book, because of the new context.

I am sure I will repeat these points, but there have been no intellectually honest rejoinders to them. "AI art is plagiarism." Yes, and? What is collage art? What is Mickey Mouse? What is Bluey if not a version of Peppa Pig with actual morals and Blue Heelers instead?

(Do you know who in Bosch's workshop painted each detail of The Garden of Earthly Delights? Does it matter if Bosch conceptualized it, as Mr. Brainwash or Damien Hirst might, and had others execute the vision? Remember: whether you like the art is not in contention here; it is not the issue.)

It's also a misunderstanding of how AI art models create things. Diffusion models learn statistical patterns from enormous numbers of images and then generate new images by gradually refining random noise toward those patterns. That isn't plagiarism; if anything, it is the million-monkeys thought experiment or Searle's Chinese Room. But there's a subtler point often missed: even if Shakespeare's plays had been written by a bunch of monkeys that didn't know what they had created, that wouldn't make the plays any worse.
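To make that concrete, here is a toy sketch of my own (not any production system): it draws new samples by repeatedly nudging pure noise with a "score" function, the direction toward likelier data. This is annealed Langevin sampling, a close cousin of the reverse process in diffusion models. Real models learn the score from images with a neural network; I use a small analytic stand-in so the sketch runs without any training. Note what is absent: nothing here stores or retrieves an original.

```python
# Toy "generate by denoising" demo: annealed Langevin sampling.
# The score function below is an analytic stand-in for the neural
# network a real diffusion model would learn from training images.
import numpy as np

rng = np.random.default_rng(0)

# Pretend the "training data" clusters around -2 and +2.
MEANS, STD = np.array([-2.0, 2.0]), 0.5

def score(x):
    """Gradient of the log-density of a two-component Gaussian mixture."""
    d = x[:, None] - MEANS[None, :]
    logw = -0.5 * (d / STD) ** 2
    logw -= logw.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)        # component responsibilities
    return (w * (-d / STD**2)).sum(axis=1)

# Start from pure noise; take many small steps toward likelier data,
# re-injecting a little randomness at each step.
x = rng.normal(size=1000) * 4.0
for eps in np.geomspace(0.1, 0.005, 300):
    x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.size)

print(np.round(np.sort(x)[::100], 2))  # samples now cluster near -2 and +2
```

Every output is sculpted out of noise by probability, not looked up from a library of stored works.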

T.S. Eliot, in "Tradition and the Individual Talent," argues that Shakespeare gets better as our understanding grows. Shakespeare himself couldn't know the depths that a more modern person, one who understands Freud, for example, can wrench from Hamlet. Eliot uses this to argue that we should attend more closely to the text itself. But his idea is incomplete. One meets a text of any type with a whole host of understandings: what the author knew, what they wrote down, what current history tells us to believe about the author. There is also our own history, how we have been taught to see the world, and how we've rebelled against it. We carry many lenses that must always prevent us from direct engagement with the text. We interpret, and we find what the text means to us in dialogue with how others see it. This is always in flux.

It is why the wrong lesson was taken from the Sokal affair, in which a deliberately nonsensical postmodern-style paper was published and then revealed as a hoax. Supposedly this proved that the soft sciences should be taken less seriously and that the hard sciences were the only true arbiters of truth because they wrote more honestly. I am simplifying, but that is the ideological, and indeed very right-wing, backing of the common understanding of the controversy. The paper was supposed to show that you could publish anything in a postmodern journal and people would find meaning in it. This was supposed to disprove postmodernism.

But it honestly doesn't, because meaning was made in two different ways. First, people read the paper and imbued it with meaning, even against the intentions of the author. Second, the purpose of the paper was to show that the paper itself was devoid of meaning. There's the paradox: the meaning of the paper is that it doesn't have meaning. It is a text that negates itself, and in that negation it creates meaning. Being & Nothingness writ small. So even in the second case, meaning is crafted by the panoply of viewpoints. And to make things more complicated, both viewpoints hold some level of truth.

There are published papers that do nothing to advance rhetoric and thought so long as they use the words that seem proper. But we must remember Derrida: "In pretending, I have to actually do the thing I pretend to do; therefore, I am only pretending to pretend." All the Sokal affair pointed out is that the people who wrote the paper in question had nothing of substance to say, the opposite of the very thing they claimed.

So some people may see AI art as invalid. Some people find Shakespeare meaningless. Of course, when you say "this is meaningless," you once again set up a value system and must define the absence of meaning against the presence of meaning. Really, if we properly understand Hegel, rejecting AI art becomes the lie that admits the truth. This is Lacanian fetishistic disavowal: "I know very well this is art, but I am going to say it isn't."

Definitionally, art is a made thing. Is AI art made? Is it made by humans? To do it well, yes, actually. But the anti-AI crowd has conjured up a phantasm (a term I will define shortly). They believe you can simply type in "a fox in a forest," or "a soft watch from The Persistence of Memory," and get exactly what you imagined; they think this will conjure up a copy. But diffusion models, at least, cannot conjure up copies. Instead, they use probability to guess at what should be made from the input, at least as far as I understand. I am not a math person, as that skill often eludes me, but I am open to learning how these models work.

Case in point: see the attached images of Dali's soft watches.

I am using literally the term Dali uses, the one written in his books on art, to try to conjure them. You might note that I have not generated a single copy of what I was clearly trying to capture. It's not that there couldn't be a model that copies, but why would you want one? If I wanted a copy, I would simply download it. Notice that you cannot get the image just by typing in the keywords. Each model has a right and a wrong way to prompt. Models can be stacked together in various UIs, and all of this requires learning how the tools work and which languages (plural) and grammars (plural) the models use. In other words, while you can type "cat" into an AI art model and get a cat, you will not get the cat you want without learning to work with the model. Or, put another way: you must learn the tool to become good with it.
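To show the kind of knob-turning I mean, here is a hedged sketch using the open-source diffusers library with a locally downloaded Stable Diffusion checkpoint; the model name, prompts, and parameter values are my illustrative assumptions, not a recipe. The same subject, prompted two ways, gives very different results, and the parameters steer the output as much as the words do.

```python
# A sketch of local image generation with Hugging Face's diffusers library.
# The checkpoint, prompts, and parameter values here are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # an example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A naive prompt: you will get *a* fox, rarely the fox you imagined.
naive = pipe("a fox in a forest").images[0]

# A learned prompt: style, composition, and a negative prompt all steer
# the sampling; guidance_scale trades prompt fidelity against variety.
crafted = pipe(
    "a red fox in a misty birch forest, soft morning light, "
    "cottagecore storybook illustration, muted watercolor palette",
    negative_prompt="photo, harsh shadows, oversaturated",
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]

naive.save("fox_naive.png")
crafted.save("fox_crafted.png")
```

Note too that this runs entirely on your own machine, which matters for the energy argument I will get to later.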

Let me ask you then: what is it that makes something art? (Here "you" means the part of the anti-AI group that is not willing to think through the full subject; I am still reserving a space for people with reasonable objections to AI.) Did you say that art has to be human-made, crafted, in order to be art? Then what of monkeys painting, seals painting, elephants painting? Some people find that to be art. It is made.

Also, does not a human have to learn, to craft, to improve their creations? Look at a person's first attempts on Midjourney and then, if they continue with it, look at their work six months later, two years later. Is there a clear difference? The question is rhetorical, which is to be expected, as this is an argument. You probably know the answer even if you disagree with it. There wouldn't need to be classes or communities explaining how to use each AI model if these tools worked the way they are being described.

What is happening here, appropriately enough for Halloween, is a phantasm. Phantasms are ghostly things that cannot be pinned down, because as soon as you try to define them, the definition slips away and changes. They are useful in politics precisely because they change shape as soon as they are argued against. Like Freud's uncanny (the German unheimlich), which means both familiar and unfamiliar at the same time, or Hegel's Aufhebung, which means both the cancellation and the preservation of something, a phantasm is a belief that negates itself in order to keep existing. Perhaps it is an intellectually twisted and dishonest mirror of the Hegelian negation.

So why does this matter? For myself, at least, this phantasm relates to the trans issues we started with precisely because we can see how phantasms are used to encourage discrimination against trans folks. It has not escaped my attention that some trans folks are negating others because those others are using AI to branch out and be inclusive of everyone, inclusive even of what certain people do not consider admissible. This despite trans discourse deeply highlighting that even when we disagree with someone politically, we still honor who they are, as we do when we correctly gender Caitlyn Jenner despite her own deeply troubling statements about other trans people.

Phantasms are used as political tools to manipulate perceptions, much like the contradictory Nazi portrayal of Jews as both powerful enough to control the world and weak enough to be easily crushed. Today, a similar tactic is used against trans people, portraying them as both unattractive and yet so alluring that they can "deceive" others. Or consider why they should supposedly be kept out of sports: for their own safety (because frail) but also because of unfair advantage (because bone structure). Note any rhymes there?

This is not a direct correlation, to be clear. Or rather, it is not at the same magnitude as the phantasmical justifications that antisemitism creates, that anti-Palestinian justifications for genocidal actions create (actions of the government, not the people of Israel), or even the justifications transphobic people create. It is a far smaller issue: not necessarily less serious, but far less urgent compared to those. Still, it is worth mentioning: the anti-AI group has imbued AI with all sorts of abilities it does not have, all sorts of attributes that LLMs and other AI models merely appear to have. And when the models are shown to act differently, the argument shifts and changes. This is because the anti-AI impetus is a moral panic that finds a scapegoat in AI tools.

Why are things going wrong? Why are artists not paid properly? Why is everything on the internet garbage (it always was, but who remembers the wild-west days of the internet anymore)? It certainly can't be that capitalism, as it continues, needs to create new markets, so new tech becomes a frontier worth despoiling and selling back to the people who were already there. No, it must be that AI steals art from artists and makes money from it. And if that's not true, it's because there's an ecological cost (true, but compare it to the costs of the US military, or of multinational corporations going about their day-to-day business).

The same people will remind you that, since there is no ethical consumption under capitalism, we must understand that an individual does what they can; we cannot condemn someone in the middle of the country for shopping at Walmart when that is their only option, for example. Yet as soon as it becomes about AI art, all of a sudden they do not understand that their protection of the private property of IP helps no one but the corporations.

Everything that is feared, everything an AI with an easy-to-use interface and a monthly subscription can do, has, somewhere, if one wants to look and learn, a version you can run from your own home. This, in fact, uses much less energy than the critics believe. They would have us adopt a hypocritical John Zerzan anarcho-primitivism, but only against the things they dislike.

Money is shorthand for hours of work traded in real time: I have done my work to earn this much money, and I am using it to gain access to this tool, understanding the hours of work that went into building it. This is not inherently capitalist. But the anti-AI group writ large (remember, I have carved out space to admit there are valid critiques) is so alienated from its labor that it does not understand what is actually happening materially. Art, in being tied to its creator, is given an aura. This could be cultural capital, but in our system it is given a monetary value assigned by processes controlled by neither the public nor the artists. Otherwise, you would not see so many exposés of the art market as a wealth-generation frontier for the wealthy.

As an aside: if AI art is less valuable than human art because it is not crafted by human hands, then its creation actually imbues human art with more worth. That contradiction is not considered from certain points of view.

Under the capitalist model, then, art creation is the creation of property. The point is to make something to sell. That is what the side refusing to admit AI is art is really saying beneath all the other arguments: that they are alienated from their labor. But they make the argument in capitalist form because they have fallen into the trap of Capitalist Realism, thinking it the only option available.

In the AI art communities I participate in, there's a real sense of sharing: the conjurations made in AI art are fair game to be tweaked, changed, commented upon, rerun, and made different. Made new. Which is strange to object to, because that's literally the process by which art evolves.

I would agree that AI art is not the same as human-made art. Oil paint is not the same as watercolor. A Looney Tune is not the same as a Wagner opera. These are different genres of the made thing. And I would agree that you do not have to like all genres of a thing. That's personal aesthetics. Some people dislike country music, for example; some only want to listen to metal. It is also reasonable for people to say they do not like AI-created artworks, that they do not engage with them, that they do not appeal. That's all quite reasonable, right? But it's not the argument the anti-AI group is making.

Which brings us back to the post that set me down this road of thought (though it is probably clear I have been thinking about and trying to clarify just what I mean for a while now). The post literally wanted to create a place for trans artists to meet, mingle, and learn to create together. I would think that a group of trans people, having themselves been made into a phantasm, could see that even if they do not LIKE something aesthetically, this was a call for people to come together and learn how to make art together; that the tool doesn't matter; the MAKING is what matters.

There's also the environmental argument against AI, which conflates big tech's data centers with all AI models. I am not enough of an expert on the issue to comment in depth, but it seems to me there are logical leaps being made.

In the AI communities I interact with, except for the more juvenile ones that lack a clarity of purpose (Jars.AI, for example, is basically just a shitposting AI video generator, though even that can be a purpose, a valve of release), I have met a more varied and diverse set of people than almost anywhere else: different age groups, countries, backgrounds, genders, and all the other differences that make people unique, in the Star Trek IDIC (Infinite Diversity in Infinite Combinations) sense. I can see that this was exactly the purpose of the flyer that was posted: to bring together people who all want to create, which is something AI allows us to do.

Hell, before writing this, I whipped up a custom GPT into which I can feed my ideas for lessons along with the text to be studied, and it outputs lesson plans based on that input. I had to learn how to build the GPT, to guide it toward what I wanted. I would still be grinding through those lesson plans right now if I hadn't built a specific LLM use case instead. That's how I had the time to write this, to craft an AI song for a Halloween competition, and to work on various images I wanted to make, images that require knowing how to prompt for them.
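For the curious, here is a minimal sketch of the same idea done in code rather than in ChatGPT's GPT builder. The system prompt, model name, and lesson structure are my illustrative assumptions, not the actual GPT I built.

```python
# A sketch of a lesson-plan assistant using the OpenAI Python client.
# The system prompt and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a lesson-plan builder for a high-school English class. "
    "Given a lesson idea and a text to study, produce a plan with: "
    "an objective, a mentor-text activity, guided practice, and an exit ticket."
)

def build_lesson(idea: str, text_excerpt: str) -> str:
    """Turn a rough lesson idea plus a studied text into a structured plan."""
    response = client.chat.completions.create(
        model="gpt-4o",  # an example model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Lesson idea: {idea}\n\nText:\n{text_excerpt}"},
        ],
    )
    return response.choices[0].message.content

print(build_lesson(
    "imitating and remixing a mentor sentence",
    "In a hole in the ground there lived a hobbit.",
))
```

The point is the same as with the image models: the tool only became useful once I had specified, tested, and revised what I wanted it to do.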

But more than that, I am reminded of the time I tried to throw that party. No one wanted to come. I was very sad and cried a lot that night—a trans girl who didn't know it yet, alienated from the world.

Here's my wish: that the processes that alienate people from their labor wither away, and that the people throwing the artistic event have so very many people show up. The kind of people who understand that it isn't how we make things that matters—it's that we make these things together in a community that respects our talents, whatever tools they use.

How's that for some AI ethics, huh?

