In recent months, maybe only recent weeks at the time of this writing, there has been debate over whether or not AI Art is Art, and how hostile writers and artists should or shouldn't be toward it. AI advocates see it as an exciting new tool to augment human ability, while its detractors see it as a destructive force that steals work from artists, both by limiting their job opportunities and by using their work to improve itself while giving no credit to its sources.

I think I want to define my position for posterity, if only because this is a fascinating and pivotal point in human art, perhaps rivaled only by humans discovering they could use paint to put animals on cave walls. I don't write that lightly, but I think AI has the potential to change everything. I'm a writer by hobby and sometimes by trade, so my biases lean that way. However, visual artists will likely suffer the most damage of any of the arts.

I'd define art simply as a machine (a natural machine rather than an AI one, perhaps) that generates ideas or emotions in the viewer. This can be as simple as a story that says "fascism bad," or it can be the complicated emotional crash one feels looking at Ivan the Terrible and His Son Ivan. When I write something, it is a personal letter from me to you across the space of time. I like stories; these stories have a central theme, or they're meant to entertain, or both, and that's what I'm doing with my art. Other artists have different reasons.

My biggest problem with AI Art, I think, is that it cannot present an idea or emotion. Because of the way it works, AI at best generates an average of an idea. It takes what others have done and predicts what a person might create based on a database of similar items. So, when I tell it, "Write me a script where Buffy and Faith get along," it's going to pull locations from Buffy the Vampire Slayer and have the characters talk the way characters getting along would talk. The witty dialogue isn't going to be there, because most people don't write characters like that; only Whedon does. But it can see that the plot of Buffy is more or less "girl kills vampires." The problematic nature of the creator is gone, but so is the creation, and we get something very vapid and bland. Never so bad as to be actually funny, though. If I replaced Buffy and Faith with any other characters from any other show or book, or even put random names in, the result would be similar, boringly similar. It's able to pull the vocabulary of authors, and it vaguely "understands" that a prompt with Shakespeare in it ought to use thees and thous, but it doesn't have any ability to produce meaningful work. That problem might be fixed in later AIs, but I doubt we'll get to the point where I can ask for a deep, narrative-driven plot with interesting characters and actually get something worth reading. At least, it won't happen with how AIs currently work.

For visual artists the problem is much worse. Companies have always used artists to create ads, book covers, and graphic design for logos. Companies being what they are, they will always seek out the cheapest means to their ends, and an AI that can do shitty but passable imitations of artists is going to win every time. Additionally, AI art is so easy to make that it is not inconceivable that machine-made art crowds out living artists entirely, so that their ideas never reach anybody else.

So, the future we have to look forward to is one with thousands of shitty book covers and the blandest-looking ads with the worst fonts imaginable, with artists doing art primarily in isolation or under the influence of AI. Which won't matter, because no one will see their work anyway.

The writer fares a bit better, but given the garbage on TV, I'm not sure by how much.

Advocates say that AI will be best used as a supplementary tool. Writing opening lines to books is notoriously difficult, for example. An AI (they say) can ease the load. As an experiment I tried this with ChatGPT: "Give me seven great opening lines!" At first it gave me famous opening lines. I remember Neuromancer was one of them. Some Dickens. A Stephen King book. I amended the prompt to "Give me seven original great opening lines!" What it spat out was not great.

I've observed that, because there is such an emphasis on opening lines in writing workshops, prompts, publishers, and so on, writers tend to try way too hard on their first sentence, so hard that the reader can feel the machinery chugging along underneath. These sorts of "super-charged" openers are often worse than a boring opening line, and I think it's about time we had a discussion about whether or not "Billy looked down in horror at the blood all over his hands. 'Christ,' he thought, 'is it really only Tuesday?'" is better than "In the bosom of one of those spacious coves which indent the eastern shore of the Hudson, at that broad expansion of the river denominated by the ancient Dutch navigators the Tappan Zee…"

But that's all digression. The point is that since this is what humans do, the machine does it too, only more bland and average. There's false emotional pathos and bathos and shock and awe, but it all falls into the uncanny valley. It's like talking to a Working Joe, one of the creepy androids from Alien: Isolation. Nothing about them is right, and the stretched plastic skin and robotic voice quipping at you only remind you that they aren't human.

It takes a long time to write a book. An AI could conceivably do it in a day. You might, out of laziness, some desire to get ahead, or some philosophical reason, decide that instead of having an AI write the whole book, you'll use it to augment your writing skills, much like spellcheck cleans up your spelling.

You might be able to do some impressive stuff this way, but, much like a baseball player on steroids, it isn't really impressive. AI, much like steroids, is an augmentation that takes away from the prestige of the game and the credibility of the player. Barry Bonds can hit as many home runs as he likes, but if he's using drugs to do it, he's not doing it himself. Training, grit, and perseverance are impressive. Using technological or chemical magic is not.

This barely touches on the ethical concerns. AI Art trains on a lot of sources by necessity. It pulls from websites and thousands of uncredited artists. If I want "soldier lady in 19th century dress," it is going to give me an approximation of every artist who has drawn that across multiple websites, whether they want their art used like that or not. A lot of these artists might have strong objections: their work is being used to put them out of a job, it may make money for somebody who didn't do the work themselves, and any number of other concerns besides. The main contention is that the influences are not cited.

There are also advocates who suggest that this is no different than what human artists do. I read a lot of Stephen King as a teen, and surely there’s some King in my work. The same for Susan Cooper at an even younger age. Diane Duane. More even. Surely, if AI has to cite its sources, humans should too.

First, I'd like to note that AIs are not humans, do not function like humans, and any comparison to the contrary is anthropomorphizing. There's no functional parallel; while the output is vaguely similar, the inner workings are completely different. We tend to equate the brain with a computer because that is what we're familiar with, but the brain is not a computer and doesn't operate like one in any meaningful sense. We can both do math, but how the brain does it and how a computer does it are completely different. Human math is bad, machine math is good. Human art is good, machine art is bad; and both are products of the different processes going on beneath.

Second, the synthesis a human mind makes after interpreting different works is not an average. Rather, it is a dialogue. If I read something I disagree with, I can advance a counter-argument, perhaps present a thesis. I can mock, I can satirize. If I want to satirize some company, I can remember back to some Chaplin and go, "Ah, there was a good technique." If I remember the visual techniques in an Akira Kurosawa film, I can try to render them in prose to see how they'd work, and then discard or enhance them depending on whether they do. Translating a cinematic technique to prose might fail miserably, but it'll be fun to try. These are aesthetic choices, all in service of the central goal of the work: to entertain, to inform, to make you happy or sad.

AI, by virtue of how it functions, cheats all of this. It doesn't synthesize so much as aggregate and predict: "A human statistically writes this, so I will write this." Most humans are bad at art, so what it predicts is bad, and the real danger I see is that our world becomes flooded with AI art that builds on this mediocrity until that is what we expect our stories to be. A hundred years from now we might be sitting in a bland holodeck with recycled artificial stories done in the style of whatever artist we choose, but without the spark of human passion. An algorithm feeding itself blandness, while our entire human artistic world, from the cave paintings to pop music, falls into rot, and the minds behind the machine stagnate and die.