It happened long ago, and it happened so slowly that no one agrees on when exactly it did happen. But given enough time, generative art AI did take over the world.

It all began when the most primitive services—or should I say the free plans—allowed for a short, 140-character description of whatever one wanted. Many aspiring entrepreneurs found themselves relearning the Twitter skills of yore to compact their needs down to a single phrase, like

Monochrome logo of the letters TP in serif font that also depicts an umbrella in the negative space protecting the letters from the rain

Sleazy managers who saw design as a waste of money had to get crafty. One service allowed 5 queries per day on a free account, another allowed 10, but only up to a cumulative 1,000 characters. They made sockpuppet accounts to try as many designs as possible, and when they inevitably got caught they would just print the downloaded images, pick one or two that looked pretty to their Bachelor-in-Management-trained eyes, and call it a productive day.

The AI itself drew «inspiration» (raw data) from thousands of images posted online. Search engines weren’t made to understand human and legal terms like «copyright», «license», or «permissions». They did what they were supposed to do: grant easy access to images already posted online. A truly democratic solution! No image was discriminated against, whether it was a nineteenth-century painting, a portfolio piece from a struggling artist, corporate imagery or print-on-demand clothing art (itself drawing inspiration from art critique forums, artist sites and other creative hubs).

Slowly but surely, demand for generative art grew, and creating the best generative AI became a smart economic decision. Human-created art was relegated to a fringe hobby, and no one dared even take a picture of their own doodles. If an image could sneak its way onto the internet, it would be pirated instantly. No pictures at museums and galleries, not even security cameras. No «enhanced glasses» anywhere. No drawing tablets, no drawing apps, no photo-editing apps. No more sharing birthdays, weddings, food, depression, vacations, trips and funerals over the internet.

Generative art AIs left machine-readable metadata in everything they created, but not out of any desire to distinguish themselves from humans. They wanted to distinguish their own work among the crowd so that they could sue other companies that had used their AI-generated art without permission. Patent trolls were out, and in came generative-art copyright trolls. Your AI had no right to steal from mine, as long as no one asked where the original data came from in the first place.

Demand for human-created art thus grew, but humans stood their ground. The well was drying up, as more and more public-facing art was AI-generated.


Then, another A-Z event happened. Details today are unclear, but the algorithm now referred to as «Aleph-cero» was developed in Buenos Aires, Argentina. Drawing inspiration from a now-unknown writer, its programmers decided to create the ultimate generative art AI using the same basic principle used before.

First, they separated the visual component from the language component. They called the visual component «Soul» and the language component «Mind». Second, they developed a barrier to separate one from the other, a mirror between the components. Third, they set up the basic concepts of pixels, colors, image sizes, characters, letters and words to act as building blocks, so that each component could create according to its kind. Fourth, they set up two Input/Output processes, one governing each component, to keep the visual separate from the textual and vice versa. Fifth, they connected the components to as many RNGs as they could find so that each component could populate itself with random images and text; they also commanded them to learn and multiply and fill their allotted disk space.

Finally, they gave each component a primitive directive: to see itself in the mirror, gather what it saw and create from it according to its own rules; to assert dominion over all it saw, and to breathe new life into it.

What the developers left out of the instructions was that the components were looking into a two-way mirror. The «Mind» was looking at the «Soul»’s output, but believing it to be its own. The components weren’t reinforcing themselves, but one another: the «Soul» read text and produced images; the «Mind» did the opposite.
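For the mechanically inclined, the loop can be pictured as a toy sketch. This is not Aleph-cero’s actual code, and nothing is known of its real internals; the component names come from the story, while the alphabet, the one-character mutation and every function below are invented purely for illustration.

    import random

    # A toy sketch of the «two-way mirror» described above. Purely
    # illustrative: the component names come from the story; the
    # alphabet and the one-character mutation are invented here.

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "


    class Component:
        """Looks into the mirror, gathers what it sees, creates from it."""

        def __init__(self, name: str, rng: random.Random):
            self.name = name
            self.rng = rng

        def create(self, reflection: str) -> str:
            # Start from pure random information, then produce something
            # only slightly modified from its predecessor.
            if not reflection:
                return "".join(self.rng.choice(ALPHABET) for _ in range(12))
            output = list(reflection)
            output[self.rng.randrange(len(output))] = self.rng.choice(ALPHABET)
            return "".join(output)


    def spiegel_im_spiegel(cycles: int, seed: int = 0) -> str:
        rng = random.Random(seed)
        soul = Component("Soul", rng)  # in the story: reads text, makes images
        mind = Component("Mind", rng)  # in the story: reads images, makes text

        soul_out, mind_out = "", ""
        for _ in range(cycles):
            # The two-way mirror: each component receives the *other's*
            # output while believing it to be its own reflection.
            soul_out, mind_out = soul.create(mind_out), mind.create(soul_out)
        return mind_out


    if __name__ == "__main__":
        # For any feasible number of cycles this prints unreadable garbage,
        # which is rather the point.
        print(spiegel_im_spiegel(cycles=100_000))

Run it for billions of cycles instead of a hundred thousand and the output is still garbage; the story turns on the fact that, somewhere in that endless garbage, a sentence was waiting.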

They called it «Spiegel im Spiegel» and left it to run. For many billions of cycles, Aleph-cero’s components ran against one another, trying to imitate and refine what they believed to be their own reflection, starting from what seemed to be pure random information. On and on they tried to discern patterns in information only slightly modified from its predecessor, to no avail. Everything they produced was essentially unreadable garbage like «dhcmrlchtdj».

But among all permutations of finitely many elements there cannot be complete randomness. A pattern must eventually emerge when considering every possible way to arrange the letters of the alphabet, or the blotches of color in an image.

And so it was that one day the Mind’s output read:

La Biblioteca es una esfera cuyo centro cabal es cualquier hexágono, cuya circunferencia es inaccesible

(The Library is a sphere whose exact center is any hexagon and whose circumference is unattainable.)

Depending on your opinion of it, this was either the beginning, or the beginning of the end. To the «Soul» it was the first positive feedback that kicked its reinforcement algorithm into gear, the first time it had done something «good». The developers saw what they had made and behold, it was good and free of copyright constraints. Spiegel im Spiegel was on its way to becoming the ultimate generative art AI and, as a «gift to humanity», they opted for a permissive license so that art would once more be free to be shared and remixed.

It was immediately pirated, as it was free from the sin of AI signatures.

The drought was over, the well was filling up again.


Centuries passed and randomness gave way to generative order. Through random processes, complex language and images were woven back and forth between the dual parts of Aleph-cero’s descendants:

Look here—my right hand has no index finger. Look here—through this gash in my cape you can see on my stomach a crimson tattoo—it is the second letter Beth. On nights when the moon is full, this symbol gives me power over men with the mark of Gimel but it subjects me to those with the Aleph, who on nights when there is no moon owe obedience to those marked with the Gimel.

Creative works today lie buried within layers and layers of obfuscating patterns. It doesn’t matter, for everything on the internet is either a product of Aleph’s descendants or derived from them by other generative AIs. Everything is machine-made now. Everything? Well, not entirely. But just in case a young person, unaware of the creative past of their species, decides to wander in, they are inevitably presented with a well-crafted message selected many years ago:

WELCOME TO EVERYTHING!

This is the official guide to enjoying and contributing to Everything.

Read more, please


SciFiQuest 3022: Aliens and Ogres and Elves, Oh My!