ChatGPT seems to be the current Big Thing, the Buzz, the Door to a New Era, etc. Microsoft has invested 10 billion dollars in it. There has been a downpour of media articles on how it can do the work of computer programmers, artistic painters, writers of any kind, teachers, and just about everyone. We've perhaps gotten used to the over-hyping of new tech and tend toward skepticism.

But what is it, exactly? Let's ask it. No, seriously. What follows is real stuff (except for the hardlinks, which I added).

 

What is ChatGPT?

When asked, ChatGPT tells us:

"ChatGPT is a conversational AI model developed by OpenAI. It is a transformer-based language model that was trained on a large corpus of text data, allowing it to generate text in a human-like manner. The name "ChatGPT" stands for "Chat Generative Pretrained Transformer."

ChatGPT can be used to generate responses to text inputs in a variety of settings, such as chatbots, virtual assistants, and language translation applications. It uses deep learning algorithms to understand the context of the input text and generate an appropriate response. The model has been fine-tuned on specific tasks, such as question answering and conversation, to improve its performance in these domains.

Overall, ChatGPT represents a major advance in the field of natural language processing and has the potential to revolutionize the way people interact with computers."

My thoughts:

The response is well-formed in sentences and paragraphs and appears to be an appropriately concise and adequate brief answer to the question, although a human writer might choose to compress the three paragraphs into one, considering how short they are. That aspect is excellent; better than most writing we see on the Web or even on paper.

A less technical description is that ChatGPT works by predicting what words will come out in a sequence according to some established context. In principle, it is no different from a first-order Markov chain of words and punctuation, latent semantic analysis, latent Dirichlet allocation, or other generative tools in the statistical lineage. It differs vastly in the scope and types of contexts it uses. No one can really know, but the base model used by ChatGPT (GPT-3) seems to embed grammatical patterns and other structural elements in some form.
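To make that "predicting what words will come out" concrete, here is a toy first-order Markov chain text generator; a minimal sketch of my own, with an invented corpus and function names, not anything from OpenAI. The conditioning context here is a single previous word, whereas GPT-3 conditions on thousands of tokens, but the basic move of predicting the next word from what came before is the same.

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=20):
        """Walk the chain, picking each next word by observed frequency."""
        word, output = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break  # dead end: this word never appeared mid-text
            word = random.choice(followers)  # duplicates make this frequency-weighted
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(build_chain(corpus), "the"))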

While the output of ChatGPT is truly impressive, the big 'but' is: can we trust the information as true? Sadly, this is a general problem these days; it has made smart people more skeptical while allowing others to follow their entrenched Will to Believe through selective acceptance. My own current understanding of ChatGPT is limited, but sufficient to recognize no lies in this response, and I trust it to my own satisfaction. There are some technical terms, and 'context of the input text' is a bit foggy. A bit surprisingly, the final sentence is a bit of an opinion. Let's ask for some clarification.

 

What is a transformer-based language model?

ChatGPT:

"A transformer-based language model is a type of deep learning model used for natural language processing tasks such as language translation, text classification, and text generation. The model is called "transformer-based" because it uses the Transformer architecture, which was introduced in 2017 by Vaswani et al. in their paper "Attention is All You Need."

The Transformer architecture is a neural network architecture that is specifically designed for processing sequences of data, such as sequences of words in a sentence or sequences of characters in a word. It is particularly well-suited for language tasks because it allows the model to attend to different parts of the input sequence when making predictions, rather than processing the entire sequence in a fixed order.

The Transformer architecture uses a self-attention mechanism, which allows the model to weigh the importance of different parts of the input sequence when making predictions. This allows the model to capture long-range dependencies between words in a sentence, which is important for many NLP tasks.

Overall, transformer-based language models have proven to be highly effective for a wide range of NLP tasks and have revolutionized the field of NLP. ChatGPT is one example of a transformer-based language model, and it has been trained on a large corpus of text data to generate text in a human-like manner."

My thoughts:

Again, no evident reason to doubt factual accuracy on this technical topic, but readers might miss having references to back up assertions, especially the 'have proven to be highly effective ... and have revolutionized ...' part. If ChatGPT could properly cite sources, and form actual opinions, that would be a genuine 'wow!' The potential for factual error was demonstrated perfectly by a recent demo that went horribly wrong: in Google's delayed attempt to get on the bandwagon, a promotional post on Twitter showed its rival chatbot giving a factually wrong answer.
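Since 'self-attention' carries most of the weight in that explanation, a minimal numerical sketch may help. This illustrates the scaled dot-product attention from the Vaswani et al. paper in general, not OpenAI's actual implementation, and all sizes and names here are invented for the example.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over one sequence.
        X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # every position scores every other position
        weights = softmax(scores, axis=-1)       # scores become weights over the whole sequence
        return weights @ V                       # weighted mixing captures long-range dependencies

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))                 # 5 tokens, 16-dimensional embeddings
    Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8)

The 'weights' matrix is where the model 'attends to different parts of the input sequence': each row says how much one position draws on every other position.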

 

How is the fine-tuning of ChatGPT done?

ChatGPT:

"Fine-tuning ChatGPT involves adjusting its parameters so that it is better suited to a specific task or domain. This is typically done by continuing the training process on a smaller, task-specific dataset.

The process of fine-tuning ChatGPT can be broken down into the following steps:

  1. Pre-training: The ChatGPT model is first pre-trained on a large corpus of text data. This allows the model to learn patterns and relationships between words, sentences, and paragraphs in the text data.
  2. Task-specific dataset: A task-specific dataset is created that is representative of the task the model will be fine-tuned for. For example, if the task is to generate responses to questions, the task-specific dataset would consist of questions and their corresponding answers.
  3. Fine-tuning: The pre-trained ChatGPT model is then fine-tuned on the task-specific dataset. This involves using the task-specific data to update the parameters of the model so that it can generate better responses for that task. The fine-tuning process typically involves running multiple training epochs on the task-specific dataset, using a variant of the standard supervised learning objective function.
  4. Evaluation: Once the fine-tuning process is complete, the model is evaluated on a set of evaluation examples to assess its performance on the task.

Overall, fine-tuning ChatGPT is an effective way to tailor the model's behavior to a specific task or domain, allowing it to generate more accurate and relevant responses."

My thoughts:

Being wrong in fact is not the only problem with generative models that are not 'persons': bias on social issues can be a major problem, and it already has been. Bias is not unique to generative models like ChatGPT; it is a general problem in any statistical technique. The models are trained on text written by people, mostly on the Web, so when you talk to ChatGPT, you are talking to some large subset of the Web. This is the problem of 'garbage in, garbage out' and 'you are what you eat'. There are ways to diminish bias, but the fine-tuning of ChatGPT has people in the loop, people who select a more 'correct' set of data for retraining, so one bias may just be substituted for another.
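For the curious, step 3 of the recipe quoted above looks roughly like an ordinary supervised training loop. A minimal sketch, assuming the Hugging Face transformers library, with the open GPT-2 model standing in for the proprietary base and a tiny invented dataset (a real run would also mask padding tokens in the labels and hold out evaluation data, per step 4):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # step 1: start from a pre-trained model
    tokenizer.pad_token = tokenizer.eos_token

    # Step 2: a (tiny, invented) task-specific dataset of question/answer pairs.
    pairs = ["Q: What is 2+2? A: 4", "Q: Capital of France? A: Paris"]
    batch = tokenizer(pairs, return_tensors="pt", padding=True)

    # Step 3: a few epochs with the standard causal language-modeling objective.
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for epoch in range(3):
        optimizer.zero_grad()
        out = model(input_ids=batch.input_ids,
                    attention_mask=batch.attention_mask,
                    labels=batch.input_ids)  # labels are shifted internally by the model
        out.loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {out.loss.item():.3f}")

(ChatGPT's actual fine-tuning also involved human feedback and reinforcement learning, which the quoted answer glosses over.)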

 

Name 10 of the parameters of ChatGPT

ChatGPT:

"I'm sorry, but it is not possible to list 10 specific parameters of ChatGPT as it is a complex machine learning model with millions of parameters. The parameters of ChatGPT are the weights in its neural network, and they determine the model's behavior and its ability to generate text responses. These parameters are learned through the training process, during which the model is exposed to a large amount of text data and the parameters are adjusted to minimize prediction error.

It's not feasible to list 10 specific parameters as they are part of a massive neural network with millions of interconnected nodes. The exact values of these parameters are not publicly available, as they are proprietary information belonging to OpenAI."

My thoughts:

This response equates its parameters with artificial neuron weights in an artificial neural network, but the model ChatGPT is based on (GPT-3) has roughly 175 billion parameters, not 'millions', so the answer understates the scale by several orders of magnitude. It also means that large language models (LLMs) like ChatGPT are pretty much black boxes, much like the brains of mammals. We can't look into them and see specifically how they work to produce behavior, as we can for mechanical or electronic systems; we can only observe the resulting behavior and try to adjust it with re-training rather than opening it up and changing something specific.
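'Parameter' here just means one learned weight. With an open model you can count them directly; a quick sketch, using GPT-2 as a stand-in since GPT-3's weights are not downloadable:

    from transformers import GPT2LMHeadModel

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    total = sum(p.numel() for p in model.parameters())
    print(f"{total:,} parameters")  # about 124 million for the smallest GPT-2

GPT-3, at roughly 175 billion parameters, is over a thousand times larger, which is why 'listing 10 of them' is a meaningless request: each parameter is just an anonymous number with no individual interpretation.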

 

How will ChatGPT affect human producers of computer programs, music and other arts, teaching, and writers?

ChatGPT:

"The impact of ChatGPT and other AI models on human producers of computer programs, music and other arts, teaching, and writers will likely depend on the specific task or application. However, in general, these models have the potential to both complement and compete with human workers in these fields. In some cases, ChatGPT and other AI models may be used to automate certain tasks or to generate new content in a way that is faster and more efficient than traditional methods. For example, ChatGPT has been used to generate code snippets and to write simple news articles. In these cases, ChatGPT and other AI models may reduce the demand for human workers in these fields. On the other hand, ChatGPT and other AI models may also complement human workers by providing them with new tools and resources to help them in their work. For example, ChatGPT can be used to assist human writers by suggesting possible completions for sentences or paragraphs, or to help musicians generate new musical ideas. In these cases, ChatGPT and other AI models may enhance the creativity and productivity of human workers. Overall, the impact of ChatGPT and other AI models on human workers will likely depend on a variety of factors, including the specific task or application, the level of development of the technology, and the willingness of human workers to adapt to new tools and technologies."

My thoughts:

The frenzy of interest in what ChatGPT can and can't do will quickly reveal its practical strengths and weaknesses. As things calm down, the range of applications will be explored in many more directions. It's likely that, for a while at least, the truly useful applications will complement human skills rather than replace them, and there is still room for major improvement in model performance.

Further Thoughts

ChatGPT is not simply a chatbot. You can converse with it, but it excels as a source of knowledge, and it can 'create' things, like stories and computer programs. It can also serve as a back-end answering function for a sophisticated chatbot, and services that are merely thin wrappers around ChatGPT are popping up like popcorn. You can use it to learn all kinds of things, like programming languages, or how to do almost anything. It's basically search on steroids that can return complex answers. Students can use it to cheat on writing assignments, bloggers can use it as an assistant, researchers ..., noders ..., and on and on. Its apparent capabilities have produced a lot of worry about lost or diminished human jobs and other lead-ups to the Robot Apocalypse, the Singularity, etc.
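A 'thin wrapper' really can be thin. A minimal sketch, assuming OpenAI's Python client as it existed around the time of writing (the model name, prompt, and key are placeholders, and the API surface has changed over time):

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def answer(question: str) -> str:
        """Forward a user's question to the model and return its reply."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(answer("What is a transformer-based language model?"))

Everything else in such a service, the branding, the billing, the prompt boilerplate, is just packaging around a call like this.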

Technically, we seem to be near the end of the tunnel to 'enough computing power and data' to produce the Do Everything Machine, or even to approach a complete human brain simulation, considering an estimated 86 billion brain cells with perhaps 10,000 connections each. I think the biggest technical drawback is that these models suffer from the rigidity of 'trained and done': they don't work until fully trained (an expensive process that can take months), and once trained, they are largely 'frozen'. ChatGPT is based on data dated up to 2021, so it is ignorant of anything newer. The fine-tuning mentioned earlier does not update the base model itself; to do so would require a very time-consuming and expensive full retraining. Nor can the model learn continuously 'on the fly', as brains can.

Also, the 'knowledge' the model can spit out is not 'grounded' directly in reality by experience, as human knowledge is. It doesn't understand anything at all, and nothing means anything to it. It therefore remains an elegant encyclopedia and not a mind. The next version of the 'engine', GPT-4, is expected to use a trillion or so parameters, so who knows what might shake out of that. If a model were trained on reality directly sensed by human-like sensors on a mobile physical body for which reality has import, rather than on the text representations we have abstracted from our sensed reality, that would solve the grounding problem and generate true minds.

Fear or be thrilled, or both, as you please.

Write a poem about lost love.

Lost Love

A heart once filled with joy and light,
Now shattered in the darkest night.
A love that once was pure and true,
Now gone, and all we're left with is the dew.

The memories we shared, they linger on,
But the feelings we had, they're now gone.
The laughter and the smile, the touch,
Now just a distant memory, a thing of the past.

The love we had, it was so real,
It felt like nothing could ever heal.
But life can be so cruel and kind,
Taking love and leaving us behind.

We try to move on, to find a way,
But the hurt and pain, they still remain.
The emptiness that takes its place,
A constant reminder of lost love's grace.

Though we may never love again,
We'll hold onto the memories till the end.
For they are all we have left,
Of the love that once was, now bereft.

--- ChatGPT

In recent days, maybe less than a few months at the time of this writing, there has been debate on whether or not AI Art is Art, and on how hostile writers and artists should be to it. AI advocates see it as an exciting new tool to augment human ability, while its detractors see it as a destructive force that steals work from artists, both by limiting their job opportunities and by using their own work to improve itself while giving no credit to its sources.

I think I may want to define my position for posterity, if only because this is a fascinating and pivotal point in human art, perhaps rivaled only by humans discovering they could use paint to create animals on cave walls. I don’t write that lightly, but I think AI has the potential to change everything. I’m a writer by hobby and sometimes by trade, and so my biases lean that way. However, the damage done to visual artists will likely be the most destructive of all the arts.

I’d define art simply as a machine (perhaps a natural machine rather than an AI one) that generates ideas or emotions in the viewer. This can be as simple as a story that says “fascism bad”, or it can be a complicated emotional crash, as when one views Ivan the Terrible and His Son Ivan. When I write something, it is a personal letter from me to you across the space of time. I like stories; these stories have a central theme, or they’re meant to entertain, or they do both, and that’s what I’m doing with my art. Other artists have different reasons.

My biggest problem with AI Art, I think, is that it cannot present an idea or emotion. Because of the way it works, AI, at best, generates an average of an idea. It takes what others have done and predicts what a person might create based on a database of similar items. So, when I tell it, “Write me a script where Buffy and Faith get along,” it’s going to pull locations from Buffy the Vampire Slayer and have the characters talk like characters getting along would talk. The witty dialogue isn’t going to be there, because most people don’t write characters like that; only Whedon does. But it can see that the plot of Buffy is more or less “girl kills vampires.” The problematic nature of the creator is gone, but so is the creation, and we get something very vapid and bland, though never so bad as to be actually funny. If I replaced Buffy and Faith with any other characters from any other show or book, or even put in random names, the result would be similar, boringly similar. It’s able to pull the vocabulary of authors, and it vaguely “understands” that a prompt with Shakespeare in it ought to use thees and thous, but it doesn’t have any ability to produce meaningful work. That problem might be fixed in later AIs, but I doubt we’ll get to the point where I can ask for a deep, narrative-driven plot with interesting characters and actually get something worth reading. At least, it won’t happen with how AIs currently work.

For visual artists the problem is much worse. Companies have always used artists to create ads, book covers, and graphic design for logos. Companies being what they are, they will always seek out the cheapest means to their ends, and an AI that can do shitty but passable imitations of artists is going to win every time. Additionally, AI art is so easy to make that it is not inconceivable that machine-made art crowds out living artists, so that their ideas never reach anybody else.

So, the future we have to look forward to is one of thousands of shitty book covers and the blandest-looking ads with the worst fonts imaginable, with artists doing art primarily in isolation or influenced by AI. Which won’t matter, because no one will see their work anyway.

The writer fares a bit better, but given the garbage on TV, I’m not sure they do that much better.

Advocates say that AI will be best used as a supplementary tool. Writing opening lines to books is notoriously difficult, for example. An AI (they say) can ease the load. As an experiment, I tried this with ChatGPT: “Give me seven great opening lines!” At first it gave me famous opening lines. I remember Neuromancer was one of them. Some Dickens. A Stephen King book. I amended the prompt to “Give me seven original great opening lines!” What it spat out was not great.

I’ve observed that because there is such an emphasis on opening lines in writing workshops, prompts, publishers, etc., the tendency is for writers to try way too hard on their first sentence, so that the reader can feel the machinery chugging along underneath. These sorts of “super-charged” openers are often worse than a boring opening line, and I think it’s about time we had a discussion about whether or not “Billy looked down in horror at the blood all over his hands. ‘Christ,’ he thought, ‘is it really only Tuesday?’” is better than “In the bosom of one of those spacious coves which indent the eastern shore of the Hudson, at that broad expansion of the river denominated by the ancient Dutch navigators the Tappan Zee…”

But that’s all digression. The point is that since this is what humans do, the machine does it too, only more bland and average. There’s false emotional pathos and bathos and shock and awe, but it falters into the uncanny valley. It’s like talking to a Working Joe, one of the creepy androids from Alien: Isolation. There’s nothing right about them, and the stretched plastic skin and robotic voice quipping at you only remind you that they aren’t human.

It takes a long time to write a book. An AI could conceivably do it in a day. You might, due to laziness or some desire to get ahead or some philosophical reason, decide that instead of having an AI write the whole book, you’ll use it to augment your writing skills, much like spellcheck cleans up your spelling.

You might be able to do some impressive stuff this way, but much like a baseball player on steroids, it isn’t really impressive. AI, much like steroids, is an augmentation that takes away from the prestige of the game and the credibility of the player. Barry Bonds can hit as many home runs as he likes, but if he’s using drugs to do it, he’s not doing it himself. Training, grit, and perseverance are impressive. Using technological or chemical magic is not.

This barely touches on the ethical concerns. AI Art trains on a lot of sources by necessity. It pulls from websites and thousands of uncredited artists. If I want “soldier lady in 19th-century dress,” it is going to give me an approximation of every artist who has drawn that across multiple websites, whether they want their art used like that or not. A lot of these artists might have strong objections, because their work is being used to put them out of a job and may make money for somebody who didn’t do the work themselves, among any number of other concerns. The main contention is that the influences are not cited.

There are also advocates who suggest that this is no different than what human artists do. I read a lot of Stephen King as a teen, and surely there’s some King in my work. The same for Susan Cooper at an even younger age. Diane Duane. More even. Surely, if AI has to cite its sources, humans should too.

First, I’d like to note that AIs are not humans, do not function like humans, and any comparison to the contrary is anthropomorphizing. There’s no functional parallel, because while the output is vaguely similar, the inner workings are completely different. We tend to equate the brain with a computer because that is what we’re familiar with, but the brain is not a computer and doesn’t operate like one in any meaningful sense. We can both do math, but how the brain does it and how a computer does it are completely different. Human math is bad, machine math is good. Human art is good, machine art is bad; and both are products of the different processes going on beneath.

Second, the synthesis a human mind makes after interpreting different works is not an average. Rather, it is a dialogue. If I read something I disagree with I can advance a counter-argument, perhaps present a thesis. I can mock, I can satirize. If I want to satirize some company, I can remember back to some Chaplin and go, “Ah, there was a good technique.” If I remember the visual techniques in an Akira Kurosawa film, I can try to render them in prose to see how they’d work, and then discard or enhance them if they do. Translating a cinematic technique to prose might fail miserably, but it'll be fun to try. These are aesthetic choices all in service of the central goal of the work. To entertain, to inform, to make you happy or sad.

AI, by virtue of how it functions, is a cheat to this. It doesn’t synthesize so much as aggregate and predict. “A human statistically writes this, so I will write this.” Most humans are bad at art, so what it predicts is bad, and the real danger I see is that our world becomes flooded with AI art that builds on this mediocrity until that is what we expect our stories to be. A hundred years from now we might be sitting in a bland holodeck with recycled artificial stories done in the style of whatever artist we choose, but without the spark of human passion. An algorithm feeding itself blandness, while as humans our entire artistic world from the cave paintings to pop music falls into artistic rot, and the minds behind the machine stagnate and die.

ChatGPT is, of course, not creating anything at all of its own accord. It is responding to prompts. The tone of the prompt informs the AI of the tone of what it is instructed to write. The prompt is the art. The AI is simply the tool.

For example:

Prompt:

Write a story about a pair of socks that are pleased to be purchased by a man in a department store, only to discover when they are put in the sock drawer with his other socks that the man's feet eat socks. Later, however, the socks discover that when his socks become too worn out to wear, instead of throwing them away, he washes them and uses them as material to make teddy bears for needy children.

Result:

Once upon a time, in a bustling department store, there were two socks, a pair, lying on the shelf. They were a bright, cheerful shade of blue and they had been waiting for someone to take them home for quite some time. As the day went on, many people passed by, but no one seemed to take notice of the socks. They were starting to worry that they would never find a home.

Finally, a man walked into the store and made his way over to the sock section. He picked up the blue socks and examined them closely. The socks were thrilled, and they knew that they had found their new home. The man paid for them and took them home, carefully placing them in his sock drawer with his other socks.

However, the joy of finding a new home didn't last long. The blue socks soon realized that they were living with a strange and unusual family of socks. The man's feet had a habit of eating socks! The other socks in the drawer were full of holes and had lost their color and vitality.

The blue socks were worried that they would meet the same fate, but they were determined to stay strong and keep their color as long as they could.

One day, the man took all of his old, worn-out socks and put them in a laundry basket. The blue socks were shocked when they saw him throw them into the washing machine. They thought that this was the end for their fellow socks. But to their surprise, the man took the washed socks and started cutting them up into small pieces.

The blue socks were puzzled as to why he was doing this until they saw him start to sew the pieces together into tiny teddy bears. The man had a big heart and a passion for helping those in need, and he used the old socks to make stuffed animals for children who didn't have much.

The blue socks were overjoyed to see that their fellow socks had been given a new life and purpose. They learned that there was more to the man's actions than they originally thought, and they felt proud to be part of such a kind-hearted home.

From that day on, the blue socks realized that they didn't have to fear being eaten by the man's feet. Instead, they felt grateful for the life they had and the opportunity to be part of something bigger than themselves. They knew that they had found a home with a loving owner who valued every piece of fabric, no matter how worn out it was.

Is this the story I would have written had I written the story completely myself? Probably not. I find that it reads like a children's story, which is fine; people write children's stories and have been doing so for generations. But what it is is something new: a story capturing the requested elements with the correct tone, one which does indeed have the capacity to stir the reader to relate, to a degree, with its fabric protagonists. And yes, that makes it art. Note, by the way, that I'd previously asked ChatGPT to respond to just the prompt, "Write a story about a pair of socks that are pleased to be purchased by a man in a department store, only to discover when they are put in the sock drawer with his other socks that the man's feet eat socks." That came out with a result much more like a horror story.
