ChatGPT seems to be the current Big Thing, the Buzz, the Door to a New Era, etc. Microsoft has invested 10 billion dollars in it. There has been a downpour of media articles on how it can do the work of computer programmers, artistic painters, writers of every kind, teachers, and just about everyone else. We've perhaps gotten used to the over-hyping of new tech and tend toward skepticism.
But what is it, exactly? Let's ask it. No, seriously. What follows is real stuff (except for the hardlinks, which I added).
What is ChatGPT?
When asked, ChatGPT tells us:
"ChatGPT is a conversational AI model developed by OpenAI. It is a transformer-based language model that was trained on a large corpus of text data, allowing it to generate text in a human-like manner. The name "ChatGPT" stands for "Chat Generative Pretrained Transformer."
ChatGPT can be used to generate responses to text inputs in a variety of settings, such as chatbots, virtual assistants, and language translation applications. It uses deep learning algorithms to understand the context of the input text and generate an appropriate response. The model has been fine-tuned on specific tasks, such as question answering and conversation, to improve its performance in these domains.
Overall, ChatGPT represents a major advance in the field of natural language processing and has the potential to revolutionize the way people interact with computers."
My thoughts:
The response is well-formed in sentences and paragraphs and appears to be an appropriately concise and adequate answer to the question, although a human writer might choose to compress the three paragraphs into one, considering how short they are. That aspect is excellent; better than most writing we see on the Web or even on paper.
A less technical description is that ChatGPT works by predicting what words will come out in a sequence according to some established context. In principle, it is no different from a first-order Markov chain of words and punctuation, latent semantic analysis, latent Dirichlet allocation, or other generative tools in the statistical lineage. It differs vastly in the scope and types of contexts it uses. No one can really know, but the base model used by ChatGPT (GPT-3) seems to embed grammatical patterns and other structural elements in some form.
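To make that comparison concrete, here is a toy first-order word-level Markov chain in Python (a sketch of my own, not anything ChatGPT actually uses); it predicts each next word purely from the single preceding word, whereas ChatGPT conditions on a far richer context.

```python
import random
from collections import defaultdict

def train_markov(text):
    """Build a first-order Markov chain: for each word, collect the words
    that followed it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Generate text by repeatedly sampling a successor of the current word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:          # dead end: no observed successor
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy corpus; ChatGPT's training data is on the order of hundreds of billions of tokens.
corpus = "the cat sat on the mat and the cat ate the fish"
chain = train_markov(corpus)
print(generate(chain, "the"))
```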
While the output of ChatGPT is truly impressive, the big 'but' is: can we trust the information as true? Sadly, this is a general problem these days; it has made smart people more skeptical and allowed others to follow their entrenched Will to Believe by selective acceptance. My own current understanding of ChatGPT is limited, but it is sufficient to recognize no lies in this response, and I trust it to my own satisfaction. There are some technical terms, and 'context of the input text' is a bit foggy. A bit surprisingly, the final sentence is a bit of an opinion. Let's ask for some clarification.
What is a transformer-based language model?
ChatGPT:
"A transformer-based language model is a type of deep learning model used for natural language processing tasks such as language translation, text classification, and text generation. The model is called "transformer-based" because it uses the Transformer architecture, which was introduced in 2017 by Vaswani et al. in their paper "Attention is All You Need."
The Transformer architecture is a neural network architecture that is specifically designed for processing sequences of data, such as sequences of words in a sentence or sequences of characters in a word. It is particularly well-suited for language tasks because it allows the model to attend to different parts of the input sequence when making predictions, rather than processing the entire sequence in a fixed order.
The Transformer architecture uses a self-attention mechanism, which allows the model to weigh the importance of different parts of the input sequence when making predictions. This allows the model to capture long-range dependencies between words in a sentence, which is important for many NLP tasks.
Overall, transformer-based language models have proven to be highly effective for a wide range of NLP tasks and have revolutionized the field of NLP. ChatGPT is one example of a transformer-based language model, and it has been trained on a large corpus of text data to generate text in a human-like manner."
My thoughts:
Again, no evident reason to doubt factual accuracy on this technical topic, but readers might miss having references to back up assertions, especially the 'have proven to be highly effective ... and have revolutionized ...' part. If ChatGPT could properly cite sources, and form actual opinions, that would be a genuine 'wow!' The potential for factual error was demonstrated perfectly when Google's delayed attempt to get on the bandwagon went horribly wrong: a demo posted on Twitter gave a factually wrong answer.
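For the technically curious, the 'self-attention mechanism' in the quoted answer boils down to a few matrix operations. Here is a minimal NumPy sketch of the scaled dot-product attention described in the Vaswani et al. paper; the token vectors are made up, and a real model would use learned projection matrices for the queries, keys, and values.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position weighs all positions by the similarity of its query
    vector (a row of Q) to their key vectors (rows of K), then takes a
    weighted sum of their value vectors (rows of V)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax -> attention weights
    return weights @ V                # weighted sum of values

# Toy example: a "sentence" of 4 token vectors, each of dimension 8.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
# In a real transformer, Q, K and V are learned linear projections of the tokens;
# here we simply reuse the token vectors for illustration.
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)   # (4, 8): one new context-aware vector per token
```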
How is the fine-tuning of ChatGPT done?
ChatGPT:
"Fine-tuning ChatGPT involves adjusting its parameters so that it is better suited to a specific task or domain. This is typically done by continuing the training process on a smaller, task-specific dataset.
The process of fine-tuning ChatGPT can be broken down into the following steps:
- Pre-training: The ChatGPT model is first pre-trained on a large corpus of text data. This allows the model to learn patterns and relationships between words, sentences, and paragraphs in the text data.
- Task-specific dataset: A task-specific dataset is created that is representative of the task the model will be fine-tuned for. For example, if the task is to generate responses to questions, the task-specific dataset would consist of questions and their corresponding answers.
- Fine-tuning: The pre-trained ChatGPT model is then fine-tuned on the task-specific dataset. This involves using the task-specific data to update the parameters of the model so that it can generate better responses for that task. The fine-tuning process typically involves running multiple training epochs on the task-specific dataset, using a variant of the standard supervised learning objective function.
- Evaluation: Once the fine-tuning process is complete, the model is evaluated on a set of evaluation examples to assess its performance on the task.
Overall, fine-tuning ChatGPT is an effective way to tailor the model's behavior to a specific task or domain, allowing it to generate more accurate and relevant responses."
My thoughts:
Being wrong in fact is not the only problem with generative models that are not 'persons': bias on social issues can be a major problem, and it already has been. Bias is not unique to generative models like ChatGPT; it is a general problem in any statistical technique. The models are trained on text written by people, mostly on the Web, so when you talk to ChatGPT, you are talking to some large subset of the Web. This is the problem of 'garbage in, garbage out' and 'you are what you eat'. There are ways to diminish bias, but the fine-tuning of ChatGPT has people in the loop, people who select a more 'correct' set of data for retraining, so one bias may just be substituted with another.
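For readers who want to see roughly what the supervised fine-tuning step described above looks like in code, here is a hypothetical sketch using the Hugging Face transformers library, with the small open GPT-2 model standing in for the closed GPT-3 base. The dataset file name and hyperparameters are invented for illustration, and ChatGPT's actual fine-tuning (which also involves human feedback) is done on OpenAI's own infrastructure.

```python
# Hedged sketch: supervised fine-tuning of GPT-2 on a task-specific text file.
from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

# Step 1: start from a pre-trained model (GPT-2 here, as GPT-3 is not released).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token       # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Step 2: a task-specific dataset, e.g. question/answer pairs, one per line.
# "qa_pairs.txt" is a made-up file name for illustration.
dataset = load_dataset("text", data_files={"train": "qa_pairs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Step 3: continue training (fine-tune) on the task-specific data for a few epochs.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Step 4: evaluation would compare generations against held-out examples.
```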
Name 10 of the parameters of Chat GPT
ChatGPT:
"I'm sorry, but it is not possible to list 10 specific parameters of ChatGPT as it is a complex machine learning model with millions of parameters. The parameters of ChatGPT are the weights in its neural network, and they determine the model's behavior and its ability to generate text responses. These parameters are learned through the training process, during which the model is exposed to a large amount of text data and the parameters are adjusted to minimize prediction error.
It's not feasible to list 10 specific parameters as they are part of a massive neural network with millions of interconnected nodes. The exact values of these parameters are not publicly available, as they are proprietary information belonging to OpenAI."
My thoughts:
This response equates its parameters with the connection weights in its artificial neural network, which is fair enough, but the model ChatGPT is based on (GPT-3) has about 175 billion parameters, so the 'millions' figure is a serious undercount. It also means that large language models (LLMs) like ChatGPT are pretty much black boxes, much like the brains of mammals. We can't look into them and see specifically how they work to produce behavior, as we can for mechanical or electronic systems; we can only observe the resulting behavior and try to adjust it with re-training rather than opening it up and changing something specific.
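To make 'parameters' a bit more concrete: they are simply the learned weights, and for an open model you can count them directly. A small sketch, again using GPT-2 as a stand-in for the inaccessible GPT-3:

```python
# Rough sketch: counting the parameters (learned weights) of an open model.
# GPT-2 "small" stands in here; GPT-3's weights are not publicly available.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")   # about 124 million for GPT-2 small

# Each "parameter" is just one number in some weight matrix, e.g. the
# attention projection of the first transformer block:
first_block_attn = model.transformer.h[0].attn.c_attn.weight
print(first_block_attn.shape)    # one matrix of such numbers
```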
How will ChatGPT affect human producers of computer programs, music and other arts, teaching, and writers?
ChatGPT:
"The impact of ChatGPT and other AI models on human producers of computer programs, music and other arts, teaching, and writers will likely depend on the specific task or application. However, in general, these models have the potential to both complement and compete with human workers in these fields.
In some cases, ChatGPT and other AI models may be used to automate certain tasks or to generate new content in a way that is faster and more efficient than traditional methods. For example, ChatGPT has been used to generate code snippets and to write simple news articles. In these cases, ChatGPT and other AI models may reduce the demand for human workers in these fields.
On the other hand, ChatGPT and other AI models may also complement human workers by providing them with new tools and resources to help them in their work. For example, ChatGPT can be used to assist human writers by suggesting possible completions for sentences or paragraphs, or to help musicians generate new musical ideas. In these cases, ChatGPT and other AI models may enhance the creativity and productivity of human workers.
Overall, the impact of ChatGPT and other AI models on human workers will likely depend on a variety of factors, including the specific task or application, the level of development of the technology, and the willingness of human workers to adapt to new tools and technologies."
My thoughts:
The frenzy of interest in what ChatGPT can and can't do will quickly reveal its practical strengths and weaknesses. As things calm down, the range of applications will be explored in many more directions. For a while, at least, the truly useful applications will likely complement human skills rather than replace them, and there is still room for major improvement in model performance.
Further Thoughts
ChatGPT is not simply a chatbot. You can converse with it, but it excels as a source of knowledge, and it can 'create' things, like stories and computer programs. It can also serve as a back-end answering function for a sophisticated chatbot, and services that are merely 'thin wrappers' around ChatGPT are popping up like popcorn. You can use it to learn all kinds of things, like programming languages, or how to do almost anything. It's basically search on steroids that can return complex answers. Students can use it to cheat on writing assignments, bloggers can use it as an assistant, researchers ..., noders ..., and on and on. Its apparent capabilities have produced a lot of worry about lost or diminished human jobs and other lead-ups to the Robot Apocalypse, the Singularity, etc.
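As an illustration of just how thin some of those 'thin wrappers' can be, here is a hypothetical minimal example using OpenAI's Python client as it existed at the time of writing; the API key is a placeholder and error handling is omitted.

```python
# A minimal "thin wrapper" around the hosted model, as many new services are.
# Sketch only: assumes the openai Python package (pre-1.0 interface) and a valid key.
import openai

openai.api_key = "YOUR_API_KEY_HERE"   # placeholder

def ask(question):
    """Send one question and return the model's reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",          # the model behind ChatGPT at the time
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]

print(ask("What is ChatGPT?"))
```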
Technically, we seem to be near the end of the tunnel to 'enough computing power and data' to produce the Do Everything Machine, or even to approach a complete human brain simulation, considering the estimated 86 billion neurons in a brain and the perhaps 10,000 connections each one makes. I think the biggest technical drawback is that these models suffer from the rigidity of 'trained and done': they don't work until fully trained (an expensive process that can take months), and once trained, they are largely 'frozen'. ChatGPT is based on data dated up to 2021, so it is ignorant of anything newer. The fine-tuning mentioned earlier does not update the base model itself; to do so would require a very time-consuming and expensive full retraining. Nor can the model learn continuously 'on the fly', as brains can.
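A rough back-of-the-envelope comparison of those numbers, if one (very loosely) equates a model parameter with a synaptic connection:

```python
# Back-of-the-envelope comparison, using the rough estimates mentioned above.
neurons = 86e9             # estimated neurons in a human brain
synapses_per_neuron = 1e4  # rough estimate of connections per neuron
synapses = neurons * synapses_per_neuron        # ~8.6e14 connections

gpt3_params = 175e9        # GPT-3's reported parameter count
print(f"Brain synapses  ~ {synapses:.1e}")      # ~8.6e+14
print(f"GPT-3 parameters ~ {gpt3_params:.1e}")  # ~1.8e+11
print(f"Ratio ~ {synapses / gpt3_params:,.0f}x")  # brain has roughly 5,000x more
```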
Also, the 'knowledge' the model can spit out is not 'grounded' directly in reality by experience, as human knowledge is. It doesn't understand anything at all, and nothing means anything to it. It therefore remains an elegant encyclopedia and not a mind. The next version of the 'engine', GPT-4, is expected to use a trillion or so parameters, so who knows what might shake out of that. If the model were trained on reality directly sensed by human-like sensors on a mobile physical body for which reality has import, rather than on the text representations we have abstracted from our sensed reality, that would solve the grounding problem and generate true minds.
Fear or be thrilled, or both, as you please.
Write a poem about lost love.
Lost Love
A heart once filled with joy and light,
Now shattered in the darkest night.
A love that once was pure and true,
Now gone, and all we're left with is the dew.
The memories we shared, they linger on,
But the feelings we had, they're now gone.
The laughter and the smile, the touch,
Now just a distant memory, a thing of the past.
The love we had, it was so real,
It felt like nothing could ever heal.
But life can be so cruel and kind,
Taking love and leaving us behind.
We try to move on, to find a way,
But the hurt and pain, they still remain.
The emptiness that takes its place,
A constant reminder of lost love's grace.
Though we may never love again,
We'll hold onto the memories till the end.
For they are all we have left,
Of the love that once was, now bereft.
--- ChatGPT