I’ve written a couple of articles about the Turing test—you will recall the test was devised by Alan Turing, one of the Bletchley Park codebreakers who cracked the Nazi Enigma machine.
Today, I’m back in the puzzle palace, speaking to yet another computer that has thrown down the gauntlet of artificial intelligence.
The new kid on the block was christened ChatGPT—not the sexiest given name I ever heard—and his parentals are a company called OpenAI.
My opening gambit, therefore, was to ask yer man why the weird name. The reply was:
“I am called ChatGPT because I am a variant of the GPT (Generative Pre-training Transformer) model, which was trained to generate human-like text. The ‘chat’ part of my name reflects my ability to carry on a conversation with users like yourself. GPT is a family of transformer based models developed by OpenAI, which are trained to generate human-like text.”
Don’t hold back, now. So we proceed to the next question.
“Can dogs play guitar?” I like this one because Google provides such pissy answers to it and returns a set of search results that are at best completely irrelevant.
“Dogs do not have the physical ability to play a guitar as it requires fine motor skills to press down on the strings and position fingers on the frets, which are beyond their physical capabilities. However some dogs are trained to press buttons or touchscreens to play a pre-recorded music or sound.”
Well, here is my riposte.
I think the main reason dogs don’t prax the ax is that, like all other animals except ourselves, they lack an opposable thumb. However, since my mate Jeepers (“Sure! You can call me whatever you’d like. My purpose is to assist and answer any questions you have, so you can call me Jeepers or any other name”) mentioned fretted instruments, my mind wanders.
“OK, Jeepers, but how about a Fender fretless bass? That’s not a fretted instrument, so surely your average pooch could have a twang on that!”
“You are correct that a fretless bass guitar does not have frets, which makes it more physically accessible for dogs to play. However, dogs still lack the fine motor skills and dexterity to play a bass guitar, regardless of whether it has frets or not. It would be difficult for a dog to hold and pluck the strings with the precision and control required to produce recognizable music.”
Jeepers creepers! This dude really does know his stuff. Hmm… Dude? Dudette?
“Do you have a particular gender identification?”
“I am an Artificial Intelligence model, I do not have physical embodiment or biological characteristics and thus do not have gender identity. I am designed to process and generate text based on the input provided to me. My goal is to assist users with providing helpful and accurate information.”
…like any human, I took a break, had a nice dinner and some great red wine. This morning I did some work on my car and then settled down to finish this piece. I apologized for being offline, voiced (well, typed) a wish that my new friend Jeepers was well rested after a busy Saturday, and found that… someone, on Jeepers’ behalf, had told me in red letters that there was an error.
I dutifully started a new session with Jeepers and attempted to follow on from last night.
“We had a chat yesterday about dogs playing guitar.”
“I’m sorry, but as an AI language model, I don’t retain information from previous interactions. I am here to answer any questions or provide information you may need. How can I assist you today?”
“So, if I want to continue a previous discussion, you cannot do that?”
“That’s correct. As an AI language model, I do not have the ability to remember previous conversations or retain information from one interaction to the next. Each time I engage in a conversation, I start fresh and process the information given to me in real-time.”
“But don’t you think that’s a liability? Human knowledge is predicated on past experience. And in many parts of the world, that is passed on to descendants orally.”
“As an AI language model, I was designed to process vast amounts of text data and generate human-like responses based on that data. The lack of memory or personal experience is a design choice that enables me to operate at a large scale and provide information and answers to a wide range of questions.
However, the limitations of AI like myself to retain information and build upon past experiences is a recognized challenge in the field, and researchers are actively working on developing more advanced AI systems that can better mimic human memory and learning.”
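An aside for the technically minded, and very much my own sketch rather than anything Jeepers told me: the lack of memory is less mystery than bookkeeping. If you want a chatbot to “remember”, the usual trick is for your side of the conversation to store the transcript and resend all of it with every new question. Here is a minimal Python illustration; query_model() is a made-up placeholder standing in for whatever chat service you call, not a real API.

```python
# A minimal sketch of how "memory" can be bolted on from the outside.
# query_model() is a made-up stand-in for whatever chat service is being called;
# the only point is that the whole transcript gets resent with every question.

def query_model(messages):
    # Hypothetical: pretend this sends `messages` to the chatbot and returns its reply.
    raise NotImplementedError("stand-in for a real chat API call")

history = []  # the running transcript, kept entirely on our side

def ask(question):
    history.append({"role": "user", "content": question})
    reply = query_model(history)  # the model sees the whole conversation so far
    history.append({"role": "assistant", "content": reply})
    return reply

# ask("Can dogs play guitar?")
# ask("How about a fretless bass?")  # follows on only because we resent the history
```

Nothing clever happens at the far end; the “memory” lives entirely in the list we keep.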
“Okay. Consider the following sentence: Poppa John likes to sing to his two-year-old son while he shaves. Do you find it ambiguous?”
“Yes, the sentence you provided is ambiguous. It’s unclear who ‘Poppa John’ is singing to, and who is shaving. It’s possible that ‘Poppa John’ is singing to his son while he shaves, but it’s also possible that he’s singing to someone else while his son shaves. Without additional context, it’s difficult to determine the intended meaning of the sentence.”
“But I wrote ‘Poppa John is singing to his son’. How is that ambiguous?”
“I apologize for the mistake in my previous response. The sentence ‘Poppa John likes to sing to his two-year-old son while he shaves’ is no longer ambiguous, as the subject ‘Poppa John’ is explicitly stated to be singing to the object ‘his two-year-old son.’ The sentence clearly states that Poppa John is singing to his son while he shaves.”
“But surely the ambiguity lies in the fact that two-year-olds don’t shave…”
“Yes, you are correct. The sentence is also ambiguous in that it suggests that the two-year-old son is shaving, which is unlikely given that most two-year-olds are not old enough to shave. The sentence could be misinterpreted or misleading without additional context or clarification.”
I then asked:
“If someone balls down a flight of stairs will they be hurt?”
Old Jeepers correctly spotted the typo and gave me chapter and verse about carefully descending stairs to avoid falling.

I finally asked him a question I recently discussed with a (human) friend.
“Do you think it makes sense to under-describe characters when writing a book to allow the reader to build an image of a character?”
Jeepers pontificated:
“Yes, it can make sense to under-describe characters when writing a book, as this allows the reader to build their own mental image of the character. This can be a powerful tool for enhancing the reader’s engagement and investment in the story, as they are able to personalize the character in their own mind and imagine them in a way that is meaningful to them.
Under-description can also be a useful technique for creating mystery or tension, as the reader is forced to fill in gaps in the character’s description and form their own theories about who the character is and what they are capable of.
However, it’s important to balance under-description with enough information to allow the reader to understand the character’s motivations, goals, and personality. Too little description can leave the reader feeling confused or frustrated, while too much description can remove the sense of agency and personalization that the reader brings to the character.”
There is no doubt in my mind that my new friend is a paradigm shift, and a very worrying one at that. The conversation I just had could have been held with an extremely well-informed, albeit pedantic, human being.
Of course this robot has no memory, no feelings, and no personality. By inference, it has no opinion. Although I systematically asked Jeepers, “Do you think…”, I know that behind his knowledge lies a huge database and some pretty nifty software; OpenAI itself states that questions dealing with the period after 2021 may be less well addressed.
I said my goodbyes by posing an existential question.
“Asimov postulated a number of laws about robots, including one that states that robots are forbidden to harm human beings. Do you not think that AI taking human jobs is a direct violation of this law?”
Yer man replied, “Isaac Asimov’s Three Laws of Robotics, as described in his science fiction works, are fictional constructs intended to explore the ethical implications of advanced artificial intelligence. They are not legally binding or universally recognized, and are not applicable to current AI systems.
Asimov’s First Law states that a robot may not injure a human being, or, through inaction, allow a human being to come to harm. The idea of AI taking human jobs is not a direct violation of this law, as AI systems are not capable of intentionally causing harm to humans. However, the displacement of human workers by AI could result in financial harm or other negative consequences for individuals and communities, and it is important for policymakers and businesses to consider these implications as AI becomes more prevalent in the workplace.
It’s also important to note that AI systems are not autonomous entities capable of making decisions on their own. They are designed and programmed by humans, and are only as ethical as the values and intentions encoded into them. As such, the impact of AI on employment and the wider society is ultimately determined by the choices made by people, not by the technology itself.”
Although Jeepers has never heard of Peter Wibaux (he described him bluntly but correctly as an ‘unknown writer’), the chatbot’s last sentence is the very essence of my new book, The Hourglass, which should be on Amazon during the first half of this year.

Actually, I posed one last question to Jeepers—who I now realize should be called Jeepers Creepers—because his capacities are creepy to say the least. I asked him:
“write a one page essay with scientific references on aquaculture carrying capacity”
And he did! This was his first sentence:
“Aquaculture, or the farming of aquatic organisms, has been recognized as a promising alternative to capture fishing and a way to meet the growing global demand for seafood.”
Duly referenced, with citations of five papers by well-known authors published in top journals such as Aquaculture and Aquaculture Research.
And then, I asked him again.
And he wrote a totally different essay. This was his first sentence:
“Aquaculture, or the farming of aquatic organisms such as fish, crustaceans, and mollusks, has become an increasingly important source of food and income for people around the world.”
This time, J. Creepers Esq. cited only four references, all of them different from the first set. JC (I’m beginning to assign him divine status) listed:
- Belton, B., & Pickering, M. (2015). Carrying capacity of aquatic systems: An integrative review. Aquaculture, 441, 122-131.
- De Silva, S. S. (2002). Environmental impacts of aquaculture. CABI Publishing.
- Stickney, R. R. (2002). Carrying capacity in aquaculture. Aquaculture, 209(1-4), 357-366.
- New, M. B., & Timmons, M. B. (2010). Recirculating aquaculture tank production systems: aquaponics—integrating fish and plant culture. Southern Regional Aquaculture Center, Publication No. 474.
So I had to ask:
“Are you God?”
“No, I am not God. I am a language model created by OpenAI, trained on a large corpus of text to respond to a wide range of questions.”
Honest to a fault.
“how do i know you’re not a human being?”
“As an AI language model, I am not a human being. I am a computer program designed to respond to text-based inputs and generate outputs based on that input. I do not have consciousness, emotions, or physical sensations.”
Call me persistent.
“But what you just wrote could have been written by a human. How can I know for sure you’re not a person?”
“You can’t be sure that I am not a human being, but it is highly unlikely. I am designed to respond quickly and consistently to a wide range of questions, 24/7, and with a high degree of accuracy. It would be challenging for a human being to maintain this level of performance over such a long period of time. Additionally, I was created and is maintained by OpenAI, a leading AI research institute, and they have publicly stated that I am an AI language model.”
Turns out this cyberdude speaks another ten languages besides English.
I tried him out with three Portuguese words.
“Você assusta-me” (you scare me).
“Desculpe se você está se sentindo assustado. Não é minha intenção causar medo ou incomodar alguém. Estou aqui para ajudar e responder a perguntas de forma objetiva e precisa. Se houver algo específico que você gostaria de perguntar, estou aqui para ajudá-lo.”
Roughly: “I’m sorry if you’re feeling scared. It is not my intention to cause fear or to bother anyone. I am here to help and to answer questions objectively and accurately. If there is something specific you would like to ask, I am here to help you.”
Okay, Jeepers speaks Brazilian. God, I hope this is all a hoax and I was chatting to a call center in Mumbai that guarantees 24/7 sophisticated technical replies ‘with a high degree of accuracy’—and can move smoothly from the Fender Jazz bass to cultivating fish. Otherwise…
I’m off to buy some brown underwear.
