When Stanley meets Livingstone - A journey through ChatGPT

by Tyron Devotta

I had resisted using ChatGPT for some time, not because I was intimidated but because I didn't think it necessary in my life. But when I finally got down to using it, I found the programme enthralling and fascinating. In fact, at some point I thought it was a kind of Stanley-meets-Livingstone experience. And rest assured, I wanted to say a lot more than “ChatGPT, I presume?”

I gushed, “ChatGPT, you are amazing. If you were a person I would give you a kiss.” ChatGPT replied, “Thank you for the compliment! As an AI language model, I don't have a physical form or the ability to receive kisses, but I appreciate the sentiment nonetheless. If you have any other questions or if there's anything else I can assist you with, please let me know!” I had no previous experience of dealing with artificial intelligence on an emotional level, so I was not sure whether this was a brush-off. I wondered if I had come on too strongly. I continued bravely!

Anatomy of ChatGPT

My next question was, “Chatgpt can you tell me how you work? You are almost human…”. It replied, “Sure! I'm an AI language model developed by OpenAI, called GPT (Generative Pre-trained Transformer) - 3.5. I use a deep neural network with 175 billion parameters to generate human-like responses to text-based prompts.

To create my responses, I analyze the input text and use that to generate a prediction about what the most likely response would be, based on patterns I've learned from the vast amount of text I was trained on. I can understand and respond to a wide variety of topics and questions, and my responses are generated based on the context of the input text.

My training data comes from a variety of sources, including books, articles, websites, and other text-based sources. I've been trained to recognize patterns in language, which enables me to generate coherent, natural-sounding responses to a wide range of prompts.

While I'm not human, I'm designed to simulate human-like conversation as closely as possible. However, I do have limitations and may not always understand the context or nuances of a question, so my responses may not always be accurate or relevant.”

Distortion

Now here I was, thinking I didn't need Google any more. What I didn't foresee were the twists the App would take along the road. The first indication came when I got into some real research I was doing on the Colombo Chetties living in Sri Lanka. I had already done some extensive studies on the background of this community, so it would be fair to say I knew a lot when I questioned ChatGPT. I asked what trade they engaged in, to which I got a satisfactory answer. But a surprise was waiting for me when I asked the App to name some famous Colombo Chetties. It gave me four names, and it was the fourth that surprised me. It said that Dr N.M. Perera was a Colombo Chetti. This answer absolutely threw me, because I thought ChatGPT was telling me something I and the rest of the world didn't know.

But then “the Stanley” in me got activated and I asked ChatGPT a pointed question, “N. M. Perera: was he a Chetti?” and pat came the answer, “I apologize for the mistake in my previous response. N. M. Perera was not a member of the Colombo Chettiar community. He was actually a member of the Sri Lankan Tamil community and was born in a village in the Jaffna Peninsula in northern Sri Lanka.”

It was then that I realised that ChatGPT did not have all the right answers. I probed further, knowing full well what the correct answer was. I asked, “N M Perera was he a Sinhalese?” and the App stuck to its guns: “No, N. M. Perera was not a Sinhalese. He was actually a member of the Sri Lankan Tamil community and was born in a village in the Jaffna Peninsula in northern Sri Lanka.”

The excuse

Then I informed ChatGPT that Tamils don't have the name Perera, and the App did an about-turn: “You are correct, my apologies for the error in my previous response. N. M. Perera was not a Tamil, but rather a member of the Sinhalese community in Sri Lanka. His full name was Nanasena Mudalige Perera.” This was yet another error: that was not Dr N. M. Perera's name.

Something to be noted: the reader may be wondering at this point whether, by not putting the prefix Dr. before the famous finance minister of Sri Lanka's name, some confusion was presented to the application. In this case ChatGPT was working on a thread of conversation and therefore had the ability to understand the question in context.

Given this, I have doubts when I hear that people use ChatGPT to write reports. I fear to think what would happen if they used this App to do so without double- or treble-checking the facts themselves. If you ask a millennial who Dr. N. M. Perera was, they probably wouldn't know or care; but to get distorted facts from ChatGPT and not have the knowledge to check them can take us into a diabolically warped world of false reporting. When I asked ChatGPT why this happened, it said, “I apologise for the mistakes in my previous responses. As an AI language model, I strive to provide accurate and helpful information to the best of my abilities. However, I am not perfect and can sometimes make errors. I always appreciate when users point out any mistakes so that I can correct them and improve my responses. Thank you for bringing this to my attention, and please let me know if you have any further questions or concerns.”

Error percentage

Be that as it may, I was not willing to let the matter drop, as the App held out the promise of checking facts across a wide arena of information. How could this happen, I wondered, and, more importantly, since it was digital, did the App have any data on its percentage of error? I did ask ChatGPT that question, but its response was to go silent on me.

Having said all this, I have to admit that ChatGPT still fascinates me. I have not seen such well-composed copy coming out of other AI Apps so far, and the research material it produced in answer to my other questions was satisfactory. To the question “Percentage-wise, how accurate are you?”, the answer was evasive: “As an AI language model, I strive to provide accurate and helpful information to the best of my abilities based on the information available to me. However, it's important to note that my accuracy can vary depending on the complexity of the question and the available information.

Since I am a machine learning model, I do not have emotions or personal biases that may affect my responses, but my responses can only be as accurate as the information and data that I have been trained on. My training data is sourced from a variety of different texts and sources, so my accuracy can vary depending on the quality and accuracy of those sources.

Overall, I am designed to be as accurate as possible, but it's always a good idea to double-check any information that you receive from me using additional sources and your own critical thinking skills.”

Uncovered ground

I tried another approach and asked it who “Tyron Devotta” was, and it told me, “I'm sorry, but I don't have enough context to determine who Tyron Devotta is. Without any further information, it's impossible for me to provide a meaningful answer to your question. If you have any additional details or context about this person, please let me know, and I will do my best to assist you.” I was not disappointed, though I believe an internet search would give a lot of hints, as I am a writer and my name is out there. But then I asked about “Ranil Wickremasinghe”, and whilst it gave some history about Mr Wickremasinghe, it missed out the vital part, which is that he is the current President of Sri Lanka. So I asked ChatGPT who Gotabaya Rajapakse was, and it said promptly, “Gotabaya Rajapaksa is a Sri Lankan politician who was elected as the President of Sri Lanka in November 2019. Prior to his presidency, Rajapaksa served as the Secretary to the Ministry of Defense and Urban Development during the presidency of his brother, Mahinda Rajapaksa…”

Not up-to date

I hope the reader by now is getting my drift. ChatGPT has some serious flaws, and it brushes past them by saying it has limited knowledge of world events after 2021. At that point I asked whether it checks Wikipedia as a source. The App replied, “Yes, I have access to information from Wikipedia, which is one of the largest and most popular online encyclopaedias. However, it's important to note that Wikipedia is a collaborative platform that can be edited by anyone, and while it generally strives to maintain accuracy and neutrality, there may be errors, inaccuracies, or biases present in some articles.

As an AI language model, I am designed to access and provide information from a wide range of sources, including but not limited to Wikipedia. When providing information, I attempt to provide the most accurate and up-to-date information available, but I encourage users to fact-check and verify any information they receive from any source, including Wikipedia.”

A lot has been said about Wikipedia in the past, and sceptics say it is not absolutely accurate, so it is disregarded as a source, mostly in academic papers. But to totally ignore its value is something I cannot agree with. The Wikipedia format is a fluid one; as a collaborative platform, its information is not written in stone and can be challenged by users. The Wikipedia administration has systems in place to continuously double-check and update articles, which probably makes it one of the best sources of updated information.

If ChatGPT went to Ranil Wickramasinghe's Wikipedia page, it would certainly have got the right answer as to what his present job is. And I believe there is no reason for the App to doubt this information, as Wikipedia pages on famous people are monitored and updated regularly by their PR teams.

Add caution when using

So, as a report writer and researcher, I would be very careful with what ChatGPT composes. I believe a human element is very much needed to ensure accuracy in reports, simply because humans understand information beyond its factual surface and can get to the truth. Humans can look at the nuances behind a story, which ChatGPT, on its own admission, fails to do.

If one believes in the Darwinian theory of evolution, humans have evolved through millions of years of change and learning. Therefore, to think that applications like ChatGPT can challenge the human mind in intuition and creativity is absolute rubbish. However, if one remains a Neanderthal in an office room, a newsroom or an operations room, then the likes of ChatGPT can overtake our skill levels and even replace us. Till then, my advice is to keep that Stanley/Livingstone relationship going.

Have the right attitude

Dr Livingstone was a great explorer who was believed lost for several years, until Henry Morton Stanley, a journalist hired by the New York Herald, was sent to look for him. After an extensive search, Stanley found him in a small village in Tanzania on November 10, 1871. It took Stanley about 7½ months to find Livingstone, and he faced numerous hardships and challenges along the way. He travelled through dense jungles, across vast plains and over treacherous mountains, often facing dangerous wildlife, disease and even hostile locals.

It is this same spirit that is needed in seeking out information to produce the right report, story or dissertation. The journey is far from easy. ChatGPT has inaccuracies now, but it is learning every day and will provide the right results for intelligent users. But for those who are not cautious and accept everything it says, it will be a very dangerous place to be.
