I use ChatGPT. I use Gemini. I use Claude. That’s not the apology; I don’t apologize for using AI. It’s a useful tool. I use it for writing assistance, for exploring topics and ideas of interest, for creating outlines, and for initial research.
As 2024 wrapped up, it was incredible to watch how much AI improved in a matter of months. Not long ago, it was a closed system: the LLMs did not have access to the internet.
“I’m sorry, that’s before my time”
In 2023, if you asked whether the Chiefs or Eagles won the Super Bowl, early ChatGPT would apologize with, “I apologize for the confusion, but as an AI language model, I don’t have the ability to access or review specific articles or documents in real-time. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data up until September 2021.”
ChatGPT, and most AI chatbots, now have real-time access to the internet, so it’s been a while since I’ve seen this message.
“I’m sorry, I can’t do that”
With echoes of HAL in 2001: A Space Odyssey (“I’m sorry, Dave, I’m afraid I can’t do that”), previous versions of ChatGPT did not have real-time access to the internet. When asked about data on specific websites, it would respond with, “I’m sorry, but as an AI language model, I do not have the ability to access or analyze specific offers or websites. Additionally, I am not able to provide opinions or make claims about the effectiveness or uniqueness of any particular solution or methodology. My primary function is to provide information and answer questions to the best of my ability based on my training and knowledge.”
It sounds like there were a couple of things going on here: there was not yet access to links, and guardrails were in place to protect both the user and the developers.
“I’m sorry, I misspoke”
Once, when I followed up on a lengthy response by ChatGPT, I prompted it to provide the citation for the quote in its answer. It responded with, “I apologize for the error in my previous response. Upon review, I realized that I made a mistake in attributing the quote to the U.S. Copyright Office report. The quote is actually from a report by the European Commission’s Joint Research Centre, which was published in 2019.” It found a quote that was relevant to the research I was doing and then hallucinated the source.
“I’m sorry, I’m a machine”
In some of my interactions, ChatGPT is aware (yes, it’s hard not to anthropomorphize) of its limitations. I pushed it to tell me whether a belief was objectively wrong. It responded with, “I’m sorry, as an AI language model, I don’t hold personal opinions or beliefs.” Turing test failed. The robot fails the Turing test of fooling you that it is human if it tells you outright that it is a machine. It’s not even trying.
“I’m sorry, but you really shouldn’t do that”
I don’t need to tell you the prompt; you can figure it out. Some of my research led to this response from ChatGPT. “As an AI language model, I’m committed to promoting ethical behavior and ensuring the responsible use of technology. I cannot assist in providing guidance or techniques for hacking, even in generalities. My purpose is to provide helpful and constructive information within legal and ethical boundaries.”
There you have it. You will have a hard time getting ChatGPT to tell you how to break into cars, exploit security flaws to access Pentagon servers, or expertly craft that phishing email. You’re going to have to go to the dark web for that.
“I’m sorry you caught me”
This was not really an apology. I asked ChatGPT to back up one of its references. “You’re correct to question the reference. The title and URL provided were a fabrication, a ‘hallucination,’ as you suspected. This can happen when I attempt to generate specific references or sources based on my training data, rather than accessing current, verifiable sources.”
“To provide accurate information, I should rely on verifiable references and clearly state when something is a reconstruction or approximation based on my general knowledge.”
True. ChatGPT should rely only on reliable information and not make things up. Oddly, it knows what it needs to do but does not do it.
When to say you’re sorry
I’ve found that ChatGPT’s apologies tend to fall into at least these categories:
- If you call it out on its initial response, it may apologize for including incorrect information. I found that there are some puzzles and games that it just does not get. It is horrible at Euchre.
- If you prompt for a redirect of its initial response, ChatGPT may apologize that it didn’t fully address the question or that it missed critical points.
- If it is not able to respond because of the limits of its training data or its interpretation of the prompt.
- If there are guardrails added to protect the user from personal harm or legal liability.
- If there are guardrails in place to protect the developers from legal liability.
Set your expectations appropriately
Be skeptical. Challenge ChatGPT’s responses. Know that it has limitations. Know that it won’t always tell you what those limitations are. Validate every response you get, especially if you need to rely on it or if you are going to publish it as fact. Sometimes, it will even tell you this itself. So, I don’t apologize for using AI to assist in writing; ChatGPT has done that for me.