An A(I)pology

I use ChatGPT.  I use Gemini.  I use Claude.  That’s not the apology.  I don’t apologize for using AI.  It’s a useful tool.  I use AI for writing assistance.  I use it for exploring topics and ideas that might be of interest.  I use it to create outlines.  I use it to do initial research.  

As we wrapped up 2024, it was incredible to watch how much AI improved in a matter of months.  Not long ago, it was a closed system.  The LLMs did not have access to the internet.

“I’m sorry, that’s before my time”

In 2023, if you asked whether the Chiefs or Eagles won the Super Bowl, early ChatGPT would apologize with, “I apologize for the confusion, but as an AI language model, I don’t have the ability to access or review specific articles or documents in real-time. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data up until September 2021.”

ChatGPT, and most AI chatbots, now have real-time access to the internet, so it’s been a while since I’ve seen this message.

“I’m sorry, I can’t do that”

It carries echoes of 2001: A Space Odyssey: “I’m sorry, Dave, I’m afraid I can’t do that.”  Previous versions of ChatGPT did not have access to the internet in real-time.  When asked about data on specific websites, it would respond with, “I’m sorry, but as an AI language model, I do not have the ability to access or analyze specific offers or websites. Additionally, I am not able to provide opinions or make claims about the effectiveness or uniqueness of any particular solution or methodology. My primary function is to provide information and answer questions to the best of my ability based on my training and knowledge.”

It sounds like there are a couple of things going on here.  ChatGPT did not yet have access to links, and guardrails were in place to protect both the user and the developer.

“I’m sorry, I misspoke”

Once, when I followed up on a lengthy response by ChatGPT, I prompted it to provide the citation for the quote in its answer.  It responded with, “I apologize for the error in my previous response. Upon review, I realized that I made a mistake in attributing the quote to the U.S. Copyright Office report. The quote is actually from a report by the European Commission’s Joint Research Centre, which was published in 2019.”  It found a quote that was relevant to the research I was doing and then hallucinated the source.

“I’m sorry, I’m a machine”

In some of my interactions, ChatGPT is aware (yes, it’s hard not to anthropomorphize) of its limitations.  I pushed it to say whether a belief was objectively wrong.  It responded with “I’m sorry, as an AI language model, I don’t hold personal opinions or beliefs.”  Turing test failed.  A machine cannot fool you into thinking it is human if it tells you outright that it is a machine.  It’s not even trying.

“I’m sorry, but you really shouldn’t do that”

I don’t need to tell you the prompt; you can figure it out.  Some of my research led to this response from ChatGPT.  “As an AI language model, I’m committed to promoting ethical behavior and ensuring the responsible use of technology. I cannot assist in providing guidance or techniques for hacking, even in generalities. My purpose is to provide helpful and constructive information within legal and ethical boundaries.”

There you have it.  You will have a hard time getting ChatGPT to tell you how to break into cars, exploit security flaws to access Pentagon servers, or expertly craft that phishing email.  You’re going to have to go to the dark web for that.

“I’m sorry you caught me”

This was not really an apology. I asked ChatGPT to back up one of its references.  “You’re correct to question the reference. The title and URL provided were a fabrication, a ‘hallucination,’ as you suspected. This can happen when I attempt to generate specific references or sources based on my training data, rather than accessing current, verifiable sources.”

“To provide accurate information, I should rely on verifiable references and clearly state when something is a reconstruction or approximation based on my general knowledge.”

True.  ChatGPT should rely only on reliable information and not make things up.  Oddly, it knows what it needs to do but does not do it.

When to say you’re sorry

I’ve found that ChatGPT’s apologies tend to fall into at least these categories:

  1. If you call it out on its initial response, it may apologize for including incorrect information.  I found that there are some puzzles and games that it just does not get.  It is horrible at Euchre.
  2. If you prompt for a redirect of its initial response, ChatGPT may apologize that it didn’t fully address the question or it missed critical points.
  3. If it is not able to respond due to the limitations of its training data or interpretation.
  4. If there are guardrails added to protect either the user from personal harm or legal liability.
  5. If there are guardrails in place to protect the developers from legal liability.

Set your expectations appropriately

Be skeptical.  Challenge ChatGPT’s responses.  Know that it has limitations.  Know that it won’t always tell you what those limitations are.  Validate every response you get, especially if you need to rely on it or plan to publish it as fact.  Sometimes, it will even tell you this.  So, I don’t apologize for using AI to assist in writing; ChatGPT has done that for me.

As the BI space evolves, organizations must account for the bottom-line cost of amassing analytics assets.
The more assets you have, the greater the cost to your business. There are the hard costs of keeping redundant assets, such as cloud or server capacity. Accumulating multiple versions of the same visualization not only takes up space, but BI vendors are moving to capacity pricing: companies now pay more if they have more dashboards, apps, and reports. Earlier, we spoke about dependencies. Keeping redundant assets increases the number of dependencies and therefore the complexity. This comes with a price tag.
The implications of asset failures differ, and the business’s repercussions can be minimal or drastic.
Different industries have distinct regulatory requirements to meet. The impact may be minimal if an end-of-year close report used by the sales or marketing department has a mislabeled column. On the other hand, if a healthcare or financial report does not meet the needs of HIPAA or SOX compliance, the company and its C-suite may face severe penalties and reputational damage. Another example is a report that is shared externally. During an update of the report specs, row-level security was incorrectly applied, giving people access to personal information.
The complexity of assets influences their likelihood of encountering issues.
The last thing a business wants is for a report or app to fail at a crucial moment. If you know the report is complex and has a lot of dependencies, then the probability of failure caused by IT changes is high. That means a change request should be taken into account. Dependency graphs become important. If it is a straightforward sales report that shows sales by salesperson and account, any changes made do not have the same impact on the report, even if it fails. BI operations should treat these reports differently during change.
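The idea behind those dependency graphs can be sketched with a simple graph walk. This is a minimal illustration with hypothetical asset names, not the API of any specific BI tool:

```python
from collections import deque

# Hypothetical dependency graph: each asset maps to the assets
# that consume it downstream.
DEPENDENTS = {
    "sales_table": ["sales_report", "finance_dashboard"],
    "sales_report": ["monthly_review_app"],
    "finance_dashboard": ["monthly_review_app"],
    "monthly_review_app": [],
}

def impacted_assets(changed: str) -> set:
    """Return every downstream asset affected by changing `changed`."""
    impacted, queue = set(), deque([changed])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(impacted_assets("sales_table")))
```

A change to `sales_table` touches the report, the dashboard, and the review app that sits on top of both; a change request for it deserves more scrutiny than one for an asset with no dependents.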
Not all reports and dashboards fail the same; some reports may lag, definitions might change, or data accuracy and relevance could wane. Understanding these variations aids in better risk anticipation.

Marketing uses several reports for its campaigns – standard analytic assets often delivered through marketing tools. Finance has very complex reports converted from Excel to BI tools while incorporating different consolidation rules. The marketing reports have a different failure mode than the financial reports. They, therefore, need to be managed differently.

It’s time for the company’s monthly business review. The marketing department proceeds to report on leads acquired per salesperson. Unfortunately, half the team has left the organization, and the data fails to load accurately. While this is an inconvenience for the marketing group, it isn’t detrimental to the business. However, a failure in financial reporting for a human resource consulting firm with thousands of contractors that contains critical and complex calculations about sickness, fees, hours, etc., has major implications and needs to be managed differently.

Acknowledging that assets transition through distinct phases allows for effective management decisions at each stage. When new visualizations are released, fresh information drives broad use and adoption.
Think back to the start of the pandemic. COVID dashboards were quickly put together and released to the business, showing pertinent information: how the virus spreads, affected demographics, business risks, etc. At the time, it was relevant and served its purpose. As we moved past the pandemic, COVID-specific information became obsolete, and the reporting was integrated into regular HR reporting.
Reports and dashboards are crafted to deliver valuable insights for stakeholders. Over time, though, the worth of assets changes.
When a company opens its first store in a certain area, there are many elements it needs to understand: other stores in the area, traffic patterns, pricing of products, what products to sell, etc. Once the store has been operational for some time, those specifics are not as important, and it can adopt standard reporting. The tailor-made analytic assets become irrelevant and no longer add value to the store manager.