Harnessing GPT-n to Enhance the Qlik Development Process

ChatGPT For Qlik

As you may know, my team and I have brought the Qlik community a browser extension that integrates Qlik with Git to save dashboard versions seamlessly and create dashboard thumbnails without switching to other windows. In doing so, we save Qlik developers a significant amount of time and reduce their daily stress.

I am always looking for ways to improve the Qlik development process and optimize daily routines. That’s why it is hard to avoid the most hyped topic of the moment: ChatGPT and the GPT-n models by OpenAI, or Large Language Models in general.

Let’s skip the part about how Large Language Models such as GPT-n work. Instead, you can ask ChatGPT or read the best human explanation, by Stephen Wolfram.

I will start with the unpopular thesis that “GPT-n-generated insights from data are a curiosity-quenching toy,” and then share real-life examples where the AI assistant we are working on can automate routine tasks, freeing BI developers and analysts for more complex analysis and decision-making.

[Image: an AI assistant from my childhood]

Don’t Let GPT-n Lead You Astray

… it’s just saying things that “sound right” based on what things “sounded like” in its training material. © Stephen Wolfram

So, you’re chatting with ChatGPT all day long. And suddenly, a brilliant idea comes to mind: “I will prompt ChatGPT to generate actionable insights from the data!”

Feeding GPT-n models all your business data and data models through the OpenAI API is a great temptation when chasing actionable insights, but here is the crucial thing: the primary task of a Large Language Model such as GPT-3 or higher is to figure out how to continue a piece of text that it’s been given. In other words, it “follows the pattern” of what’s out there on the web, in books, and in the other materials it was trained on.

Based on this fact, there are six rational arguments why GPT-n-generated insights are just a toy that quenches your curiosity and fuels the idea generator called the human brain:

  1. Lack of context: GPT-n and ChatGPT may generate insights that are not relevant or meaningful because they lack the necessary context to understand the data and its nuances.
  2. Lack of accuracy: GPT-n and ChatGPT may generate inaccurate insights due to errors in data processing or faulty algorithms.
  3. Over-reliance on automation: relying solely on GPT-n and ChatGPT for insights can crowd out critical thinking and analysis from human experts, potentially leading to incorrect or incomplete conclusions.
  4. Risk of bias: GPT-n and ChatGPT may generate biased insights due to the data they were trained on, potentially leading to harmful or discriminatory outcomes.
  5. Limited understanding of business goals: GPT-n and ChatGPT may lack a deep understanding of the business goals and objectives that drive BI analysis, leading to recommendations that are not aligned with the overall strategy.
  6. Lack of trust: sharing business-critical data with a self-learning “black box” will spawn the idea in top management’s bright heads that you are teaching your competitors how to win. We already saw this when the first cloud databases, such as Amazon DynamoDB, began to appear.

To prove at least one of these arguments, let’s examine how convincing ChatGPT can sound while still being wrong.

I will ask ChatGPT to solve the simple multiplication 965 * 590 and then ask it to explain the result step by step.

[Image: ChatGPT’s answer. 568 350?! Oops, something went wrong.]

In my case, a hallucination broke through into the ChatGPT response: the answer 568,350 is incorrect.
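For the record, the correct result is easy to verify by hand:

    965 * 590
    = 965 * 500 + 965 * 90
    = 482,500 + 86,850
    = 569,350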

Let’s take a second shot and ask ChatGPT to explain the result step by step.

[Image: ChatGPT’s step-by-step explanation. Nice shot, but still wrong…]

ChatGPT tries to be persuasive with a step-by-step explanation, but it’s still wrong.

Context matters. Let’s try again, but this time frame the same problem with an “act as …” prompt.

[Image: ChatGPT with the “act as …” prompt. BINGO! 569 350 is the correct answer.]

But this is a case where the kind of generalization a neural net can readily do won’t be enough to answer “what is 965 * 590”: an actual computational algorithm is needed, not just a statistics-based approach.
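One practical workaround is to let the model hand arithmetic off to ordinary code instead of predicting the digits. Here is a minimal sketch in Python; the safe_calc helper is a hypothetical illustration of such a “calculator tool,” not part of any OpenAI SDK:

    import ast
    import operator

    # Map AST operator nodes to real arithmetic; anything else is rejected.
    _OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }

    def safe_calc(expression: str):
        """Deterministically evaluate a basic arithmetic expression."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("only basic arithmetic is supported")
        return walk(ast.parse(expression, mode="eval"))

    print(safe_calc("965 * 590"))  # 569350, computed rather than guessed

In a chat integration, the model would only extract the expression from the user’s question, and the application would do the math itself.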

Who knows… maybe AI just agreed with the math teachers of the past and doesn’t get to use a calculator until the upper grades.

Since my prompt in the previous example is straightforward, you can quickly spot the flaw in ChatGPT’s response and try to fix it. But what if the hallucination breaks through into the response to questions like:

  1. Which salesperson is the most effective?
  2. Show me the Revenue for the last quarter.

It could lead us to HALLUCINATION-DRIVEN DECISION-MAKING, no mushrooms required.

Of course, I’m sure that many of the arguments above will become irrelevant within a couple of months or years thanks to the development of narrowly focused solutions in the field of Generative AI.

While GPT-n’s limitations should not be ignored, businesses can still build a more robust and effective analytical process by combining the strengths of human analysts (it’s funny that I have to highlight HUMAN) and AI assistants. For example, consider a scenario where a human analyst is trying to identify factors contributing to customer churn. Using an AI assistant powered by GPT-3 or higher, the analyst can quickly generate a list of potential factors, such as pricing, customer service, and product quality, then evaluate these suggestions, investigate the data further, and ultimately identify the most relevant factors driving churn.

SHOW ME THE HUMAN-LIKE TEXTS

[Image: a HUMAN ANALYST making prompts to ChatGPT]

An AI assistant can automate tasks that you spend countless hours on right now. It’s obvious, but let’s look closer at the area where AI assistants powered by Large Language Models such as GPT-3 and higher perform well: generating human-like text.

There are plenty of such tasks in a BI developer’s daily routine:

  1. Writing chart and sheet titles and descriptions. GPT-3 and higher, combined with an “act as …” prompt, can help us quickly generate informative and concise titles, ensuring our data visualizations are easy for decision-makers to understand and navigate (see the sketch after this list).
  2. Code documentation. With GPT-3 and higher, we can quickly create well-documented code snippets, making it easier for team members to understand and maintain the codebase.
  3. Creating master items (a business dictionary). The AI assistant can help build a comprehensive business dictionary by providing precise and concise definitions for various data points, reducing ambiguity and fostering better team communication.
  4. Creating catchy thumbnails (covers) for the sheets/dashboards in the app. GPT-n can generate engaging and visually appealing thumbnails, improving the user experience and encouraging users to explore the available data.
  5. Writing calculation formulas: set-analysis expressions in Qlik Sense and DAX queries in Power BI. GPT-n can help us draft these expressions and queries more efficiently, reducing the time spent on writing formulas and leaving more time for data analysis.
  6. Writing data load scripts (ETL). GPT-n can aid in creating ETL scripts, automating data transformation, and ensuring data consistency across systems.
  7. Troubleshooting data and application issues. GPT-n can provide suggestions and insights that help identify potential issues and offer solutions to common data and application problems.
  8. Renaming fields from technical to business names in the data model. GPT-n can help us translate technical terms into more accessible business language in a few clicks, making the data model easier for non-technical stakeholders to understand.
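As an illustration of the first task, here is a minimal sketch of how an assistant could ask a GPT model for a chart title through the OpenAI API. It uses the pre-1.0 openai Python package; the prompt wording and the chart summary are assumptions made up for this example, not our extension’s actual implementation:

    import openai

    openai.api_key = "YOUR_OPENAI_API_KEY"  # assumption: supplied by the developer

    # In a real extension this summary would be assembled from the app's
    # dimensions and measures; here it is hard-coded for illustration.
    chart_summary = "Bar chart: Sum(Sales) by Region, current year only"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Act as a senior BI developer. Write concise, "
                        "business-friendly chart titles and descriptions."},
            {"role": "user",
             "content": f"Suggest a title and a one-sentence description for: {chart_summary}"},
        ],
        temperature=0.3,  # keep the wording stable rather than creative
    )

    print(response["choices"][0]["message"]["content"])

Note how the system message is just the “act as …” trick from the multiplication experiment: it pins down the context so the continuation the model produces is the kind of text we actually want.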


AI assistants powered by GPT-n models can help us be more efficient and effective in our work by automating routine tasks and freeing time for more complex analysis and decision-making.

And this is exactly where our browser extension for Qlik Sense can deliver value. For the upcoming release, we have prepared an AI assistant that brings title and description generation to Qlik developers right inside the app while they build analytics.

By using fine-tuned GPT-n models via the OpenAI API for these routine tasks, Qlik developers and analysts can significantly improve their efficiency and allocate more time to complex analysis and decision-making. This approach also ensures that we leverage GPT-n’s strengths while minimizing the risks of relying on it for critical data analysis and insight generation.
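For example, for task 5, a request like “total sales for the current year” should come back as a set-analysis expression along these lines (the Year and Sales field names are assumptions for illustration, and an actual model response may of course differ):

    Sum({<Year = {$(=Year(Today()))}>} Sales)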

Conclusion

In conclusion, let me give way to ChatGPT:

[Image: ChatGPT’s own conclusion]

Recognizing both the limitations and potential applications of GPT-n within the context of Qlik Sense and other business intelligence tools helps organizations make the most of this powerful AI technology while mitigating potential risks. By fostering collaboration between GPT-n-generated insights and human expertise, organizations can create a robust analytical process that capitalizes on the strengths of both AI and human analysts.

To be among the first to experience the benefits of our upcoming product release, we invite you to fill out the form for our early access program. By joining, you’ll gain exclusive access to the latest features and enhancements that will help you harness the power of an AI assistant in your Qlik development workflows. Don’t miss this opportunity to stay ahead of the curve and unlock the full potential of AI-driven insights for your organization.

Join Our Early Access Program

As the BI space evolves, organizations must take into account the bottom-line cost of amassing analytics assets.
The more assets you have, the greater the cost to your business. There are the hard costs of keeping redundant assets, such as cloud or server capacity. Accumulating multiple versions of the same visualization not only takes up space; BI vendors are also moving to capacity-based pricing, so companies now pay more for every additional dashboard, app, and report. Earlier, we spoke about dependencies: keeping redundant assets increases the number of dependencies and therefore the complexity, and that too comes with a price tag.
The implications of asset failures differ, and the repercussions for the business can be minimal or drastic.
Different industries have distinct regulatory requirements to meet. The impact may be minimal if an end-of-year close report used by the sales or marketing department has a mislabeled column. On the other hand, if a healthcare or financial report does not meet the needs of a HIPAA or SOX compliance report, the company and its C-suite may face severe penalties and reputational damage. Another example is a report that is shared externally: during an update of the report specs, row-level security was applied incorrectly, giving people access to personal information.
The complexity of assets influences their likelihood of encountering issues.
The last thing a business wants is for a report or app to fail at a crucial moment. If you know a report is complex and has many dependencies, the probability of failure caused by IT changes is high, which means a change request should be taken into account and dependency graphs become important. A straightforward sales report that simply breaks sales down by salesperson and account is not affected the same way by changes, even if it fails. BI operations should treat these reports differently during change.
Not all reports and dashboards fail in the same way: some reports may lag, definitions might change, or data accuracy and relevance could wane. Understanding these variations aids better risk anticipation.

Marketing uses several reports for its campaigns: standard analytic assets, often delivered through marketing tools. Finance has very complex reports converted from Excel to BI tools while incorporating different consolidation rules. The marketing reports have a different failure mode than the financial reports and therefore need to be managed differently.

It’s time for the company’s monthly business review. The marketing department proceeds to report on leads acquired per salesperson. Unfortunately, half the team has left the organization, and the data fails to load accurately. While this is an inconvenience for the marketing group, it isn’t detrimental to the business. However, a failure in financial reporting for a human-resources consulting firm with thousands of contractors, involving critical and complex calculations for sickness, fees, hours, and so on, has major implications and needs to be managed differently.

Acknowledging that assets transition through distinct phases allows for effective management decisions at each stage. When new visualizations are released, the information they carry drives broad use and adoption.
Think back to the start of the pandemic. COVID dashboards were quickly put together and released to the business, showing pertinent information: how the virus was spreading, the demographics affected, the risks to the business, and so on. At the time, this was relevant and served its purpose. As we moved past the pandemic, COVID-specific information became obsolete, and the reporting was integrated into regular HR reporting.
Reports and dashboards are crafted to deliver valuable insights for stakeholders. Over time, though, the worth of assets changes.
When a company opens its first store in a certain area, there are many elements it needs to understand: other stores in the area, traffic patterns, product pricing, which products to sell, and so on. Once the store has been operational for some time, those specifics are not as important, and it can adopt the standard reporting. The tailor-made analytic assets become irrelevant and no longer add value for the store manager.