Post: Converting from CQM to DQM: A Cognos Customer’s Journey

Whether you’re considering moving to IBM Cognos Analytics on the Cloud, want to use JDBC drivers instead of a native database client, or simply want greater visibility into the performance characteristics of your queries, adopting Dynamic Query Mode is a great idea.

When Performance Food Group (PFG), one of the most trusted leaders in the foodservice industry, recently upgraded from Cognos 10.2.1 to 11.0.12 as part of its continuous improvement process, the team decided to update their packages from CQM to DQM as well. Sumit Kumar, PFG’s IT manager responsible for overseeing reporting, analysis, and consulting, led the package migration and was tasked with overcoming any challenges encountered along the way.

Benefits Of Converting From CQM To DQM

Performance Food Group’s reasons for migrating were numerous. One of the main benefits for Sumit and PFG as a whole was the ability to build reports that combine data from multiple packages. Dynamic Query Mode would allow Sumit to create reports spanning subject areas such as sales, procurement, and inventory, even though they live in three completely different packages. Compatible Query Mode does not have this capability, so the choice was obvious.

Converting from Compatible Query Mode to Dynamic Query Mode would also allow them to greatly reduce report execution time by leveraging query execution on 64-bit architecture. By migrating, Sumit knew they were not only laying the infrastructure to make future upgrades easier but also giving Performance Food Group a foundation for bringing predictive analytics into their automation.

The benefits of converting were obvious, but what challenges lay ahead?

After choosing 13 Cognos packages to be converted, Sumit encountered his first obstacles during the project planning and execution phase.

Project Planning and Execution Challenges

The first roadblock Sumit faced was choosing between Waterfall and Agile delivery. He chose the latter for the CQM to DQM conversion because it allowed him to deploy each package independently: a package was deployed once all of its important reports ran successfully, and if some low-priority reports still had errors, the team deployed the package anyway and fixed those reports later. This allowed them to deliver business value up front without losing time, though to play it safe they kept a one-month buffer in case they needed additional help from the IBM product support team.

Now that Sumit and Performance Food Group had overcome the project planning and execution phase, it was time for them to tackle the next issue: technical and infrastructure challenges due to the behavior of the packages in Dynamic Query Mode.

According to Sumit, converting from CQM to DQM was time and effort very well spent. After the conversion, report execution time was reduced by 60% on average!

Technical and Infrastructure Challenges

Dynamic Query Mode enforces best practices that were optional in Compatible Query Mode. Comment syntax is one example: the double-hyphen single-line style (‘--’) versus the slash-asterisk block style (‘/*’). CQM accepts all of these anywhere, whereas DQM sometimes accepts them and sometimes doesn’t, depending on placement. These seemingly small issues can cause sporadic errors or even entire reports to fail. Comments in advanced filters, SQL queries, and custom calculations were also known to cause errors. A SQL query comparison tool was considered to standardize comment placement and reduce errors from that area, but the investigation went further in order to catch every error occurrence. Including the Sum() function in the data model or package definition also created errors, which can be remedied by replacing it with the Total() function instead.
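As a minimal illustration (not taken from PFG’s reports; the table and column names are hypothetical), both comment styles below are valid SQL, but DQM can be stricter than CQM about where they appear:

```sql
/* Block comment before the statement: generally safe in both modes */
SELECT product_id,
       quantity    -- trailing single-line comment inside the select list,
                   -- a placement DQM may reject even though CQM accepts it
FROM inventory
WHERE quantity > 0
```

When a converted report fails sporadically, stripping or relocating comments in advanced filters, SQL queries, and custom calculations is a quick first diagnostic step.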

Dynamic Query Mode also makes certain assumptions that Compatible Query Mode does not, causing report outputs to differ. Running the same report in CQM and DQM may give different results solely based on how each mode interprets a function. For example, Total(Total(Sales)) in CQM is treated as equivalent to Total(Sales), ignoring the duplicate total, whereas DQM does not ignore the duplicate total, producing a different report output. Similarly, the way aggregation options are implemented varies between the two modes: filters on calculated or aggregate columns can yield different results depending on the aggregate property selected, such as “Before Aggregation” or “After Aggregation”.
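One way to picture the nested-total difference is the sketch below. It is not Cognos code, and the per-row evaluation of the inner total is an assumption used purely to illustrate how the two interpretations can diverge:

```python
# Detail rows for a hypothetical Sales column
sales = [100, 250, 150]

# CQM-style interpretation: Total(Total(Sales)) collapses to Total(Sales),
# i.e. the duplicate total is ignored
cqm_result = sum(sales)

# DQM-style interpretation (assumed for illustration): the inner Total is
# evaluated first, yielding the grand total repeated on every detail row,
# and the outer Total then sums those repeated values
inner_total_per_row = [sum(sales)] * len(sales)
dqm_result = sum(inner_total_per_row)

print(cqm_result)  # 500
print(dqm_result)  # 1500
```

The takeaway is that a nested aggregate that was harmless in CQM can silently multiply values in DQM, so such expressions are worth auditing during conversion.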

Other Challenges

  • Dynamic Query Mode may apply a different order of operations, which can change the report output.
  • Report-level cardinality definitions can change the report output.
  • Report validation can still show critical errors after warnings are resolved: all warning messages must be fixed before the compiler will reveal the underlying critical error. A report that isn’t running and shows only a warning must have that warning addressed first.
  • Rendering reports with very large amounts of data may fail with “Java out of memory” errors; this can be fixed by disabling the local caching property in the report properties for those reports, and further aided by increasing the configured memory for the query service.
  • The JVM configuration has to be fine-tuned according to best practices in order to prevent future problems.

In Conclusion

According to Sumit, the migration process was time and effort very well spent. After the conversion, report execution time was reduced by 60% on average! He definitely recommends converting packages from CQM to DQM and your environment from 32-bit to 64-bit.

Considering converting from Compatible Query Mode to Dynamic Query Mode or recently converted? We’d love for you to share your experience with us or any tips and tricks you may have!

Want to learn more? We hosted a webinar where we discussed the challenges Performance Food Group faced in their migration to Dynamic Query Mode. Replay the webinar by clicking here.

As the BI space evolves, organizations must take into account the bottom line of amassing analytics assets.
The more assets you have, the greater the cost to your business. There are hard costs to keeping redundant assets, such as cloud or server capacity. Accumulating multiple versions of the same visualization not only takes up space; BI vendors are also moving to capacity pricing, so companies now pay more as they add dashboards, apps, and reports. Earlier, we spoke about dependencies. Keeping redundant assets increases the number of dependencies and therefore the complexity, and this comes with a price tag.
The implications of asset failures differ, and the business’s repercussions can be minimal or drastic.
Different industries have distinct regulatory requirements to meet. The impact may be minimal if an end-of-year close report used by the sales or marketing department has a mislabeled column. On the other hand, if a healthcare or financial report does not meet the needs of HIPAA or SOX compliance, the company and its C-suite may face severe penalties and reputational damage. Another example is a report that is shared externally: during an update of the report specs, row-level security was incorrectly applied, giving people access to personal information.
The complexity of assets influences their likelihood of encountering issues.
The last thing a business wants is for a report or app to fail at a crucial moment. If you know a report is complex and has many dependencies, the probability of failure caused by IT changes is high. That means a change request should be taken into account, and dependency graphs become important. If it is a straightforward sales report that lists sales by salesperson and account, changes do not have the same impact, even if the report fails. BI operations should treat these reports differently during change.
Not all reports and dashboards fail the same; some reports may lag, definitions might change, or data accuracy and relevance could wane. Understanding these variations aids in better risk anticipation.

Marketing uses several reports for its campaigns – standard analytic assets often delivered through marketing tools. Finance has very complex reports converted from Excel to BI tools while incorporating different consolidation rules. The marketing reports have a different failure mode than the financial reports. They, therefore, need to be managed differently.

It’s time for the company’s monthly business review. The marketing department proceeds to report on leads acquired per salesperson. Unfortunately, half the team has left the organization, and the data fails to load accurately. While this is an inconvenience for the marketing group, it isn’t detrimental to the business. However, a failure in financial reporting for a human resource consulting firm with thousands of contractors, containing critical and complex calculations about sickness, fees, hours, etc., has major implications and needs to be managed differently.

Acknowledging that assets transition through distinct phases allows for effective management decisions at each stage. When new visualizations are first released, the information they carry drives broad use and adoption.
Think back to the start of the pandemic. COVID dashboards were quickly put together and released to the business, showing pertinent information: how the virus was spreading, which demographics were affected, the risks to the business, etc. At the time, this was relevant and served its purpose. As we moved past the pandemic, COVID-specific information became obsolete, and the reporting was integrated into regular HR reporting.
Reports and dashboards are crafted to deliver valuable insights for stakeholders. Over time, though, the worth of assets changes.
When a company opens its first store in a certain area, there are many elements it needs to understand – other stores in the area, traffic patterns, pricing of products, what products to sell, etc. Once the store is operational for some time, specifics are not as important, and it can adopt the standard reporting. The tailor-made analytic assets become irrelevant and no longer add value to the store manager.