Cognos and the Cost of NOT Testing Your BI

Updated August 28, 2019

Testing has been a standard part of software development for about as long as software has been developed. Business Intelligence (BI), however, has been slower to adopt testing as an integrated part of development in BI platforms such as IBM Cognos. Let’s explore why BI has been slower to adopt testing practices and the consequences of NOT testing.

Why organizations do not test BI…

  • Time constraints. BI projects are under constant pressure to be delivered faster. What some organizations may not realize is that the easiest phase in which to cut time is testing.
  • Budget constraints. The thinking is that testing is too expensive and that a dedicated testing team can’t be justified.
  • Faster is better. This is not necessarily an “agile” approach and may only get you to the wrong place quicker.


  • The “just do it right the first time” mentality. This naive approach assumes that building quality in up front reduces the need for testing.
  • Lack of ownership. This is similar to the previous bullet. The thinking is that “our users will test it.” This approach can lead to unhappy users and lots of support tickets.
  • Lack of tools. The misconception that they don’t have the right technology for testing.
  • Lack of understanding of testing. For example,
    • Testing should evaluate the accuracy and validity of data, data consistency, timeliness of data, performance of delivery, and ease of use of the delivery mechanism.
    • Testing during a BI project may include regression testing, unit testing, smoke testing, integration testing, user acceptance testing, ad hoc testing, stress/scalability testing, system performance testing.
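
To make the last point concrete, here is a minimal sketch of what one automated data-validation check might look like. The row structure, column names, and rules are hypothetical, not taken from any particular Cognos report:

```python
# Illustrative sketch of automated data-validation checks of the kind a
# BI unit test might run against a report's result set. All field names
# are hypothetical.

def validate_report_rows(rows):
    """Run basic accuracy and consistency checks on a list of report rows."""
    errors = []
    for i, row in enumerate(rows):
        # Data validity: required fields must be present and non-null.
        if row.get("region") is None:
            errors.append(f"row {i}: missing region")
        # Data consistency: components should agree with the reported total.
        if row["units_sold"] * row["unit_price"] != row["revenue"]:
            errors.append(f"row {i}: revenue does not equal units * price")
    return errors

rows = [
    {"region": "East", "units_sold": 10, "unit_price": 5.0, "revenue": 50.0},
    {"region": None, "units_sold": 3, "unit_price": 2.0, "revenue": 7.0},
]
print(validate_report_rows(rows))
```

Even checks this simple catch accuracy and consistency problems before your users do.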

What Are the Costs of NOT Testing BI?

  • Inefficient designs. Poor architecture may go undiscovered if testing is ignored. Design issues affect usability, performance, and re-use, as well as maintenance and upkeep.
  • Data integrity issues. Data corruption or data lineage challenges can lead to lack of trust in the numbers.
  • Data validation issues. Decisions made on bad data may be devastating to the business. There’s nothing worse than trying to manage by metrics that are based on incorrect information.

[Dilbert cartoon: “the data is wrong”]

  • Decreased user adoption. If the numbers aren’t right, or if the application is not user-friendly, your user community just won’t use your shiny new enterprise BI software.
  • Increased costs due to lack of standardization.
  • Increased costs to repair defects in later stages of the BI development life cycle. Any issues discovered beyond the requirements phase will cost exponentially more than if discovered earlier.

Now that we’ve laid out why organizations might not be testing and the pitfalls that occur when you do not test BI, let’s look at some studies on testing in software development.

Studies Show Testing Your BI Platform Saves Money!

One study of 139 North American companies ranging in size from 250 to 10,000 employees reported annual debugging costs of $5.2M to $22M. This cost range reflects organizations that do not have automated unit testing in place. Separately, research by IBM and Microsoft found that with automated unit testing in place, the number of defects can be reduced by between 62% and 91%. This means that dollars spent on debugging could be reduced from the $5.2M – $22M range to the $0.5M – $8.4M range. That’s a huge savings!
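
For readers who want to trace the arithmetic, the savings estimate works out as follows, using the figures quoted in this section:

```python
# Apply the 62%–91% defect-reduction range to the $5.2M–$22M annual
# debugging cost range reported in the study.

low_cost, high_cost = 5.2, 22.0              # annual debugging cost, in $M
max_reduction, min_reduction = 0.91, 0.62    # defect reduction from unit testing

best_case = low_cost * (1 - max_reduction)   # smallest cost, biggest reduction
worst_case = high_cost * (1 - min_reduction) # largest cost, smallest reduction

print(f"${best_case:.1f}M to ${worst_case:.1f}M")  # prints "$0.5M to $8.4M"
```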

[Chart: debugging costs without testing vs. with testing]

Costs to Fix Errors Quickly Escalate

A paper on successful software development tactics demonstrates that most errors are made early in the development cycle and that the longer you wait to detect and correct them, the more they cost to fix. So, it doesn’t take a rocket scientist to draw the obvious conclusion that the sooner errors are discovered and fixed, the better. Speaking of rocket science, it just so happens that NASA published a paper on just that – “Error Cost Escalation Through the Project Life Cycle.”

It is intuitive that the cost to fix errors increases as the development life cycle progresses. The NASA study was performed to determine just how quickly the relative cost of fixing errors escalates as a project progresses. It used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in the paper presume development of a hardware/software system with project characteristics similar to those of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. The study is representative of other research in this area.

[Chart: relative cost to fix errors by SDLC phase]

From the chart above, research from TRW, IBM, GTE, Bell Labs, TDC and others shows the cost of fixing errors during the different development phases:

  • The cost of fixing an error discovered during the requirements phase is defined as 1 unit
  • The cost to fix that error if found during the design phase is double that
  • At the code and debug phase, the cost to fix the error is 3 units
  • At the unit test and integrate phase, the cost to fix the error becomes 5
  • At the systems test phase, the cost to fix the error jumps to 20
  • And once the system is in the operation phase, the relative cost to correct the error has risen to 98, nearly 100 times the cost of correcting the error if found in the requirements phase!
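
The chart’s multipliers are simple enough to encode directly; here is a quick sketch, with the phase names and values transcribed from the list above:

```python
# Relative cost to fix an error, by the phase in which it is discovered,
# normalized so that the requirements phase costs 1 unit.
relative_cost = {
    "requirements": 1,
    "design": 2,
    "code and debug": 3,
    "unit test and integrate": 5,
    "systems test": 20,
    "operation": 98,
}

def escalation(late_phase, early_phase):
    """How much more an error costs when caught in late_phase vs. early_phase."""
    return relative_cost[late_phase] / relative_cost[early_phase]

print(escalation("operation", "requirements"))  # 98.0
print(escalation("operation", "design"))        # 49.0
```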

The bottom line is that it is much more costly to repair defects if they’re not caught early.

Conclusions

Significant research has been conducted which demonstrates the value of early and continuous testing in software development. We, in the BI community, can learn from our friends in software development. Even though most formal research has been done related to software development, similar conclusions can be drawn about BI development. The value of testing is indisputable, but many organizations have been slower to take advantage of formal testing of their BI environment and integrate testing into their BI development processes. The costs of not testing are real. The risks associated with not testing are real.

Want to see some automated Cognos testing in action? Watch the videos on our playlist by clicking here!

As the BI space evolves, organizations must take into account the bottom line of amassing analytics assets.
The more assets you have, the greater the cost to your business. There are the hard costs of keeping redundant assets, e.g., cloud or server capacity. Accumulating multiple versions of the same visualization not only takes up space; BI vendors are also moving to capacity pricing, so companies now pay more as they accumulate dashboards, apps, and reports. Earlier, we spoke about dependencies. Keeping redundant assets increases the number of dependencies and therefore the complexity. This comes with a price tag.
The implications of asset failures differ, and the business’s repercussions can be minimal or drastic.
Different industries have distinct regulatory requirements to meet. The impact may be minimal if an end-of-year close report used by the sales or marketing department has a mislabeled column. On the other hand, if a healthcare or financial report does not meet the needs of HIPAA or SOX compliance, the company and its C-level suite may face severe penalties and reputational damage. Another example is a report that is shared externally: during an update of the report specs, the low-level security was incorrectly applied, giving people access to personal information.
The complexity of assets influences their likelihood of encountering issues.
The last thing a business wants is for a report or app to fail at a crucial moment. If you know a report is complex and has many dependencies, the probability of failure caused by IT changes is high, and a change request should take that into account. Dependency graphs become important. If it is a straightforward sales report that lists sales by salesperson and account, changes do not have the same impact, even if the report fails. BI operations should treat these reports differently during change.
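
To illustrate why dependency graphs matter here, the sketch below walks an inverted dependency map to find every asset affected by a change to one upstream source. All asset and report names are hypothetical:

```python
from collections import deque

# report/asset -> the assets it depends on (hypothetical example)
depends_on = {
    "finance_dashboard": ["consolidation_view"],
    "consolidation_view": ["gl_table", "fx_rates"],
    "sales_report": ["orders_table"],
}

def impacted_by(changed_asset):
    """Return every asset that transitively depends on changed_asset."""
    # Invert the edges: asset -> assets that use it directly.
    used_by = {}
    for report, deps in depends_on.items():
        for dep in deps:
            used_by.setdefault(dep, []).append(report)
    # Breadth-first walk upward through the inverted graph.
    impacted, queue = set(), deque([changed_asset])
    while queue:
        node = queue.popleft()
        for user in used_by.get(node, []):
            if user not in impacted:
                impacted.add(user)
                queue.append(user)
    return impacted

print(impacted_by("gl_table"))      # the whole finance chain is affected
print(impacted_by("orders_table"))  # only the simple sales report
```

A change to the general-ledger table ripples through two downstream assets, while a change to the orders table touches only one — exactly the distinction that should drive how a change request is handled.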
Not all reports and dashboards fail the same; some reports may lag, definitions might change, or data accuracy and relevance could wane. Understanding these variations aids in better risk anticipation.

Marketing uses several reports for its campaigns – standard analytic assets often delivered through marketing tools. Finance has very complex reports converted from Excel to BI tools while incorporating different consolidation rules. The marketing reports have a different failure mode than the financial reports. They, therefore, need to be managed differently.

It’s time for the company’s monthly business review. The marketing department proceeds to report on leads acquired per salesperson. Unfortunately, half the team has left the organization, and the data fails to load accurately. While this is an inconvenience for the marketing group, it isn’t detrimental to the business. However, a failure in financial reporting for a human resource consulting firm with thousands of contractors, reporting that contains critical and complex calculations about sickness, fees, hours, etc., has major implications and needs to be managed differently.

Acknowledging that assets transition through distinct phases allows for effective management decisions at each stage. When a new visualization is released, the information it delivers drives broad use and adoption.
Think back to the start of the pandemic. COVID dashboards were quickly put together and released to the business, showing pertinent information: how the virus was spreading, which demographics were affected, risks to the business, etc. At the time, this was relevant and served its purpose. As we moved past the pandemic, COVID-specific information became obsolete, and the reporting was integrated into regular HR reporting.
Reports and dashboards are crafted to deliver valuable insights for stakeholders. Over time, though, the worth of assets changes.
When a company opens its first store in a new area, there are many elements it needs to understand – other stores in the area, traffic patterns, product pricing, what products to sell, etc. Once the store has been operational for some time, those specifics are not as important, and it can adopt the standard reporting. The tailor-made analytic assets become irrelevant and no longer add value to the store manager.