Cognos Auditing Blog – Tips and Tricks for Large & High Volume Environments

A blog by John Boyer and Mike Norris.

Introduction

It is important to have the Cognos Auditing capability working so that you know and understand how Cognos is being used by your user community and can answer questions like:

    • Who is using the system?
    • What reports are they running?
    • What are the report run times?
    • With the help of other tools, like MotioCI, what content is unused?

Considering how critical it is to maintain healthy Cognos Analytics environments, surprisingly little has been written about its auditing database beyond the standard product documentation. Perhaps it’s taken for granted, but organizations that use it know that, over time, querying the Audit Database tables will start to slow – especially if your organization has many users running many reports and has accumulated a lot of history. What’s more, the audit logging itself may be delayed because records are queued when they can’t be written to the database quickly enough. That’s when you start to think about database performance as you would with any operational database that has reporting requirements.

Large tables typically slow query performance: the larger the table, the longer inserts and queries take. Remember that these tables and the Audit Database are essentially an operational database; writes happen frequently and work against us, because we can’t dedicate the tables to read-only operations as we would with a data mart.

Much like the content store, the health of the Cognos environment must also take into account the health of the Audit Database. Unbounded growth of the Audit Database can become an issue over time and may eventually even impact the overall performance of a Cognos environment. In many organizations with external regulations thrust upon them, not having a full audit record can land them in a non-compliance situation with heavy repercussions. So how do we deal with having to maintain so much data for historical auditing purposes — in some cases up to 10 years — yet still get the reporting we need to maintain the environment and keep users happy with the performance?

The Challenge

    • Unbounded growth of the Audit Database is negatively impacting the health of the Cognos environment
    • Reporting off the Audit Database has become slow or unusable
    • Cognos experiences delays in records being written to the Audit Database
    • The Audit Database is running out of disk space

All this means that it’s not just the reports that rely on the Audit Database that suffer, but often the entire system. If the Audit Database is on the same server as the Cognos content store, the performance of everything Cognos in that environment will be affected.

The Setup

We assume:

    1. Cognos Analytics is installed and running
    2. Cognos is configured to log to an Audit Database
        • Have an Audit Database in place
        • Set appropriate Audit logging levels in Cognos administration
        • Records are being written to the database by Cognos
    3. The Audit Database has been in use for more than a year
    4. The environment is very active with users and executions
    5. The Audit package is being used to surface Cognos usage data
    6. We are looking to improve Audit Database reporting performance
    7. Starting over or deleting old records is not always an option

If you don’t yet have Cognos Audit installed and configured, Lodestar Solutions, a Motio partner, has an excellent post on enabling Audit in Cognos BI/CA.

The Solution

There are some possible solutions that quickly present themselves:

    1. Reduce the volume of data by:
        • Moving some of the older data to another database
        • Moving some of the older data to another table in the same database
    2. Just delete or archive some of the data and don’t worry about it
    3. Live with it. Kick the can down the road and push the Database Administrator for performance improvements while handcuffing them by not allowing alterations of the schema or indexes

We’re not going to deal with option 3. Option 2, deleting the data, is not a good option; I’d recommend keeping at least 18 months’ worth. But if you’re so inclined, IBM provides a utility, AuditDBCleanup (Cognos BI), or a script (Cognos Analytics) that will do exactly that. The utility for Cognos BI deletes records based on a timestamp, while the scripts for Cognos Analytics simply drop the indexes and tables.

The recommendation we’ve made to clients on this in the past is to separate the audit data into two databases:

    1. Audit – Live: contains the most recent week’s worth of data
    2. Audit – Historical: contains historical data (up to N years)

In short, the process runs weekly to move the most recent records from Audit – Live to Audit – Historical. Audit – Live starts over with a blank slate after this process runs.

    1. The Live DB is fast and tight, allowing inserts to happen as fast as possible
    2. Audit queries are exclusively directed to the Historical DB

Using this approach, there is no implicit “stitching together” of the Live data and the Historical data. I would argue that you probably want to keep it that way.

In Cognos Administration, you can add two different connections for the Audit Data Source. When a user runs a report against the Audit package, they get prompted for which connection they want to use:

On the off chance you want to look at live audit data rather than historical audit data, you just pick the “Audit – Live” connection when prompted (this should be the exception, not the norm).

If you REALLY also want to provide a consolidated view of both Live and Historical, you could do so, but it would impact performance.

For example, you could create a third database called “Audit – Consolidated View” and then, for each table in the Audit schema, create an identically named view that is a SQL union between the table in the live database and the table in the historical database. Similarly, this could also be achieved in the Framework Manager model, but, again, performance would be a key consideration.
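
As a rough sketch of what that could look like for a single table (SQL Server–style three-part names; the “AuditLive” and “AuditHistorical” database names are placeholders for illustration only, and UNION ALL assumes the weekly move keeps the two databases disjoint):

    -- Hypothetical consolidated view over one audit table (COGIPF_RUNREPORT).
    -- "AuditLive" and "AuditHistorical" are placeholder database names;
    -- adjust the naming and syntax for your own database platform.
    CREATE VIEW dbo.COGIPF_RUNREPORT AS
    SELECT * FROM AuditLive.dbo.COGIPF_RUNREPORT
    UNION ALL
    SELECT * FROM AuditHistorical.dbo.COGIPF_RUNREPORT;

UNION ALL avoids the cost of duplicate elimination, which is safe here because a given row should only ever live in one of the two databases.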

Some of our clients have created a consolidated view, but in our opinion this is likely overkill. Performance will always be worse through the consolidated view, and we haven’t come across many use cases that need the Live and Historical data sets together: Live is typically used for troubleshooting and Historical for trend reporting.

As of Cognos Analytics 11.1.7, the Audit Database has grown to 21 tables. You can find more information elsewhere on the Audit Database, the sample audit reports, and the Framework Manager model. The default logging level is Minimal, but you may want to use the next level, Basic, to capture user requests, user account management, and runtime usage. One way to maintain system performance is to keep the logging level at the lowest level required. Obviously, the more logging the server does, the more overall server performance can be affected.

The key tables most administrators will be interested in are the six tables that log user activity and reporting activity in the system (a sample query follows the list below).

  • COGIPF_USERLOGON: Stores user logon (including log off) information
  • COGIPF_RUNREPORT: Stores information about report executions
  • COGIPF_VIEWREPORT: Stores information about report view requests
  • COGIPF_EDITQUERY: Stores information about query runs
  • COGIPF_RUNJOB: Stores information about job requests
  • COGIPF_ACTION: Records user actions in Cognos (this table may grow much more rapidly than the others)
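
To make that concrete, a usage query against COGIPF_RUNREPORT might look something like the sketch below. The column names (COGIPF_REPORTPATH, COGIPF_LOCALTIMESTAMP) are assumptions based on the default audit schema, and the date function is SQL Server syntax, so verify both against your own installation:

    -- Hypothetical query: most frequently executed reports in the last 30 days.
    -- Column names are assumed from the default audit schema; verify before use.
    SELECT COGIPF_REPORTPATH,
           COUNT(*) AS executions
    FROM COGIPF_RUNREPORT
    WHERE COGIPF_LOCALTIMESTAMP >= DATEADD(DAY, -30, GETDATE())
    GROUP BY COGIPF_REPORTPATH
    ORDER BY executions DESC;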

The out-of-the-box configuration looks like this:

Recommended configuration:

The Cognos Audit Database – Live contains 1 week of audit data. Data older than 1 week is moved to the Cognos Audit Database – Historical.

The line from the Cognos Audit Database – Live to the Cognos Audit Database – Historical in the diagram represents the weekly process responsible for (a SQL sketch follows this list):

  • Copying data from Live Audit to Historical Audit
  • Removing all rows in Live Audit that are older than 1 week
  • Removing all rows in Historical Audit that are older than x years
  • Removing all rows in COGIPF_ACTION that are older than 6 months
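
A minimal sketch of that weekly job for a single table is shown below, assuming a COGIPF_LOCALTIMESTAMP column, placeholder database names, and SQL Server date functions; in practice you would wrap this in a transaction and repeat it for every audit table:

    -- Weekly archive sketch for one table (COGIPF_RUNREPORT).
    -- Database names and the timestamp column are assumptions; verify
    -- against your schema and adjust the date functions for your platform.
    DECLARE @cutoff DATETIME = DATEADD(DAY, -7, GETDATE());

    -- 1. Copy rows older than one week from Live to Historical
    INSERT INTO AuditHistorical.dbo.COGIPF_RUNREPORT
    SELECT * FROM AuditLive.dbo.COGIPF_RUNREPORT
    WHERE COGIPF_LOCALTIMESTAMP < @cutoff;

    -- 2. Remove the copied rows from Live so it stays small and fast
    DELETE FROM AuditLive.dbo.COGIPF_RUNREPORT
    WHERE COGIPF_LOCALTIMESTAMP < @cutoff;

    -- 3. Enforce the long-term retention window in Historical (e.g., 10 years)
    DELETE FROM AuditHistorical.dbo.COGIPF_RUNREPORT
    WHERE COGIPF_LOCALTIMESTAMP < DATEADD(YEAR, -10, GETDATE());

    -- 4. Apply the shorter retention rule for COGIPF_ACTION (6 months)
    DELETE FROM AuditHistorical.dbo.COGIPF_ACTION
    WHERE COGIPF_LOCALTIMESTAMP < DATEADD(MONTH, -6, GETDATE());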

Indexes

Different database platforms offer different index types. A database index is a data structure, associated with a table (or view), used to improve query execution time when retrieving data from that table (or view). Work with your DBA to create the optimal strategy. They will want the answers to questions like these to make the best decisions on which columns to index. Obviously, the database administrator could find the answers to some or all of these questions without your help, but it would take some research and some time:

  • How many records do the tables have and to what size do you expect them to grow? (Indexing a table will not be useful unless the table has a large number of records.)
  • Do you know which columns are unique? Do they allow NULL values? Which columns have data type of integer or big integer? (The columns with numeric data types and that are UNIQUE and NOT NULL are strong candidates to participate in the index key.)
  • Where are your main performance problems today? Are they in retrieving the data? Are there specific queries or reports which are more of a problem? (This may lead the database administrator to some specific columns which can be optimized.)
  • What fields are used in joining tables for reporting?
  • What fields are used for filtering, sorting, grouping, and aggregating?

Not surprisingly, these are the same questions that would need to be answered for improving the performance of any database tables.
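
For the first question, a simple row count across the busiest audit tables is usually enough to start the conversation. A sketch in plain SQL (run it against both the Live and Historical databases):

    -- Row volumes in the most active audit tables
    SELECT 'COGIPF_RUNREPORT' AS table_name, COUNT(*) AS row_count FROM COGIPF_RUNREPORT
    UNION ALL
    SELECT 'COGIPF_VIEWREPORT', COUNT(*) FROM COGIPF_VIEWREPORT
    UNION ALL
    SELECT 'COGIPF_ACTION', COUNT(*) FROM COGIPF_ACTION;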

IBM Support recommends creating an index on columns “COGIPF_REQUESTID”, “COGIPF_SUBREQUESTID”, and “COGIPF_STEPID” for the following tables to improve performance:

  • COGIPF_NATIVEQUERY
  • COGIPF_RUNJOB
  • COGIPF_RUNJOBSTEP
  • COGIPF_RUNREPORT
  • COGIPF_EDITQUERY

Plus, on these other, less frequently used tables:

  • COGIPF_POWERPLAY
  • COGIPF_HUMANTASKSERVICE
  • COGIPF_HUMANTASKSERVICE_DETAIL

You can use this as a starting point, but I would go through the exercise of answering the questions above to arrive at the best answer for your organization.
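
As a starting-point sketch, that recommendation translates into DDL along these lines (the index name is made up for illustration; create the equivalent index on each of the tables listed above):

    -- Example of the IBM-recommended index on COGIPF_RUNREPORT.
    -- The index name is illustrative; repeat for the other tables.
    CREATE INDEX IX_COGIPF_RUNREPORT_REQUEST
        ON COGIPF_RUNREPORT (COGIPF_REQUESTID, COGIPF_SUBREQUESTID, COGIPF_STEPID);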

Other Considerations

  1. Audit FM Model. Remember that the Framework Manager model which IBM provides is modeled on the default tables and fields. Any changes you make to the reporting tables will need to be reflected in the model. The ease or complexity of these changes – or your organizational competence to make these changes – may affect the solution you choose.
  2. Additional fields. If you’re going to do it, now is the time to add additional fields for context or reference data to improve audit reporting.
  3. Summary tables. Instead of just copying the data to your historical table, compress it. You could aggregate the data to the day level to make it more efficient for reporting (see the sketch after this list).
  4. Views instead of tables. Others say, “So, instead of having a ‘current’ database and a ‘historical’ database, you should only have one database, and all tables in it should be prefixed with ‘historical’. Then, you should create a set of views, one for each table that you want to see as ‘current’, and have each view filter out the historical rows that you do not want to see and let only the current ones pass through.”
    https://softwareengineering.stackexchange.com/questions/276395/two-database-architecture-operational-and-historical/276419#276419
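
For the summary-table idea in point 3, the archive step could aggregate instead of copying row by row. A hypothetical sketch, where RUNREPORT_DAILY_SUMMARY is an invented table and the COGIPF_REPORTPATH and COGIPF_RUNTIME columns should be checked against your schema:

    -- Hypothetical daily roll-up of report executions, built during archiving.
    -- RUNREPORT_DAILY_SUMMARY is an invented table; column names are assumed
    -- from the default audit schema and should be verified.
    INSERT INTO AuditHistorical.dbo.RUNREPORT_DAILY_SUMMARY
        (run_date, report_path, executions, avg_runtime_ms)
    SELECT CAST(COGIPF_LOCALTIMESTAMP AS DATE),
           COGIPF_REPORTPATH,
           COUNT(*),
           AVG(COGIPF_RUNTIME)
    FROM AuditLive.dbo.COGIPF_RUNREPORT
    GROUP BY CAST(COGIPF_LOCALTIMESTAMP AS DATE), COGIPF_REPORTPATH;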

Conclusion

The bottom line is that with the information provided here you should be well prepared to have a productive conversation with your DBA. Chances are good that she has solved similar problems before.

The proposed changes in Cognos Audit Database architecture will improve performance in both direct reporting as well as 3rd-party applications which rely on it, like Motio’s ReportCard and Inventory.

By the way, if you’ve had that conversation with your DBA, we’d love to hear about it. We’d also love to hear if you’ve solved the issue of a poorly performing Audit Database and how you did it.
