Post: Transitioning to a Different Cognos Security Source

When you need to reconfigure an existing Cognos environment to use a different external security source (e.g. Active Directory, LDAP, etc.), there are a handful of approaches you can take. I like to call them "The Good, the Bad, and the Ugly." Before we explore these approaches, let's take a look at some common scenarios that tend to drive authentication namespace changes in a Cognos environment.

Common Business Drivers:

Updating Hardware or OS – Modernizing BI hardware/infrastructure is a frequent driver. While the rest of Cognos may run like a champ on your sleek new hardware and modern 64-bit OS, good luck migrating your circa-2005 version of Access Manager over to that new platform. Access Manager (first released with Series 7) is a venerable holdover from days gone by for many Cognos customers. It is the sole reason that many customers keep around that crufty old version of Windows Server 2003. The writing has been on the wall for Access Manager for quite some time. It is legacy software. The sooner you can transition away from it, the better.

Application Standardization – Organizations that want to consolidate the authentication of all their applications against one centrally administered corporate directory server (e.g. LDAP, AD).

Mergers & Acquisitions – Company A buys Company B and needs Company B's Cognos environment to point to Company A's directory server, without causing issues to their existing BI content or configuration.

Corporate Divestitures – This is the opposite of the merger scenario: a portion of a company is spun off into its own entity and now needs to point its existing BI environment at the new security source.

Why Namespace Migrations can be Messy

Pointing a Cognos environment to a new security source is not as simple as adding the new namespace with the same users, groups, and roles, disconnecting the old namespace, and, VOILA!, having all of your Cognos users in the new namespace matched up with their content. In fact, you can often end up with a bloody mess on your hands, and here is why…

All Cognos security principals (users, groups, roles) are referenced by a unique identifier called a CAMID. Even if all other attributes are equal, the CAMID for a user in an existing authentication namespace will not be the same as the CAMID for that user in the new namespace. This can wreak havoc on an existing Cognos environment. Even if you only have a few Cognos users, you need to realize that CAMID references exist in MANY different places in your Content Store (and can even exist outside your Content Store in Framework models, Transformer models, TM1 applications, cubes, Planning applications, etc.).
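
To make the identifier problem concrete, here is a minimal sketch in Python. The namespace IDs ("Series7", "CorpAD") and the user ID are hypothetical illustrations, not values from any real environment, but they show why the same person gets a different CAMID once the namespace changes:

```python
# A CAMID embeds the ID of the namespace the principal belongs to.
# "Series7" and "CorpAD" below are hypothetical namespace IDs.

def camid(namespace_id: str, principal_type: str, identifier: str) -> str:
    """Build a CAMID-style reference string: CAMID("<ns>:<type>:<id>")."""
    return f'CAMID("{namespace_id}:{principal_type}:{identifier}")'

# The same human being, exposed through two different namespaces:
old = camid("Series7", "u", "jsmith")
new = camid("CorpAD", "u", "jsmith")

# Every other attribute may match, but the identifiers do not -- so any
# stored reference to the old CAMID no longer resolves after the switch.
print(old)          # CAMID("Series7:u:jsmith")
print(new)          # CAMID("CorpAD:u:jsmith")
print(old == new)   # False
```

The key takeaway: a CAMID reference is only meaningful relative to the namespace it was created against.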

Many Cognos customers mistakenly believe that CAMIDs really only matter for My Folders content, user preferences, etc. This could not be further from the truth. It's not just a matter of the number of users you have; it's the number of Cognos objects that you need to be concerned with. There are over 140 different types of Cognos objects just in the Content Store, many of which may have multiple CAMID references.

For example:

  1. It's not uncommon for a single schedule in your Content Store to have multiple CAMID references (the CAMID of the schedule owner, the CAMID of the user the schedule should run as, the CAMID of each user or distribution list it should email generated report output to, etc.).
  2. Every object in Cognos has a security policy that governs which users can access the object (think "Permissions Tab"). A single security policy hanging off a folder in Cognos Connection has a CAMID reference for each user, group, and role specified in that policy.
  3. Hopefully you get the point – this list goes on and on!
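
A rough sketch of what this fan-out looks like in practice. The object records below are simplified, hypothetical stand-ins (in a real environment these properties live on Content Store objects and are read via the Cognos SDK), but even this toy schedule carries four CAMID references and the toy folder policy carries two more:

```python
import re

# Hypothetical, simplified object records -- illustrative only.
objects = [
    {"type": "schedule",
     "owner": 'CAMID("Series7:u:jsmith")',
     "runAs": 'CAMID("Series7:u:svc_report")',
     "recipients": ['CAMID("Series7:u:alee")', 'CAMID("Series7:g:Sales")']},
    {"type": "folder",
     "policy": ['CAMID("Series7:r:Authors")', 'CAMID("Series7:g:Finance")']},
]

CAMID_RE = re.compile(r'CAMID\("[^"]*"\)')

def count_camid_refs(obj) -> int:
    """Count CAMID references anywhere in an object's property values."""
    total = 0
    for value in obj.values():
        items = value if isinstance(value, list) else [value]
        for item in items:
            if isinstance(item, str):
                total += len(CAMID_RE.findall(item))
    return total

print([count_camid_refs(o) for o in objects])  # [4, 2]
```

Multiply that per-object count across tens of thousands of objects and you get a feel for the scale of the problem.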

It is not uncommon for a sizable Content Store to contain tens of thousands of CAMID references (and we’ve seen some large ones with hundreds of thousands).

Now, do the math on what’s in your Cognos environment and you can see that you’re potentially dealing with hordes of CAMID references. It can be a nightmare! Switching (or re-configuring) your authentication namespace can leave all of these CAMID references in an unresolvable state. This inevitably leads to Cognos content & configuration problems (e.g. schedules which no longer run, content that is no longer secured the way you think it is, packages or cubes which no longer correctly implement data level security, the loss of My Folder content and user preferences, etc.).

Cognos Namespace Transition Methods

Now, knowing that a Cognos environment can have tens of thousands of CAMID references that will require finding, mapping and updating to their corresponding new CAMID value in the new authentication namespace, let’s discuss the Good, Bad & Ugly approaches for solving this problem.

The Good: Namespace Replacement with Persona

The first method (Namespace Replacement) utilizes Motio's Persona IQ product. Taking this approach, your existing namespace is "replaced" with a special Persona namespace that allows you to virtualize all security principals that are exposed to Cognos. Pre-existing security principals will be exposed to Cognos with the exact same CAMID as before, even though they may be backed by any number of external security sources (e.g. Active Directory, LDAP or even the Persona database).

The beautiful part about this approach is that it requires ZERO changes to your Cognos content. This is because Persona can maintain the CAMIDs of pre-existing principals, even when they are backed by a new source. So… all those tens of thousands of CAMID references in your Content Store, external models, and historical cubes? They can stay exactly as they are. There is no work required.

This is by far the least risky, lowest impact approach you can use for transitioning your existing Cognos environment from one external security source to another.  It can be done in under an hour with about 5 minutes of Cognos downtime (the only Cognos downtime is restarting Cognos once you’ve configured the Persona namespace).

The Bad: Namespace Migration using Persona

If the easy, low-risk approach just isn’t your cup of tea, then there is another option.

Persona can also be used to perform a Namespace Migration.

This involves installing a second authentication namespace in your Cognos environment, mapping (hopefully) all of your existing security principals (from the old namespace) to corresponding principals in the new namespace, then (here's the fun part) finding, mapping, and updating every single CAMID reference that exists in your Cognos environment: your Content Store, Framework models, Transformer models, historical cubes, TM1 applications, Planning applications, etc.
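
The core of that "find, map, update" step can be sketched as a lookup-and-rewrite over every reference. The mapping table and namespace IDs here are hypothetical, and a real migration would apply this logic through the Cognos SDK rather than to raw text, but the shape of the work is the same:

```python
import re

# Hypothetical mapping from old CAMIDs to their counterparts in the new
# namespace. Building this table -- for every user, group, and role -- is
# itself a large part of the migration effort.
camid_map = {
    'CAMID("Series7:u:jsmith")': 'CAMID("CorpAD:u:jsmith")',
    'CAMID("Series7:g:Sales")':  'CAMID("CorpAD:g:Sales")',
}

CAMID_RE = re.compile(r'CAMID\("[^"]*"\)')

def migrate_refs(text: str, mapping: dict):
    """Rewrite every mapped CAMID; collect the ones with no mapping."""
    unmapped = []

    def swap(match):
        old = match.group(0)
        if old in mapping:
            return mapping[old]
        unmapped.append(old)  # unresolvable once the old namespace is gone
        return old

    return CAMID_RE.sub(swap, text), unmapped

policy = 'CAMID("Series7:u:jsmith");CAMID("Series7:g:Sales");CAMID("Series7:r:Admins")'
migrated, missing = migrate_refs(policy, camid_map)
print(migrated)  # the two mapped refs rewritten, the third left as-is
print(missing)   # ['CAMID("Series7:r:Admins")']
```

Every principal that slips through the mapping (like the "Admins" role above) becomes a dangling reference after cutover, which is exactly where migration problems come from.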

This approach tends to be stressful and process intensive, but if you’re the kind of Cognos administrator who needs a bit of an adrenaline rush to feel alive (and doesn’t mind late night / early morning phone calls), then perhaps… this is the option you’re looking for?

Persona can be used to help automate portions of this process. It will help you create a mapping between the old security principals and the new security principals, automate the brute force "find, analyze, update" logic for content in your Content Store, etc. While Persona can automate some of the tasks here, much of the work in this approach involves "people and process" rather than actual technology.

For example – compiling information on every Framework Manager model, every Transformer model, every Planning / TM1 application, every SDK application, who owns them, and planning how they will be updated and redistributed can be a lot of work. Coordinating outages and maintenance windows for each of the Cognos environments in which you wish to attempt the migration also involves planning and Cognos "down time". Coming up with (and executing) an effective test plan for after your migration can also be quite a bear.

It's also quite normal that you'll want to do this process first in a non-production environment before trying it in production.

While Namespace Migration with Persona does work (and it's far better than the "Ugly" approach below), it is more invasive, riskier, involves far more personnel, and takes far more man hours to carry out than Namespace Replacement. Typically, migrations need to be done during "off hours", while the Cognos environment is still online but restricted from use by end users.

The Ugly: Manual Namespace Migration Services

The Ugly method involves the unenviable approach of attempting to manually migrate from one authentication namespace to another. This involves connecting a second authentication namespace to your Cognos environment, then attempting to manually move or recreate much of the existing Cognos content and configuration.

For example, using this approach, a Cognos administrator might attempt to:

  1. Recreate the groups and roles in the new namespace
  2. Recreate the memberships of those groups and roles in the new namespace
  3. Manually copy the My Folders content, user preferences, portal tabs, etc. from each source account to each target account
  4. Find every Policy Set in the Content Store and update it to reference equivalent principals in the new namespace in the exact same way it referenced principals from the old namespace
  5. Recreate all of the schedules and populate them with corresponding credentials, recipients, etc.
  6. Reset all of the “owner” and “contact” properties of all objects in the Content Store
  7. [About 40 other things in the Content Store that you’re going to forget about]
  8. Gather all of the FM models with object or data level security:
    1. Update each model accordingly
    2. Republish each model
    3. Redistribute the modified model back to original author
  9. Similar work for Transformer models, TM1 Applications and Planning Applications which are secured against the original namespace
  10. [and many more]

While some Cognos masochists might secretly giggle with joy at the idea of clicking 400,000 times in Cognos Connection, for most sensible folks, this approach tends to be extremely tedious, time-consuming, and error prone. That's not the biggest problem with this approach, however.

The biggest problem with this approach is that it almost always leads to an incomplete migration.

Using this approach, you (painfully) find and attempt to map the CAMID references that you know about… but tend to leave behind all of those CAMID references that you don't know about.

Once you think you’re done with this approach, you’re often not really done.
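
One hedged way to check whether you are "really done" is to sweep exported content for any reference that still points at the retired namespace. This is a simplified sketch over a simulated deployment-export fragment (the "Series7" namespace ID and the XML snippet are hypothetical):

```python
import re

OLD_NAMESPACE_ID = "Series7"  # hypothetical ID of the retired namespace
LEFTOVER_RE = re.compile(rf'CAMID\("{OLD_NAMESPACE_ID}:[^"]*"\)')

def find_leftovers(exported_text: str) -> list:
    """Return CAMID references that still target the old namespace."""
    return sorted(set(LEFTOVER_RE.findall(exported_text)))

# Simulated fragment of exported content after an "almost done" migration:
export = '''
  <policy>CAMID("CorpAD:g:Finance")</policy>
  <schedule runAs='CAMID("Series7:u:svc_report")'/>
  <owner>CAMID("Series7:u:jsmith")</owner>
'''

for ref in find_leftovers(export):
    print("still unmigrated:", ref)
# still unmigrated: CAMID("Series7:u:jsmith")
# still unmigrated: CAMID("Series7:u:svc_report")
```

A sweep like this only catches references in content you thought to export; references living in external models, cubes, and SDK applications still have to be hunted down separately.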

You’ve got objects in your content store that are no longer secured the way you think they are… you’ve got schedules that aren’t running the way they used to run, you have data which is no longer secured the way you think it is, and you may even have unexplained errors for certain operations that you can’t really put your finger on.

Reasons Why the Bad and Ugly Approaches can be Dreadful:

  • Automated Namespace Migrations put a lot of stress on the Content Manager. The inspection and potential update of every single object in your Content Store can often result in tens of thousands of SDK calls to Cognos (virtually all of which flow through the Content Manager). This abnormal querying typically spikes memory usage / load and puts the Content Manager at risk of crashing during the migration. If you already have any amount of instability in your Cognos environment, you should be very afraid of this approach.
  • Namespace Migrations require a sizable maintenance window. Cognos needs to be up, but you don’t want people making changes during the migration process. This will typically require the namespace migration to start when no one else is working, let’s say at 10 pm on a Friday night. No one wants to start a stressful project at 10 pm on a Friday night. Not to mention, your mental faculties are probably not at their best working nights and weekends on a project that does require you to be sharp!
  • I’ve mentioned Namespace Migrations are time and labor intensive. Here’s a bit more on that:
    • The content mapping process should be done with precision and that requires team collaboration and many man hours.
    • Multiple dry runs are required to check for errors or problems with a migration. A typical migration does not go perfectly on the first try. You’ll also need a valid backup of your Content Store that can be restored in such cases. We’ve seen many organizations that do not have a good backup available (or have a backup that they don’t realize is incomplete).
    • You need to identify everything outside the Content Store that may be potentially impacted (framework models, transformer models, etc). This task may involve coordination across multiple teams (particularly in large shared BI environments).
    • You need a good test plan that involves representative people with varying degrees of access to your Cognos content. The key here is to verify shortly after the migration completes that everything is fully migrated and functioning as you expect. It's typically impractical to verify everything, so you end up verifying what you hope are representative samples.
  • You must have broad knowledge of the Cognos environment and the things that depend upon it. For example, historical cubes with custom views HAVE to be rebuilt if you go the Namespace Migration route.
  • What if you or the company you’ve outsourced the namespace migration to forgets about something, like…SDK applications? Once you’ve flipped the switch, these things stop working if they’re not updated properly. Do you have the proper checks in place to notice this immediately, or will it be several weeks / months before the symptoms start to surface?
  • If you have undergone numerous Cognos upgrades, you can potentially have objects in your Content Store that are in an inconsistent state. If you don’t work with the SDK, you won’t be able to see which objects are in this state.

Why Namespace Replacement is the Best Option

The key risk factors and time-consuming steps I just outlined are eliminated when the Persona Namespace Replacement method is used. Using the Namespace Replacement approach, you have 5 minutes of Cognos downtime, and none of your content has to change. The "Good" method seems like a cut-and-dried "no-brainer" to me. Friday nights are for relaxing, not stressing out over the fact that your Content Manager just crashed in the middle of a Namespace Migration.
