A Maturity Model for De-Weaponizing Identity Systems – Part 3

In Part 1 of this series, I discussed the types of attackers who can weaponize your identity systems and use them to cause harm. In Part 2, I introduced the design goals of the Maturity Model as well as the disciplines needed to implement it. In this post, I’ll discuss each of the five levels of the Maturity Model and the controls you should put in place to achieve those levels.

Level 1 – Managed

This level is table stakes. It optimizes your organization’s existing security controls for identity systems. I believe it helps make compliance with things like GDPR easier but it is in no way a “cure all” for regulatory burdens. To achieve Level 1, you’ll need a combination of access control, data protection, and audit:

  • Access Control
    • 2FA for admins
    • No developer access to production data
    • No program-lead access to production
  • Data Protection
    • No insecure data transfers
    • No insecure data staging
    • Data encrypted in transit
  • Audit
    • Audit all admin system configuration changes
    • Audit user access to systems

Some things to note… 2FA for admins is just good practice in every setting, especially if you do not have a privileged account management procedure in place. We often hear about “no developer access to production,” but in an era of DevOps, you want your developers in production… that doesn’t mean they need access to production data, though, just to the production systems themselves. Similarly, while developers get a lot of attention, one constituency that doesn’t is program leads. People like me should not have access to production. If you oversee an IAM program, you should not have any sort of administrative access to your production systems. Sure, you are an end-user of those systems, like everyone else, but you should not have any other privileges.

Probably not a lot of surprises in the Data Protection section, but we still see people getting tripped up by staging data insecurely.

Audit, too, comes with little surprise. Know what admins are doing to your systems and know who is using your systems.
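
To make both audit requirements concrete, admin configuration changes and user access can land in the same structured, append-only log. Here is a minimal sketch in Python; the field names and action strings are my own assumptions, not a prescribed schema:

```python
import json
import time

def audit_event(actor: str, action: str, target: str) -> str:
    """Emit one structured audit record. Works for both admin config
    changes and user access events; fields are illustrative assumptions."""
    return json.dumps({
        "ts": time.time(),   # when it happened
        "actor": actor,      # who did it
        "action": action,    # e.g. "config.change" or "user.login"
        "target": target,    # which system, setting, or record
    }, sort_keys=True)
```

The key property is that every event names an actor, an action, and a target, so “what admins are doing” and “who is using your systems” are answerable from the same stream.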

Level 2 – Defend Against Ourselves and the Successor Attacker

With Level 2 we want to prevent rogue admin attacks, whether they are technically, morally, ethically, or financially compromised. We want to do a bit more to protect data at rest and mitigate attacks from adjacent compromised services. To achieve Level 2, you’ll need a combination of identity governance, access control, and data protection:

  • Identity Governance
    • Segregation of admin duties
    • No “Read All” or “Modify All” for admins
  • Access Control
    • Explicit delegation for System-to-System access
  • Data Protection
    • Selective encryption and hashing

It’s time to dust off your SoD tools… they aren’t just for SOX anymore! We should be using our segregation of duties tools and processes on admin accounts. Regardless of how you split up privileges between admin users, avoid assigning permissions that grant unfettered access to data, such as ReadAll- and ModifyAll-type permissions.
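
As a thought sketch, both rules can be enforced at grant time: reject wildcard “all”-style permissions outright and flag segregation-of-duties conflicts before a grant lands. The permission names and conflict pairs below are hypothetical illustrations, not a real product’s model:

```python
# Hypothetical permission names; substitute your platform's own.
FORBIDDEN = {"ReadAll", "ModifyAll"}          # unfettered data access
SOD_CONFLICTS = [                             # duties one admin must not combine
    {"approve_access_request", "grant_access"},
    {"edit_audit_config", "delete_audit_log"},
]

def check_grant(existing: set[str], new_perm: str) -> list[str]:
    """Return a list of violations; an empty list means the grant is allowed."""
    violations = []
    if new_perm in FORBIDDEN:
        violations.append(f"{new_perm} grants unfettered data access")
    proposed = existing | {new_perm}
    for pair in SOD_CONFLICTS:
        if pair <= proposed:  # admin would hold both conflicting duties
            violations.append(f"SoD conflict: {sorted(pair)}")
    return violations
```

Running this check inside the provisioning path, rather than in a quarterly review, is what makes it a control rather than a report.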

Regarding access control, we should focus on mitigating the impact of compromised systems adjacent to our identity systems, the systems that integrate with them. If those adjacent systems are compromised, ours quickly are too. We should delegate access to each adjacent system explicitly… no common integration users shared across all systems. Using OAuth here is a good idea.
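
One way to make that delegation explicit is to register each adjacent system as its own OAuth client holding only the scopes it needs, so compromising one integration doesn’t hand over the others. A minimal sketch; the system names, client IDs, and scopes are invented for illustration:

```python
# Each adjacent system gets its own client and the narrowest scopes it
# needs. Names here are hypothetical examples, not a real registry.
CLIENTS = {
    "hr-system":    {"client_id": "hr-sync",    "scopes": {"users:read"}},
    "badge-system": {"client_id": "badge-sync", "scopes": {"users:read", "badges:write"}},
}

def authorize(system: str, requested_scope: str) -> bool:
    """Allow a call only if the system is registered and the scope
    was explicitly delegated to it -- no shared integration account."""
    client = CLIENTS.get(system)
    return client is not None and requested_scope in client["scopes"]
```

If the badge system is compromised, the attacker holds `badges:write` and `users:read`, not every scope every integration ever needed.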

Lastly, assuming we only collect and retain information we need in our identity systems, we should be selectively encrypting (and/or hashing as the situation warrants) data. Keyword here is ‘selectively.’ My friend Ramon Krikken’s words still ring true on this matter.
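
Selective protection can be driven by a per-field policy: encrypt what must be recoverable, hash what only needs comparison, and leave low-sensitivity fields alone. A sketch under those assumptions; the field names are hypothetical, and a real implementation would delegate encryption to a KMS-backed cipher rather than the stub parameter used here:

```python
import hashlib
import os

# Hypothetical field classifications -- the 'selective' part.
POLICY = {
    "national_id":     "encrypt",  # reversible: needed back in plaintext
    "security_answer": "hash",     # compare-only: never needs plaintext
    "display_name":    "plain",    # low sensitivity
}

def protect(field: str, value: str, encrypt_fn) -> str:
    """Apply the per-field policy; unknown fields default to encryption."""
    action = POLICY.get(field, "encrypt")
    if action == "hash":
        salt = os.urandom(16)  # salted hash: comparable, not reversible
        return salt.hex() + ":" + hashlib.sha256(salt + value.encode()).hexdigest()
    if action == "encrypt":
        return encrypt_fn(value)  # delegate to a real KMS-backed cipher
    return value
```

The point of the table is that the decision is deliberate per field, not a blanket “encrypt everything” that adds cost without thought.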

Level 3 – Defend Against Bulk Attackers

Here we want to stop attackers looking to extract a large amount of data in a short amount of time. To do so we must know who is accessing our information and insert a “breath” into the data extraction process. The disciplines we’ll need at Level 3 are access control, data management, and audit:

  • Access Control
    • 2-Person Rule for data extracts
  • Data Management
    • Query governors to prevent “large” extracts
  • Audit
    • Audit all CRUD operations

Whether it is two signatures on a piece of paper or a more formal digital approval process, I believe we need a 2-person rule for data extracts. This forces us to pause and consider, “Should this system be allowed to pull data from our identity system?” Pausing, considering, and then documenting, at the very least, helps us know where data is supposed to be flowing from and to where it is going.
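
A digital version of that approval process can be sketched as a request object that requires two distinct approvers, neither of whom is the requester. This is an illustrative sketch, not a prescribed workflow:

```python
class ExtractRequest:
    """A data extract that is authorized only after two distinct
    people (other than the requester) sign off."""

    def __init__(self, requester: str, target: str):
        self.requester = requester
        self.target = target
        self.approvals: set[str] = set()

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise ValueError("requester cannot approve their own extract")
        self.approvals.add(approver)  # a set, so one person can't approve twice

    @property
    def authorized(self) -> bool:
        return len(self.approvals) >= 2
```

The object itself is the “pause”: the extract cannot run until the approvals exist, and the approvals double as documentation of where data was supposed to flow.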

Query governors are safety nets. It makes sense for salespeople to run reports with dozens of records but not thousands. It makes sense for end users to pull a single record back (theirs) and not millions. What “large” means will differ by role, industry, and application, but the spirit of this control is to put a safety net in place and manage the exceptions.

Level 4 – Defend Against Single Data Subject Attackers

Single Data Subject Attackers are the hardest to defeat, in part because a Single Data Subject Attacker can pose as a data subject to execute the attack. Using data harvested in a previous bulk attack, a Single Data Subject Attacker can present just enough information to be indistinguishable from the actual data subject. Here’s what we should bring to bear at Level 4:

  • Access Control
    • No self-referential multi-factor accesses to data about the subject
  • Data Management
    • Behavioral query governors

We know that knowledge-based authentication (KBA) is a weak assurance mechanism and static KBA is even weaker. But it’s still in use. The problem is that a Single Data Subject Attacker is a data nerd; they know the data subject’s last mortgage payment to the penny and the subject knows it to the dollar. Organizations must stop asking knowledge-based authentication questions, especially ones whose answers live in the data set about the subject. Asking for the last payment amount to the penny is a trivial task for the Attacker but hard for the subject. Have a listen to Bob Blakley’s talk on this.

I believe that we can ask more of our data services. We should be asking those services not for static rule-based query governors, but behavioral ones. Knowing not only what role I play but also what my typical usage patterns are leads to better query governors: ones that are less annoying and less disruptive to the business, yet more effective. This is, I believe, a space in which machine learning can play a role.
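
One hedged sketch of a behavioral governor: baseline each user’s own historical query sizes and flag requests that deviate sharply from that baseline, falling back to a static rule until enough history exists. The thresholds here are illustrative assumptions, and a production system would use richer features than row counts alone:

```python
import statistics

def is_anomalous(history: list[int], requested_rows: int, sigma: float = 3.0) -> bool:
    """Flag a query whose size deviates sharply from this user's
    own baseline of past query sizes."""
    if len(history) < 5:
        # Not enough behavior observed yet: fall back to a static rule.
        return requested_rows > 1000
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (requested_rows - mean) / stdev > sigma
```

Because the threshold adapts to each user, the analyst who always pulls thousands of rows isn’t nagged, while the end user who suddenly does is stopped.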

Level 5 – Transparency

I will admit that Level 5 is a bit of a nirvana state… but, hey, it’s good to have goals. Level 5 requires audit and data management:

  • Audit
    • Make “public” who is querying data
  • Data Management
    • Data provenance bound into data

Making “public” who is querying the data leads to interesting behavior. If you knew that your looking at another salesperson’s deal was knowable to the entire company, would it change your behavior? If you knew that looking at another doctor’s celebrity patient’s files would be knowable to the entire hospital staff, would it change your behavior? Would you like to be able to file a FOIA request to see who has been looking at your passport information?

I believe that making this information “public” (where what “public” means varies by industry, geography, etc.) leads to normative social behavior kicking in, and that, I think, leads to more responsible data use. If everyone can see that I looked up something I shouldn’t have, then I won’t do it; at least, that is my supposition.

To be sure, this can also have a chilling effect. Ashkan and I have kicked this around a bit and there is no easy answer. But I leave this control as a thought experiment for you to consider in your own environment.

Lastly, we need to do more to bind data provenance into the data itself. Said differently, if you don’t know where data comes from, you should consider it fraudulent. Sadly, in both our personal and professional lives, we make decisions based on data whose origin we have no idea about. Fake news is an example of that. But if we, identity professionals, are building systems that make risk-based decisions, those systems had better be making decisions on data whose origin we do know.

There are techniques for doing this that range from data watermarking to manifest files to additional columns in a database. Years ago, I described a schema for this called Relationship Context Metadata. There are undoubtedly many other techniques. The point is that we should be looking into ways of durably stamping the provenance of our data into the data itself for all to inspect and validate.
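
As one illustration of the manifest-style approach, a record can carry an HMAC-signed provenance stamp that consumers verify before trusting the data. This is a sketch, not the Relationship Context Metadata schema itself, and key management is deliberately simplified:

```python
import hashlib
import hmac
import json

def stamp_provenance(record: dict, source: str, key: bytes) -> dict:
    """Attach a signed provenance stamp naming the record's source."""
    payload = json.dumps(record, sort_keys=True).encode()
    stamp = {"source": source,
             "digest": hashlib.sha256(payload).hexdigest()}
    stamp["sig"] = hmac.new(key, (stamp["source"] + stamp["digest"]).encode(),
                            hashlib.sha256).hexdigest()
    return {**record, "_provenance": stamp}

def verify_provenance(stamped: dict, key: bytes) -> bool:
    """A record with no stamp, or a tampered one, fails verification."""
    stamp = stamped.get("_provenance")
    if not stamp:
        return False  # unknown origin: treat as untrustworthy
    record = {k: v for k, v in stamped.items() if k != "_provenance"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(key, (stamp["source"] + stamp["digest"]).encode(),
                        hashlib.sha256).hexdigest()
    return (stamp["digest"] == hashlib.sha256(payload).hexdigest()
            and hmac.compare_digest(stamp["sig"], expected))
```

The digest binds the stamp to the data, and the signature binds both to a key the consumer trusts, which is the “durably stamped, for all to inspect and validate” property in miniature.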

Conclusion

I offer this Maturity Model as a way to start the conversation of “How can we optimize our security and data protection controls for our identity systems?” This is a conversation all organizations should have to prevent the weaponization of their identity systems. None of us want the identity systems we have built to cause harm; none of us want them to be weaponized.

I believe that you can get your identity systems to Level 1 in six months and Level 2 in 12 months. But even if you don’t, you will strengthen your systems by starting this conversation of de-weaponization and in the end your stakeholders will be better off for it.

PS

The Maturity Model is by no means complete. Take it, beat it up, and make it stronger. People like Gerry Beuchelt have already proposed improvements, and that is exactly the outcome I want. If you’ve got tweaks, improvements, complaints, etc., I’d love to hear from you. Moreover, if you happen to be in a position to formalize this Model into industry practices, please do!
