Hi. Welcome to this presentation in the Identity Manager Unknown Unknowns series. I'm Rob Byrne, a field strategist at One Identity, and our subject in this session is identity analytics capabilities in Identity Manager and, perhaps, a touch of machine learning.
We're going to go through some of the existing reporting and analytics capabilities of Identity Manager, as well as some of the newer stuff you may not have had a chance to look at yet. So think of this presentation partly as a refresher on what's already there. Maybe you haven't had a chance to sit down and look at it, or you haven't been back to it for a while. Let's reconsider a little bit, and take the time to think about how we can get value from the metrics, analytics, reports, and KPIs that are already present in the platform. That's what we're going to talk about today.
In terms of the topics, I'll start with a brief overview of what we mean by analytics and a little bit of its relationship to machine learning and artificial intelligence. Then I've broken the rest down into three topics: access reporting, access recommendations-- this is where the [INAUDIBLE] oriented analytics comes in --and then insights and alerts.
Rather than just dumping a report on me, point me to or show me things that, perhaps, I wasn't expecting. That kind of surfacing of valuable information certainly has a lot of value. Those are the topics that we're going to cover.
So, let's get started. What do we mean by ML, machine learning, artificial intelligence? Well, it's a similar question to asking what blockchain is. If you look at blockchain, it's an implementation of a data structure called a Merkle tree, a binary tree where hashing is used so that all the different nodes in the tree feed into the hashes of the higher layers, which gives you a tamper-proofing capability.
In an abstract way, you can categorize blockchain as a Merkle tree. ML, machine learning, artificial intelligence, in a similar way, essentially comes down to computational statistics: using algorithms to tease and squeeze information and insight from data. That makes it sound like there's nothing new there, which is unfair. What's new is the algorithms, the new techniques being used in computational statistics, and the fact that we now have more computing power and larger data sets, which allows those algorithms to actually give us value in reasonable amounts of time.
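As an aside, here's a minimal sketch of that Merkle-tree idea, just to show how hashing the lower nodes up into the higher layers makes tampering detectable. The records are invented, and this isn't tied to any particular blockchain implementation.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash pairs of nodes upward until a single root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node if the level is odd
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Any change to a leaf changes the root, which is the tamper-evidence property.
print(merkle_root([b"record-1", b"record-2", b"record-3", b"record-4"]).hex())
print(merkle_root([b"record-1", b"tampered", b"record-3", b"record-4"]).hex())
```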
Well, what are these algorithms doing? They're counting. They're filtering. They're aggregating. They're best-fitting, as in linear regression, which is the thing where you fit a line to a set of data points. They're clustering and grouping-- think about that for role mining in our context. They're baselining, so watching and measuring metrics and establishing what's normal. They're adjusting and adapting behaviors, tuning parameters to optimize performance, self-regulating over time. These are the kinds of things that they're doing.
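To make a couple of those operations concrete, here's a minimal sketch with made-up numbers: fitting a line to a series of data points to get the trend, and baselining the same metric to flag values that fall outside what's normal.

```python
import statistics

# Hypothetical daily counts of access requests (invented numbers).
requests_per_day = [12, 15, 11, 14, 13, 16, 12, 14, 13, 41]
xs = list(range(len(requests_per_day)))

# Best fitting: a least-squares line (slope and intercept) through the points.
mean_x = statistics.mean(xs)
mean_y = statistics.mean(requests_per_day)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, requests_per_day)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(f"trend line: y = {slope:.2f}x + {intercept:.2f}")

# Baselining: flag days more than two standard deviations from the mean.
sigma = statistics.stdev(requests_per_day)
anomalies = [(day, count) for day, count in enumerate(requests_per_day)
             if abs(count - mean_y) > 2 * sigma]
print("days outside the baseline:", anomalies)
```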
These techniques promise us a lot in terms of becoming more human and allowing us easier interaction with these systems: natural language processing, image recognition, autonomous vehicles. There are energy savings, right? If the system in the house learns that I'm not at home at certain times, it can act on that. There are reduced operational headaches in computer systems, and so on. Intelligent things could be everywhere: an oven that won't burn your dinner, a pot that won't boil over, all these kinds of things. I haven't explicitly told it not to undercook or overcook the cake, and yet it will know what the right amount actually is.
Now, in our world of identity and access, the promise, if we extrapolate into the future, is zero-maintenance operations: self-healing databases, elastic load balancing for services, processes that adapt to the load the system is under. From an access management point of view, secure, seamless, just-in-time access to anything, anytime, anywhere. These are big promises to make. It's, of course, something to aim for.
In terms of machine learning and AI technology, we've had questions: hey, is it in a state where it's fit for purpose, for the kinds of things it's being used for? Well, we know that, like all new technology, there are still problems. It still needs to be tweaked, and, with this type of technology, a lot of false negatives can happen.
These algorithms, particularly the machine learning ones, need to be trained, and with the training and the data sets that are used, you can inject bias into these systems. This is one of the big problems with the data: if you have poor data quality or biased data training the algorithms, that's going to be a problem. Look at examples like the Microsoft chatbot that learned and grew over time, which was great, except it grew and learned to be a foul-mouthed racist in the forums it had been introduced to, and it had to be removed. It didn't go too well.
Amazon and IBM, interestingly, have recently announced that they've suspended facial recognition work, R&D in those areas, and the associated services, over concerns around regulation that's currently unclear and a fear of abuse and bias in the systems. The technology is still very much at an early adopter stage. Take self-driving cars, which are not really self-driving-- well, they are self-driving, but they still need a person to govern them, to be responsible --and we know it cuts both ways. They've saved people in tricky accident situations, but it's also gone the other way: where we've relied on them too much, they've actually caused problems, because the technology is not at a good enough state at the moment. So that's that. The other thing that can go wrong with data sets is that when we come down to actually interpreting and using the data, we can get into difficulty.
So the first example I have there is around extrapolation. It's dangerous to extrapolate from our data; we know this. Then there's misinterpretation and misuse of data. The sat-nav is a pretty smart system, but if it draws a straight line across a water channel as the quickest route to get me from A to B, am I going to follow that blindly? Or am I going to do the right thing?
Another example would be something like this, which I found an interesting case. During one of the many European wars, they had a look at the airplanes coming back from missions, and the question was, hey, how can we better protect our airplanes?
Let's have a look at the statistical distribution of these bullet holes. The initial reaction was to say, well, logically, we're going to put some more armor around those spots that are typically being hit. And, of course, there's a cost: it adds weight, so less range and fuel. There's a trade-off there.
It was actually a Hungarian engineer and mathematician who pointed out that this was totally the wrong way to look at it. In fact, the airplanes that don't come back are the ones that have been hit in the other places, the places where there are gaps in the data. What we should be doing is reinforcing those other areas: around the engine, around the pilot and cockpit, around the wings where perhaps there's fuel stored, and so on. It's a very subtle point.
So even when we have the data sets, and even when they're totally reliable-- you've got the planes sitting right in front of you --we need to be careful about how we interpret and make use of them. This is all just setting the stage, a little overview of where we are with machine learning, artificial intelligence, and the interpretation of data. There's a whole topic there.
Now we're going to move on to talk specifically about Identity Manager, and I wanted to break this down, as I said, into those three categories: reporting, recommending, and offering insight and alerting. If you take a look at Identity Manager, just open up the web portal and log in as somebody with an auditor role, so with access to a lot of information-- actually, all information --you'll get a whole bunch of default dashboards, a whole bunch of default reports, a whole bunch of default information, and, perhaps, too much information.
The real question, I think, is who is this information for, and what kind of information are they particularly interested in? In terms of the category of access reporting, let's go over it: who benefits from access reporting? The approach I want to take here, to try and help us cut this down and think about it in the right way, is to think about the personas. So I have a persona-driven approach.
The first type of persona that will find reporting interesting, I think, ahead of all the others, would be the Person and Organization Managers. What does my team have access to? What kind of access has my organization got? Tell me who's recently joined, who's recently left, who's recently moved around the organization. These people will be interested in the status of access, the status of alerts, and trends, the movements.
Then, in terms of the Risk and Compliance Officers and System Administrators, the kinds of reports they'll be interested in are orphans and outliers. In my experience, these are typically the first reports that we should bring out of a project. Typically within the first couple of months, we can start to issue these reports and already show some value to that class of user:
"Hey, you've got lots of orphan accounts. That's a risk."
"Oh, yeah, it is."
Negative changes are another one. If there are a lot of negative changes, perhaps that's normal when you don't have a system in place right now, but we should definitely start to measure them and be able to show value over time as the negative changes get reduced. Then sleepers, dormant accounts: obviously, accounts that exist but are unused.
Here are just a couple of samples of status-and-trends-type reports that we have as standard in Identity Manager. You have one here, the Employee Overview with History, so you get full access information for any employee, and of course it can be generated for all employees in the team as well. You have that information right at your fingertips: the profile and all the access they've got, including the historical information. That's super powerful and super useful to show to Auditors and the Risk and Compliance crowd, but for the manager as well, just to see what access his people have.
In terms of the trends and the status, there are plenty of dashboards available showing trending around your organization: its size, the events happening around it. There may be trends that are of specific interest to you or to the organizations you're working with, and of course we can easily add a dashboard showing that trending to the web portal, where we can extend this reporting.
Now, in terms of the Risk and Compliance Officers, those orphan accounts: very easily, once you connect the systems, you can generate that information. Of course you can drill down into each system and see what those accounts are. Put that on the desk of each of the system owners and say, "We need to talk about this. How are we going to mitigate these accounts?" Sorry, I went the wrong way there.
The Outliers-- well, what's an Outlier? An Outlier is potentially an indicator of entitlement creep. Show me all the accounts. You can see here there's a configurable threshold. There are 1,500-odd accounts here, and 31 of them are way out of kilter with respect to everybody else. Maybe that's normal. Maybe it's not. But it certainly deserves to be looked at.
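Just to illustrate the idea behind a configurable outlier threshold-- this is my own toy sketch with invented accounts, not how the product's report is actually implemented --think of it as flagging accounts whose entitlement count sits too far from the norm:

```python
import statistics

# Invented accounts and how many entitlements each one holds.
entitlement_counts = {
    "a.adams": 9, "b.baker": 11, "c.clark": 10, "d.davis": 8,
    "e.evans": 12, "f.frank": 10, "g.grant": 9, "svc-legacy": 57,
}

mean = statistics.mean(entitlement_counts.values())
sigma = statistics.stdev(entitlement_counts.values())
THRESHOLD = 2.0  # the configurable part: how far from the norm counts as an outlier

outliers = {acct: n for acct, n in entitlement_counts.items()
            if abs(n - mean) > THRESHOLD * sigma}
print(outliers)  # accounts way out of kilter with respect to everybody else
```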
Of course we can drill down and see what those accounts are, so immediately, when you see this, you think of other questions to ask. OK, what are these Outliers? Are they growing over time, or am I actually converging to some kind of norm in this solution? I'd like to see the trends, and in terms of the KPI, I definitely want to see that number decreasing so that the project can claim improvements.
What about Outlier groups? Here we're looking at accounts, but what about Outlier groups, groups that have abnormal memberships in them? Is that growing or decreasing over time? Especially, for example, the domain admins group: is that growing over time, or is it decreasing? Is it fluctuating? What's going on with that? So it's about having visibility onto that.
Some of the reports I'm showing here are obviously out of the box, and some of that data is really just below the surface, but it's certainly easily accessible to us. There's enormous value just in extracting these existing reports and data from the solution.
In terms of the negative changes, there's a report for that. Any negative changes in Active Directory are, for obvious reasons, always interesting to see. Then the sleepers, the dormant accounts: accounts that are still littering my environment, increasing the attack surface, and a big risk if any of them get compromised, all the more so because they're dormant and sleepers. "Oh, they're not being used. Perhaps it's easy to forget about them. Maybe they drop off the dashboards." They do need to be addressed.
We had a conversation recently with Cecil, who told us that he still gets messages on LinkedIn from old employees who left the company, saying, "Hey, it's a bit weird. I still seem to have access to some of those systems." People come back, get rehired, and find they still have access. There's a lot of scope out there in the world to look at these metrics and make improvements in organizations; it's an enormous body of work.
What about access recommendations? We're moving on from pure reporting to, "Hey, system. You're supposed to be smart. You're supposed to have intelligence. Make some recommendations for me." What can we do? Again, let's take a persona-oriented approach; there's so much we could talk about.
Requesters and Approvers-- what kind of recommendations can we make to a business user making a request, or to an Approver approving a request? The same question applies to an Attester, somebody who has to certify access or certify that you're still on the same team. Can the system help a bit with that?
Exception Remediators. Oh dear, there are a lot of violations on my plate this week: "Hey, Identity Manager, can you help me to manage those violations and make recommendations around remediation?" And the Role Administrator: "Can you recommend some new roles that I can put in place that will make everybody's life easier and take some of the pain out of granting access, out of reviewing and certifying access?"
The traditional view of what we can present from the access management point of view is: who's got the access? What, when, why, and so on. Let's try to move to a situation where the system is saying, "Hmm, maybe this person should have this access, or should not have this access."
Let's have a look at some things we can do here. If you scratch the surface of machine learning, which is a technology as we've discussed, what people really want from it is intelligent, adaptive automation: making their life easier. That's what they want.
Here's a Requester. He's logging into the web portal, and the product offers this as standard: let's offer them an express checkout. Let's minimize the number of clicks he's got to make to find the access he wants and then move through to submitting that request. On the express checkout, there are several things we can offer, and we're going to run through each of them.
The first one we'll have a look at is the notion of popular items. You click into the popular items section, and what we're seeing here are items that this person can request that are commonly requested by people in his team, in his organization, or people who report to the same manager-- the so-called peer group. These are items that the people in your team have requested. Some of them you may already have, some of them you may not be authorized to have, but definitely some of them you might want to request. That's a great way to recommend to a Requester things that they're likely to need.
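Just to illustrate the peer-group idea-- my own simplified sketch with hypothetical people and items, not the product's actual algorithm --you could imagine ranking the items held by people who report to the same manager, minus anything the requester already has:

```python
from collections import Counter

# Hypothetical data: who reports to whom, and what access each person holds.
manager = {"rob": "eva", "gert.jan": "eva", "simon": "eva", "hicham": "marc"}
access = {
    "rob":      {"VPN", "CRM"},
    "gert.jan": {"VPN", "CRM", "Finance-Reports", "SharePoint-Sales"},
    "simon":    {"VPN", "Finance-Reports", "SharePoint-Sales"},
    "hicham":   {"Accounts-Payable"},
}

def popular_items(requester):
    """Items held by the requester's peer group (same manager), most common first,
    excluding anything the requester already has."""
    peers = [p for p, m in manager.items() if m == manager[requester] and p != requester]
    counts = Counter(item for p in peers for item in access[p])
    return [(item, n) for item, n in counts.most_common() if item not in access[requester]]

print(popular_items("rob"))  # [('Finance-Reports', 2), ('SharePoint-Sales', 2)]
```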
Here we move on to another recommendation for the Requester, which was also in the express checkout list: "Hey, here are things that your colleagues have." Clicking through, you would find a list of your colleagues. Drilling into a colleague, you then see a list of the access that colleague has. Again, this is cutting the data another way: rather than attacking it by requestable item, we attack it by colleague.
So if I work with Gert Jan, and I know Gert Jan has got access that I don't, I can very easily get to this point, request the access that Gert Jan has that I don't have and that's likely to help me, and then move through to a very efficient remediation of that problem and get on with my job, which is what we want to achieve.
The third point here is what I call access bundles. In Identity Manager they're referred to as request templates-- you can see "choose a template." They're really bundles of access that are probably relevant to me. So if I work in finance, the finance bundle is going to be relevant: a little package of requestable entitlements. You might say, "Well, what's the difference with a role here?" The difference is that it's a much less formalized concept. It's easier to put in place. As we know, some organizations on their maturity curve of identity and access management aren't quite there yet with roles. Let's recommend a little bundle of access. The items all go for approval, so it's all audited, tracked, and manageable in terms of renewals, unsubscribing, etc., but it's very easy for the Requester to get the access.
Let's switch persona now to the Approver. I've got a bunch of approvals that I need to do. Identity Manager already offers some good information around the risk associated with the request, historical information, and perhaps whether any of these entitlements are involved in compliance violations-- something to consider. But maybe, and this is something that's available in the product right now, let's make a recommendation to the Approver around what the system thinks. You see here, the system has put an X: it thinks that's maybe not a good idea. It's up to you. In the same way it's up to the human being whether he brakes or accelerates in his autonomous self-driving car, it's up to you whether you click approve or deny. But the system, based on-- again, in this case --peer group analysis, which I'll explain in a minute, is saying, "Eh, on balance, that's probably not a good idea."
Here's an example for an Attester-- always thinking about the persona. So for an Attester, Identity Manager says, "Yeah, that kind of looks OK. Based on my analysis, Hicham probably should have accounts payable access." What's behind this? Why is Identity Manager giving these recommendations? You can see that, as part of the workflow, it has automatically calculated a peer group evaluation for that entitlement based on people like the recipient-- that's right, it's based on the recipient rather than the Requester, because the Requester could be your manager. So the analysis is directed at the person who's going to receive the access. Sometimes it will say it's good, sometimes it will say it's bad, and of course you can tune the threshold.
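As a rough illustration of what a peer group evaluation with a tunable threshold could look like-- again, my own simplified sketch with invented data, not the product's actual calculation --think of it as asking: what fraction of the recipient's peers already hold this entitlement?

```python
# Do the recipient's peers already hold the requested entitlement? (invented data)
peers_with_entitlement = {"colleague-1": True, "colleague-2": True,
                          "colleague-3": True, "colleague-4": False}
THRESHOLD = 0.5  # tune this to make the recommendation stricter or looser

score = sum(peers_with_entitlement.values()) / len(peers_with_entitlement)
recommendation = "looks OK" if score >= THRESHOLD else "flag for review"
print(f"peer-group score {score:.0%}: {recommendation}")
```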
So we have a situation here where the system is starting to make recommendations to the Requester and so on. Perhaps for certain low-risk access you're willing to let the system make the choice itself. You could do that. My feeling, as I was discussing earlier, is that it's still early days with this technology and that most customers will rather go for, "Hey, recommend this, and let me decide."
An Exception Approver-- so what kind of recommendations or guidelines will the system put in place for an Exception Approver? If I log into Identity Manager as an Exception Approver, I've got some work to do. Potentially, I can validate, approve, or deny, but I can also resolve violations, and that's a very powerful capability. So the Exception Approver is going to be interested in: what is it that I have to resolve? How did the person get their access? And what's the impact going to be if I move along this workflow and actually remove that access from the person? And of course, Identity Manager, thanks to its knowledge of the origin of entitlements-- why you have the access --knows how to actually remove access in a correct, process-respecting way. In one case the request will be aborted; in another it will just be deleted, because it was a direct assignment. Those recommendations, including the loss-of-entitlements information, are all available in the product to guide and recommend a behavior, while of course ultimately allowing the Exception Approver to make his own decision. But we're guiding him based on all the analytics that Identity Manager is running in the background.
In terms of access recommendations, what about the Role Administrator persona? The Role Administrator is classically interested in role mining: have a look at all the entitlements out there, and recommend roles that I can create to group those entitlements together and make everybody's life easier-- easier for Requesters, easier for Role Owners. Manage that access in a coherent way, so you can have a Role Owner but do the entitlement [INAUDIBLE], which gives you that separation. These are all good reasons to do this kind of thing, and of course Identity Manager has the capability to do this kind of role mining. I would also mention that if you look at our integrations page at OneIdentity.com/integrations, you will find integrations with other analytics systems, where we can import roles from another system specializing in that. We have technology partnerships where that kind of integration has been done, so that's an option as well.
Moving on to access insights, same question: who's it for? Who's going to benefit? Auditors will be interested in having insight into access: the what, the when, the why of the access that's been granted. Business owners of roles, of permissions, of organizations are going to be interested too. And what I'm going to look at here is that question of "show me": show me star by star. Show me roles by department. Show me risk by request. Show me requests by Approver. Show me requests by location. There are all different ways to cut the data, and Identity Manager makes it very easy, as standard, to get that data out of the product. All you have to do is think about who you want to show it to and what precisely those star values should be. The data is there for you.
Here's an Auditor persona. He's interested in the what, who, and why of access. So here's an entitlement-- it's called CRM, some entitlement in the system. What is it? It's an entitlement. Who's got it? Simon's got it. Why has he got it? The little shopping cart: because he made a request that went through and was approved.
Here's another entitlement. What is it? It's a business role. Who's got it? Simon's got it. How did he get it? Through dynamic membership. All of this analytical information is available as standard in the solution, and it has been for many years now. It's interesting to see some other vendors in the market suddenly discovering the "why" of access and making a big fuss about it. Perhaps at One Identity we don't make enough fuss about what we do, and perhaps we should do more of that. Fuss-- I'm making a fuss.
So here's an entitlement. It's an AD group, as you can see. Who's got it? Simon's got it. How did he get it? It's a direct assignment. Direct assignments are nuanced. Was it a direct assignment made in Identity Manager, or was it a discovered direct assignment, a native assignment from the AD system? That information isn't available right there, but it is on the accompanying sister screen. You can tell whether it was a direct native assignment or a direct assignment within Identity Manager, which is one step closer to being acceptable, although any kind of manual action is something that deserves to have an eye kept on it.
Let's move on then from the Auditor. What about star-by-star-type considerations? Auditors and business owners are going to love things like requests by location. Who's making the most requests? Oh, look, it's the annoying UK crowd. They're making so many requests for stuff. It's good to know. Now, of course, we can drill down in the solution to find out who they are and what they're making requests for. So that's really interesting information.
Just another example of star by star: risk by department. Where are my riskiest departments, in terms of the risk as it's aggregated by Identity Manager across all the access that the identities in that department have? You can see the red ones are the riskier ones, the ones with more critical access, the access that could cause more harm. That's the meaning there.
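Just to illustrate that kind of roll-up-- a toy sketch with made-up risk values, not how Identity Manager actually computes its risk indexes --you could imagine rolling entitlement risk up to people and then to departments like this:

```python
from statistics import mean

# Invented risk values per entitlement and invented assignments.
entitlement_risk = {"CRM": 0.2, "Domain-Admins": 0.9, "Finance-Reports": 0.6, "VPN": 0.1}
person_access = {
    "simon":  {"CRM", "VPN"},
    "rob":    {"Domain-Admins", "VPN"},
    "hicham": {"Finance-Reports"},
}
person_department = {"simon": "Sales", "rob": "IT", "hicham": "Finance"}

# Roll up: a person's risk is their highest-risk entitlement; a department's risk
# is the average over its people. Other aggregations (sum, max) are equally plausible.
person_risk = {p: max(entitlement_risk[e] for e in ents) for p, ents in person_access.items()}
dept_scores = {}
for person, dept in person_department.items():
    dept_scores.setdefault(dept, []).append(person_risk[person])
print({dept: round(mean(scores), 2) for dept, scores in dept_scores.items()})
```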
And of course we can drill down into that again. So in Identity Manager we've had these capabilities, again, for many years. Here's a question: in your experience, how many customers are really exploiting this very powerful risk information in the solution? If we were at a real UNITE conference, I would pose that question, and then I would really hope people would come up afterwards, or we'd have a discussion about it. But that's certainly something.
However, if they're not making use of this type of risk information, why would we think they would make use of any other kind of fancy machine-learning-generated metrics that we're going to put in the product? OK, so I'm just asking the question. I think we should be making more use of the kinds of metrics that are already available, the ones I'm showing here. Perhaps we're not doing as good a job as we should at articulating their value to our end customers. So there's something to think about.
Here's another star by star, roles by department, but rather than a heat map format, it's coming to us in this kind of nice, navigable graph format that I can move through. I can take my business role and see where it's being used-- usage information. Where is it being used across departments, across locations, across cost centers, and so on?
And I can look at this very easily and say, "That doesn't seem normal. Why would people in sales need access to my developer business role? That seems odd to me." Any kind of anomaly being surfaced there is super useful information.
How could we go a step further with this? How could we have this pop up as an alert, right, to say, "Hey, Rob. It doesn't seem normal to me, the system, that people in sales would have access to your business role"? Is that the next step to go for in metrics? I find it interesting that once you start to look at, and just think a little bit about, the metrics the product is making available, you immediately start thinking about how you can go that one step further.
Here's another piece of information for a Role Administrator. This report will be shipping as standard in [INAUDIBLE], but if you need a copy you can get it now; it's a little preview here. A similarity matrix-- it's kind of like role mining. I say Role Administrator, but again, it could be a help desk use case, a help desk persona. I'm getting a call from Rob because he has no access to this system, but his colleague is working fine. So let me look at a similarity matrix to see what exactly the differences are and where. Maybe I just need to align them. It's a really nice kind of report that's, again, just below the surface. We'll be shipping it as standard in the not too distant future.
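To give a feel for what a similarity matrix over access could look like-- my own toy sketch using Jaccard similarity on invented entitlement sets, not necessarily the measure the report itself uses --compare each pair of users, then look at the concrete differences:

```python
# Invented entitlement sets for three users.
access = {
    "rob":      {"VPN", "CRM"},
    "gert.jan": {"VPN", "CRM", "Finance-Reports"},
    "simon":    {"VPN", "Finance-Reports"},
}

def jaccard(a, b):
    """Share of entitlements the two users have in common."""
    return len(a & b) / len(a | b)

users = sorted(access)
print(" " * 10 + "  ".join(f"{u:>10s}" for u in users))
for u in users:
    row = "  ".join(f"{jaccard(access[u], access[v]):10.2f}" for v in users)
    print(f"{u:10s}{row}")

# And the concrete gap between two similar users:
print("gert.jan has, rob lacks:", access["gert.jan"] - access["rob"])
```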
So, just to wrap up on this, what would I say about futures here, or takeaways? Oh dear, it's all mucked up. I think the important points are that Identity Manager provides a lot of the analytics goodness I've been showing. Some of it we've had for a long time; some of it is relatively new. The peer group analysis and its use in the web portal is something that will hopefully be of particular interest.
The thing then is, once you have a look at the metrics that are available, to think straight away in a persona sense. I think that's very helpful-- if you think differently, let me know, but I think cutting it by persona is really helpful. For this type of profile, what data is interesting? What data is pertinent, speaks to, is relevant to that persona? Every customer will have their own way of thinking about the data they would specifically like to see.
And all we've got to do is the usual Identity Manager paradigm: copy, paste, modify. Take that existing data, those existing reports, those existing metrics, get them up on the web portal for those personas, and have that information right at their fingertips. That's super, super useful. That's the whole point of this analytics and machine learning direction we're taking: define our KPIs and know what they are.
Of course, KPIs exist at all different levels-- operational, and in terms of access risk. You hear people talking about RPIs, Risk Performance Indexes, I think: how am I doing on risk? That's something we can work out. You can't have a system automatically make things better if you don't first have a measurement of what it is you're trying to improve. It's got to start with the definition of KPIs, and, again, I would claim that we're still somewhat immature in how we roll out this type of project around defining KPIs. I think we need to do a better job there.
All the more so as this will feed into the machine-learning-type futures that we have coming, which will hopefully allow us to improve those KPIs over time. And there are lots of metrics there. Let's look at the low-hanging ones. Let's take advantage of the out-of-the-box functionality that's already there to give the most value to our projects and to our customers.
So, just to finish on this, let us know if you have thoughts around metrics, if you have ideas for your customers, for adaptive automation or access recommendations, or things you think are missing. We're really open-- I'm certainly open --to having a chat with you. Just get in touch. You know where we are, and that would be, I think, a really interesting way to move forward with this.
So with that, thanks for your attention and your time. I hope it was useful. Let's stay in touch. Thank you. Goodbye.