Insider risks aren’t just a security problem (UNCOVERING HIDDEN RISKS – Episode 3)
Published Mar 04 2021 09:00 AM
Microsoft

Host:  Raman Kalyan – Director, Microsoft

Host:  Talhah Mir – Principal Program Manager, Microsoft

Guest:  Dan Costa – Technical Manager, Carnegie Mellon University

 

The following conversation is adapted from transcripts of Episode 3 of the Uncovering Hidden Risks podcast. It has been lightly edited to make the conversation easier for readers to follow. You can view the full transcripts of this episode at:  https://aka.ms/uncoveringhiddenrisks

 

In this podcast we explore how partnering with Human Resources can create a strong insider risk management program, a better workplace, and a more secure organization. We uncover the types of HR data that can be added to an insider risk management system and how artificial intelligence can contextualize that data, all while respecting privacy and keeping in line with applicable policies.

 

RAMAN:  Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team.

 

TALHAH:  And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team.

 

RAMAN:  And this is episode three with Dan Costa, and we are talking about how you bring HR, legal, privacy, and compliance into building an effective insider risk management program.

 

TALHAH:  Yeah, super important. This is not like security where you can just take care of this in your SOC alone. You need collaboration, and he's going to tell us more about why that's critical.

 

RAMAN:  Yeah, it was awesome talking to Dan last week. So, let's do it.

 

TALHAH:  You gave a great example of an organizational stressor, like somebody being demoted or put on a performance improvement plan. You can also have personal stressors outside of work, which you've talked about openly in a lot of your guidance. When you look at organizational stressors, a lot of times they reside with your human resources department, right? So this is a place where you have to negotiate with them to be able to bring this data in. Talk to me about that. How do you guide teams that are looking to establish these connections with their human resources department and negotiate this kind of data for insider risk management purposes? And talk about opportunities you see to potentially infer sentiment by looking at communication patterns, physical movement patterns, digital log-in patterns, and things like that. How can you identify these early indicators, if you will?

 

DAN:  So let's start with how we bridge the gap between the insider threat program and stakeholders like human resources, because Talhah, you're spot on. They're one of the key stakeholders for an insider threat program, really in two respects. One is they own a lot of the data that will allow us to gather the context that we can use to augment or supplement what we're seeing from our technical detection capabilities, to figure out was that activity appropriate for the job role, the responsibility of the individual associated with the activity. How can we pull left relative to an incident progression and find folks that might be experiencing these organizational stressors, right? That's data that our human resources stakeholders have and hold. We've seen insider threat programs over the years struggle with building the relationships between stakeholders like human resource management. A lot of the challenges there, from what we've seen, come down to a lack of understanding of what it is that the insider threat program is actually trying to do.

 

In many cases, the insider threat program isn't necessarily without fault in making that impression stick in the minds of human resources. So this goes back to the insider threat program's not trying to be duplicative or boil the ocean, or carve off too big of a part of this broader enterprise-wide activity that needs to happen to manage insider risk. In that early relationship building and establishment, there's an education piece that has to happen. Human resources folks aren't spending all day every day thinking about how insiders can misuse their access like we are, right? So much of it is these are the threats that our critical assets are subject to, by the nature of our employees having authorized access to them. We understand that this isn't always the most comfortable subject to talk about, but here's a myriad of incident data that shows where vulnerabilities existed within a human resource process, or a lack of information sharing between HR and IT enabled an insider to carry out their attack or to evade detection for some significant amount of time.

 

So much of it just starts with education. Once we've got them aware of the fact that this is something the organization has to consider as a part of its overarching security strategy, we need to help them understand the critical role that they play. Understanding how we use contextual information. Understanding how we don't use contextual information. And helping them understand what an insider threat program is really designed to do: help them make better data-driven decisions faster by giving them access to analysis that can only be conducted by folks who can take the data that they have and stitch it together with IT data, with SOC data, with information assurance data, with the risk register that's owned by our chief risk officer. They probably don't want to be spending all of their time writing analytics and building the relationships with IT and legal to facilitate some of that stuff.

 

That's what the insider threat program is here for. So, helping them understand that this is a mutually beneficial relationship: the data that they provide will help the organization more proactively manage insider risk, and they are stakeholders in that they are potential recipients of the results of the analysis that the insider threat program itself will conduct. Helping them better understand how to make refinements, enhancements, or improvements to the onboarding and offboarding processes. Helping them understand when it might be time to make a change to employee compensation strategies within the organization, or to how the employee performance management system is leveraged. The insider threat program, once it's up and running, bringing in all the different data and engaging with all the different stakeholders, can help highlight and emphasize where those processes are working and where they can be refined.

 

It's hard for the stakeholders to do that work on their own. So there's a fine line to walk there too, right? Which is, you can't go in and say, "We think we can be doing your job better than you can because we have data scientists and all this other cool data." So a lot of effective insider threat program building, from a relationship-building perspective, really comes down to having an insider threat program manager who has that organizational savvy, who can find the right ways to build and establish these relationships within their organization. So it's not easy by any stretch of the imagination, but we're seeing lots of organizations be successful at helping their stakeholders understand the threat, helping their stakeholders understand the two-way street of, "This is the information we need. Here's why we need it from you. Here's why you're the only part of the organization that can help us with this, and here's how we think it can be beneficial to you and the organization more broadly."

 

RAMAN:  I think this is the kind of conversation that Talhah and I had when we first got into this space together. He'd already been in it, but I had come at it from a product perspective, as we think about helping our customers tackle these issues. One of the things we talked about early on was hey, look, this is, like you mentioned, Dan, this is a human problem. This is the employees that you're dealing with. These are people that are part of your organizational family, right? You just can't set something up that starts investigating people, snooping on them, and doing that sort of thing. You got to take a little more holistic viewpoint here. The things that Talhah and I talked about were around insider...

 

This is why insider risk makes more sense than insider threat, because as you think about HR, they're stewards of the corporate culture, right? They're the ones responsible for helping build a corporate culture of inclusion, where people feel wanted and rewarded and are building towards a positive outcome for the organization. For them, the program itself can highlight, to your point, the risks that might impact that organizational health, and in a way it actually helps support a better, stronger organization by pointing out areas that are vulnerable [inaudible] that they can go after and build training around, that they can go out and say, "Hey, this is something that we should be doing," or, "People aren't feeling supported and so they're doing things that they shouldn't be doing." Now rather than treating the symptom, treat the underlying issue.

 

DAN:  That's a great point, Raman. I would add to that and take it a step further: I think stakeholder parts of the organization like human resources don't intuitively or necessarily think about the things that they do as influencers of increased security or resilience within the organization. So much of that education is helping them understand, "Look, organizations that have better management practices have higher degrees of employee engagement and higher degrees of perceived organizational support, amongst all the other great benefits that they experience." They also experience fewer security incidents.

 

So, what you're doing and these practices are security controls, and we really have to start to help our organizations broaden their understanding of what constitutes a security control. And, "Oh, by the way, HR, if you know that these are things that you'd like to be doing just to increase morale, we can amplify that ask and message up the chain when it comes time for budget requests, by saying, 'Hey, not only is this a good thing for us to do from a talent management perspective, but it's a key security strategy for our organization.'" Another way for those two disparate parts of the organization to work together in some mutually beneficial way.

 

A couple of years ago, we did a study called The Critical Role of Positive Incentives in Mitigating Insider Threats. It was really looking at just this, Raman: could we establish a relationship between levels of connectedness at work, employee engagement, and perceived organizational support, and a reduction in the number of insider incidents that organizations experience? We actually leveraged our open-source insider threat information sharing group for a lot of that work to conduct surveys. What we saw was a positive relationship: increases across those dimensions of connectedness at work and perceived organizational support corresponded with a decrease in the number of insider incidents that organizations were experiencing. So the key takeaway from that study was that better places to work ended up also being more secure organizations, particularly as it pertains to insider risk.

 

Now, we're trying to continue that work and drive that towards a causal model and really being able to show that these are the root causes. These management practices, these HR practices, by putting them in place, you cause a reduction in insider incidents. So it's an area of ongoing research, but intuitively it just makes sense, right? So much of what we're trying to do in 2020 with insider threat programs is help folks recharacterize what constitutes security controls and what constitute valid response options for the things that insider threat programs should be on the lookout for.

 

TALHAH:  I assume there's an inverse correlation then between an organization being potentially disconnected because of things like work from home and what's happening in today's environment and an increase in potentially insider risk activity. Is that a fair extrapolation to make?

 

DAN:  Well, I think it's a fair hypothesis to consider testing, right? It's the opposite side of the coin. I think now would be a fantastic time to be making sure that we can collect evidence and data that would show those data points trending in maybe a different direction, right? As our organizations are experiencing unprecedented volumes of personal and professional stressors across our workforces, what's that doing to the rates of occurrence, or the frequency with which we're experiencing insider misuse cases? It's the kind of thing where it takes organizations a while to collect that data, right? So, I don't know that we're going to know for sure until we're maybe a little bit further out, because these incidents tend not to evolve over the course of days or weeks, but usually over several months, if not years in most cases. So it's one of those things, right? I think there are going to be far-reaching implications, particularly from an insider threat perspective, that we'll be able to attribute to just how drastically everyone's normal changed over the past several months.

 

TALHAH:  Dan, we talked about stressors and a lot of times we hear customers talk about insider risk management. It really boils down to a game of indicators. When you have the right set of indicators and the ability to orchestrate over that, correlate over that, that's when you start to at least do the first part of the whole problem, which is to identify them. One of those indicators you talked about are these stressors, and you talked about the importance of partnering with your human resources organization, but how do you think about the potential to infer those stressors through communication channels or other means of looking at certain indicators in an environment to see if somebody is potentially disgruntled? We'd love to get your thoughts on that based on what you've researched.

 

DAN:  Yeah, certainly. Leveraging text analytics and natural language understanding has been a hallmark of some of the research that we've done in this space. We've got a multi-part blog series that talks about how to apply text analytics to various stages of insider threat detection. What you'll find in there is a real strong emphasis and focus on the detection of the concerning behaviors, activities, and stressors that can precede attacks. So, the state of the practice a few years ago was keyword-based searches, right? These big buckets of words that we associate with topics like money, or with code-named sensitive projects within organizations, and every time we see one of those words being used, let's generate an alert and have an analyst dig in there.
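As a minimal sketch of the keyword-based approach Dan describes, and to show why it generates so much analyst work, matching against a watchlist of terms is about this simple (the terms and messages here are invented for illustration, not from any real program):

```python
# Hypothetical watchlist of terms an analyst team associates with risk topics
# (money stress, a code-named project). Purely illustrative.
WATCHLIST = {"debt", "payout", "project-aurora"}

def keyword_alerts(messages):
    """Return (message_id, matched_terms) for every message that hits the watchlist."""
    alerts = []
    for msg_id, text in messages:
        hits = {term for term in WATCHLIST if term in text.lower()}
        if hits:
            alerts.append((msg_id, hits))
    return alerts

messages = [
    ("m1", "Lunch on Friday?"),
    ("m2", "I'm drowning in debt and need a payout soon."),
]
print(keyword_alerts(messages))  # only m2 trips the watchlist
```

The weakness is obvious: there is no sense of context or tone, so every casual mention of money fires an alert, which is exactly the gap the AI/ML approaches Dan describes next are meant to close.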

 

Over the past several years, we've seen the state of the practice move past those simple keyword-based searches and start to leverage AI and ML to help with natural language understanding that can better contextualize that, deal with the nuances of electronic communication, and then again, form the foundation for features that comprise a broader model that's really a model of models, that helps us understand the data that we're seeing in aggregate across our organizations, relative to what we constitute as our highest priority risks to our critical assets.

 

This goes back to our HR friends, and also our friends in legal and privacy, because this can be a tough pill to swallow from a legal, privacy, and civil liberties protection perspective for lots of organizations, right? This is the next step: we're going to start reading our employees' electronic communications to figure out if they're talking about having money problems. This goes back to the need to educate those stakeholders in terms of what it is that we're actually trying to do, what we're not trying to do, who gets access to that analysis, and what the allowable response options are with regards to the end products or end results of that analysis.

 

And helping them understand that, "No, no, we want to feed this back to you all so that we can help you all and support you all in your decision-making processes about what it is that we're seeing on our organization's networks and systems." It's something that we're seeing lots of organizations start to incorporate after close consultation and collaboration with their legal, privacy, and civil liberties folks. Certainly, if you're considering this for large organizations that operate outside of the United States, you're going to have to make sure that you're working within all of the different legal jurisdictions in which your program might be operating, to understand what is and is not allowable in those jurisdictions, because the privacy protection rules obviously change depending on operating location.

 

RAMAN:  Yeah, that's a great point. I mean, I had a conversation with a customer back in springtime, and the CSO was all in on, "Hey, I want to go and identify these risks, use an automated tool," et cetera. About a week later, I get another phone call, which is, "Hey, now my chief privacy officer wants to have a conversation with you, and she needs to better understand how information is being protected, how the PII of individuals is being protected." Because to your point, I could generate an alert, but that doesn't mean that person is necessarily doing something wrong. It's just an alert that's popping up. Now at that point, I'd want to protect their information. I want to make it private, anonymized so that there isn't this bias, this discrimination that might happen at the analyst level.
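The anonymization Raman describes might look, in a minimal sketch, like replacing user identities in alerts with stable pseudonyms before analysts triage them, so investigators reason about behavior without seeing who is involved (the field names and keying scheme here are illustrative assumptions, not any product's implementation):

```python
import hashlib
import hmac

# Hypothetical secret held outside the analyst tooling; rotating it
# re-keys all pseudonyms.
SECRET = b"rotate-me"

def pseudonymize(alert, key=SECRET):
    """Return a copy of the alert with the user replaced by a stable HMAC-based token."""
    token = hmac.new(key, alert["user"].encode(), hashlib.sha256).hexdigest()[:12]
    redacted = dict(alert)
    redacted["user"] = f"user-{token}"
    return redacted

alert = {"user": "alice@example.com", "event": "bulk-download", "count": 412}
print(pseudonymize(alert))  # same activity data, identity replaced by a token
```

Because the token is deterministic for a given key, analysts can still correlate repeated activity by the same (unnamed) person, and a separately controlled re-identification step can be reserved for cases that clear an investigation threshold.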

 

Once we went through all of the different ways to protect information and privacy, anonymization pieces, that convinced them and they said, "All right, great. We're going to roll this out worldwide across multiple divisions in multiple countries." So, you bring that up. That's a great point there. The other point you made, Dan, was around there's multiple sources here. I think Talhah touched upon this earlier, which is customers struggle to figure out how do I get started, right? You talked about, "Hey, you need sentiment analysis. You need contextual information. You need data sources beyond just..." For example, a lot of organizations say that they have an insider risk program, but yet they've just implemented DLP and that's it. It's like, "Well, that's only one piece of the puzzle and that's going to create alert fatigue for you."

 

When we talk to them and we say, "Hey, you need not only the endpoint indicators, but you need signals from sentiment. You need signals from maybe HR data and other sources." They're like, "Wow, that's a lot for me to try to figure out and [crosstalk 00:19:43], you know what I mean?" I think getting started quickly and, to your point, scoping it to the risks that are most important, then quickly getting started on tackling those, scoping the right people, and involving the cross-organizational parties, is probably the foundational step that most organizations...

 

DAN:  Yeah. That "we have a DLP program, or we're trying to expand it," that's a very common pattern that we've seen in industry. One of the places where we've helped organizations get started in that space, with that next data source to incorporate, was simply a list of known departing employees provided by human resources. Just knowing who is departing from the organization at any given time gives us the opportunity to supplement or augment what we have in the data loss prevention tools, so that we can prioritize alerts or hits associated with folks who have announced their resignation. We've found that most theft of intellectual property cases that we've studied tend to occur within a 30-to-90-day window of an individual announcing their departure from the organization.

 

One of the earliest ways to address alert fatigue from something like DLP is to just grab that tiny piece of context owned by human resources, right? It's focused. It's specific. We can point to data that provides a rationale or justification as to why we think we need access to this information, how we'll use it, and how we'll protect it. It gives us an opportunity to start small but still make a big impact and show, "Look, HR, because we were able to incorporate this information, we've reduced our false positive rate by X or Y percentage, and we were able to increase our ability to recover intellectual property as it was being targeted for exfiltration by our departing employees." So, it's finding those use cases that are important to our organizations that you can back up with empirical data, starting small, taking those quick-win, high-impact solutions, and finding ways to build on those successes to establish broader relationships.
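A minimal sketch of the enrichment Dan describes, with hypothetical records, could prioritize DLP alerts that fall within 90 days of an announced departure (the record shapes and the HR feed are invented for illustration):

```python
from datetime import date, timedelta

# Hypothetical HR feed: user -> date their departure was announced.
departures = {"jsmith": date(2021, 3, 1)}

def is_high_priority(alert, window=timedelta(days=90)):
    """True if the alert's user announced a departure within `window` of the alert date,
    matching the 30-to-90-day exfiltration window Dan cites."""
    announced = departures.get(alert["user"])
    if announced is None:
        return False
    return abs(alert["date"] - announced) <= window

alerts = [
    {"user": "jsmith", "date": date(2021, 3, 20), "event": "usb-copy"},
    {"user": "ngarcia", "date": date(2021, 3, 20), "event": "usb-copy"},
]
print([a["user"] for a in alerts if is_high_priority(a)])  # only the departing employee
```

The point of the example is how little HR data is needed: one list of names and dates turns an undifferentiated stream of DLP hits into a short, defensible triage queue.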

 

TALHAH:  Another thing that I remember a customer of mine saying is that DLP is just one piece. Another way to think about it is that because DLP is really about data loss, if you're focusing just on that, then as far as your insider risk program is concerned, you're automatically focusing just on the confidentiality-type risks. What about fraud? What about sabotage? What about physical issues that you might come up with, right? So, you have to take that holistic approach, and then from there start to prioritize and figure out what you want to try to target.

 

RAMAN:  I was going to say, there are also other parts of it, like you mentioned, such as workplace harassment, right? You have other risks that are more human-oriented that DLP can't necessarily identify. And the thing you just talked about, what happened 60, 90, 180 days prior? That's not going to get picked up by a transactional tool that just looks at today's data. You need that historical data to go back and reason over, right?

 

DAN:  Yeah. I mean, so much about what we're doing with insider threat detection is about anomaly detection, right? An understanding of a deviation from a defined process or the ways that things normally happen as it pertains to authorized use of our organization's critical assets. So Raman, you're spot on.

 

If we're trying to determine deviations from normal, we need the capability to understand what normal has looked like historically, right? So, it's finding how far back is long enough to look to establish a pattern of normal. But as we've seen over the past couple of months, it's also having an understanding of when normal is going to change, and knowing how long it's going to take for us to establish what a 'new normal' looks like.
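A minimal sketch of that baseline idea, with thresholds and counts invented for illustration: learn "normal" from a trailing window of per-day activity counts, then flag days that deviate sharply from it.

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag `today` if it sits more than z_threshold standard deviations
    above the trailing baseline built from `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is a deviation
    return (today - mu) / sigma > z_threshold

# Hypothetical trailing week of per-day file-access counts for one user.
baseline = [12, 9, 14, 11, 10, 13, 12]
print(is_anomalous(baseline, 15))   # ordinary variation
print(is_anomalous(baseline, 140))  # flagged
```

The sketch also makes Dan's point concrete: when everyone's behavior shifts at once, as it did with remote work, the `history` window no longer represents "normal," and every user looks anomalous until the baseline is rebuilt.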

 

That's something that we saw a countless number of insider threat programs struggle with over the past few months, which is every baseline that they were relying on as the foundation for an anomaly detection strategy was completely turned upside down and onto its head and rendered almost ineffective or useless when everybody fundamentally changed their normal in the way that they normally conduct authorized access to their organization's critical assets. The last several months have really shone a light on the fact that we've got to get better at being able to find ways to articulate and describe what normal or expected is that might not necessarily have to rely on six months or years’ worth of data.

 

Where do we start there with policies and procedures? How do we make it easier for our technical detection strategies to mirror our policies and procedures? When everybody changed their policies and procedures around remote work and authorized use of information technology systems, insider threat programs really struggled to keep up with those changes and to make sure that the detection and prevention strategies caught up. There are lots of lessons learned over the past few months about how we do that and where our opportunities for improvement are as a community.

 

RAMAN:  We did a survey here at Microsoft with well over 200 CSOs focused on insider risk. One of the things that we found was that 73% of them said they're planning on spending more on insider risk technology now with COVID than they were before. I think this highlights the point that you just made: the systems and processes you were using nine months ago, if you even had them, aren't necessarily relevant today, right? Because people are accessing data from endpoints that don't have agents on them, right? You have people that are working in new ways, sharing things with others through new mechanisms. I mean, just look at this particular podcast, videocast. We're doing it from our houses, right? I've got a courier coming to pick up a SanDisk card here. It's one of those things where it's challenging for organizations in this new world, right?

 

DAN:  Yeah, certainly. So now that we've got the spotlight on the insider threat problem, particularly with everything that's going on in the world, that highlights the need for organizations to be intentional about where they put that expenditure, right? This goes back to where we started this discussion. You've got to think through... Now that you've got evidence that suggests that maybe you've got some gap areas in your insider threat detection or prevention mechanisms, how are you going to prioritize where your next security dollar goes for insider threats? To get that answer right, you've got to take a risk-management-based approach to this. You've got to understand what's currently in place, and you've got to be a little bit forward-thinking here: when will things go back to the way that they were, or something somewhat resembling the way that they were? What lessons learned are we going to incorporate from the last several months into that new normal?

 

So I'm happy to hear that 73% of CSOs intend to spend more on insider threats, but I'm also kind of terrified for them, because I want to make sure that they understand what they're actually trying to do with that security investment: making sure that it's aligned with the actual risks to their organizations, and being done in a way that is cognizant of their actual risk, what their current capabilities are, and how that risk landscape might change and shift even within the next calendar year. It's a hard problem to juggle, and it's a continuously evolving process and a continuously evolving problem for organizations.

 

RAMAN:  Dan, thank you so much. This was an awesome conversation today. You brought a lot of insights. How can people get more information about some of the research that you all are doing over there?

 

DAN:  Yeah, certainly. For more information, check out our website, Cert.org/insider-threat. You can also contact us at insider-threat-feedback@cert.org. In anticipation of National Insider Threat Awareness Month in September, we're going to be out and about a lot, trying to transition our research into as many broad communities of practice as we can. We'll be blogging to our Insider Threat blog about once a week in September, and stay tuned for the seventh edition of 'The Common Sense Guide to Mitigating Insider Threats,' which we're targeting for a late 2020 release.

 


To learn more about this episode of the Uncovering Hidden Risks podcast, visit https://aka.ms/uncoveringhiddenrisks.

For more on Microsoft Compliance and Risk Management solutions, click here.

To follow Microsoft’s Insider Risk blog, click here.

To subscribe to the Microsoft Security YouTube channel, click here.

Follow Microsoft Security on Twitter and LinkedIn.

 

Keep in touch with Raman on LinkedIn.

Keep in touch with Talhah on LinkedIn.

Keep in touch with Dan at:  insider-threat-feedback@cert.org or https://cert.org/insider-threat
