
Host:  Raman Kalyan – Director, Microsoft


Host:  Talhah Mir –   Principal Program Manager, Microsoft


Guest:  Dan Costa – Technical Manager, Carnegie Mellon University


 


The following conversation is adapted from transcripts of Episode 2 of the Uncovering Hidden Risks podcast.  There may be slight edits in order to make this conversation easier for readers to follow along.  You can view the full transcripts of this episode at:  https://aka.ms/uncoveringhiddenrisks


 


In this podcast we explore the challenges of addressing insider threats, how organizations can improve their security posture by understanding the conditions and triggers that precede a potentially harmful act, and how technological advances in prevention and detection can help organizations stay safe and a step ahead of threats from trusted insiders.


 


RAMAN:  Hi, I’m Raman Kalyan. I’m with the Microsoft 365 Product Marketing Team.


 


TALHAH:  And I’m Talhah Mir, Principal Program Manager on the Security Compliance Team.


 


RAMAN:  We’re going to be talking about insider threat challenges and where they come from, how to recognize them, what to do, and today we’re talking to Dan Costa.


 


TALHAH:  Dan Costa, the man who’s basically got the brainpower of hundreds of organizations that he works with across the world. We’ve been given a chance to talk to him and distill down what some of the trends are, and what processes and procedures you can put in place to manage this risk. Super excited about this, man. Let’s just get right into it.


 


TALHAH:  Dan, you want to just introduce yourself, give a little background on yourself, and Carnegie Mellon and all that stuff?


 


DAN:  Yeah, sure thing. So Dan Costa, I’m the Technical Manager of the CERT National Insider Threat Center here at Carnegie Mellon University’s Software Engineering Institute. We’re a federally funded research and development center solving long-term, enduring cybersecurity and software engineering challenges on behalf of the DOD. One of the unique things about the Software Engineering Institute is that we are chartered and encouraged to go out and engage with industry as well, solving those long-term cybersecurity and software engineering challenges.


 


And my group leads kind of the SEI’s insider threat research. So collecting and analyzing insider incident data to gain an understanding of how insider incidents tend to evolve over time, what vulnerabilities exist within our organizations that enable insiders to carry out their attacks, and what organizations can and should be doing to help better protect, prevent, detect, and respond to insider threats to their critical assets.


 


RAMAN:  That’s awesome. Dan, how did you get into this space?


 


DAN:  Yeah, so I’ve been with the SEI (Software Engineering Institute) since 2011. I came onboard actually to work on the insider threat team as a software engineer, developing some data collection and analysis capabilities for some of our early insider threat vulnerability assessment methodologies. And since 2011, I’ve really gotten a chance to have my hand in nearly every phase of the insider threat mitigation challenges that organizations experience, not only on the government side, but in industry as well. Since 2011, I’ve been able to stand up insider threat programs within the government and within industry, help organizations measure their current security posture as it pertains to insider risk, and try to find ways that organizations can collect and aggregate data from disparate sources within their organization that can help them more proactively manage insider risk.


 


So that’s been hands-on work: rolling my sleeves up and spending lots of time with insider threat analysts in the early years, conducting numerous vulnerability assessments and program evaluations, helping organizations explain to their boards and their senior leadership teams the scope, severity, and breadth of the insider threat problem, and helping folks understand what they already have in place that can form the foundation for an enterprise-wide insider risk management strategy.


 


I’ve been very fortunate since 2011 to really have a hand in almost every aspect of insider threat program building, assessment, and justifying the need to have an insider threat program in the first place. Obviously, since then I’ve had a lot to do with actually collecting and analyzing insider incident data, not only what we have access to publicly, but also learning from how we’ve collected and analyzed data here at the SEI over almost 20 years, and helping organizations understand how they can use their own data collection and analysis capabilities to bolster their insider threat programs.


 


TALHAH:  Awesome. Okay. So Dan, one of the things that Raman and I talked about quite a bit is my own journey in this space. I mean, I haven’t been fortunate to be in the space as long as you have, but I remember when I came into this space a couple of years back, one of the first places I turned to was Carnegie Mellon, and specifically CERT. And one of the places you pointed us towards was this treasure trove of knowledge that you have, that you then sort of complement with the OSIT Group to really drive awareness and cross-learning across different subject matter experts. So I’d love to get your story of that journey of how OSIT came about, where it was, where it is going now, and what it looks like going forward.


 


RAMAN:  And for those listening, what does OSIT stand for?


 


DAN:  Yeah, so that’s a good place to start. It’s the Open Source Insider Threat Information Sharing Group. It’s a community of interest of insider threat program practitioners in the private sector that are all trying to help their organizations more effectively manage insider risk. And the group really is kind of a grassroots activity that was started by the first director of the insider threat center here at CERT, Dawn Cappelli, who I hear you’ll be talking to soon. When Dawn left the SEI to go put her research into practice out in industry, she wanted to establish this community of interest.


 


And this is something that Dawn had been working on even while she was here at CERT, which was: “How do I establish a community of people who are all going down the same roads within their organizations? How can we learn from each other? How can we benchmark? How can we share the challenges faced early on, and how we’re getting past, around, through, and over those challenges?” So in the beginning, the OSIT Group was really just a handful or two of folks that were in the earliest phases of getting insider threat programs off the ground.


 


And over the past six to seven years, we’ve seen the group blossom, really by word of mouth alone, into an organization that currently boasts over 500 members representing about 220 organizations in industry, all building out their own insider threat programs. So because of that community building that was successful early on, and finding time to get together and talk shop with folks that were going through the same things within their organizations, we’ve been able to continue to grow that over the years.


 


And then we mine the knowledge and the experiences gained by the folks that are building their own insider threat programs, and try to find ways to generalize those conversations into resources like our Common Sense Guide to Mitigating Insider Threats, along with a variety of other research projects where we’ve been able to leverage the expertise, insights, and real willingness to experiment and try new things that we’re finding with those insider threat program practitioners.


 


So we’re really there just as stewards of the community. It is governed by members of the group at large. We’re there to facilitate conversations and discussions, make connections, and do what we can to either bring research questions out of those conversations or find opportunities to apply the findings from our research in organizations that are currently working on these insider threat challenges.


 


RAMAN:  When you think about when things first started, the types of challenges that you were facing at the beginning versus the types of challenges you’re facing now with regard to insiders, how have things evolved?


 


DAN:  Yeah, that’s a great question.


 


RAMAN: Is risk different, or what’s evolved in your opinion?


 


DAN:  In the beginning, it was: “What do I call this thing? How do I convince the stakeholders within my organization that I need to work closely with for this to be successful? Information security, human resources, legal. How do I convince those folks to share their time, share their resources, and partner with us to get this off the ground? How do we navigate successfully incorporating legal, privacy, and civil liberties protections into our data collection and analysis efforts?” And those were really the challenges that, a handful of years ago, folks were just starting to wrap their heads around how to address, particularly in the industry space. It’s a little bit different for government insider threat program practitioners, because for cleared populations, not only do you have different expectations for privacy, but you’ve also got a mandate and a requirement here in the United States to have an insider threat program.


 


So in the absence of a requirement like that for industry, getting that initial buy-in, without your organization having had to experience a harmful or loss event perpetrated by an insider, was among the earliest challenges. And that was six, seven years ago. The conversations that are had within that group now are far beyond that. And certainly, as folks come to the group from organizations that are just getting insider threat programs off the ground, they’re asking the same questions, because those are the natural questions to ask to get started. But for the folks that have been at this for several years now and are a little bit further down the road, it’s really interesting to see how those conversations have evolved. Lots of organizations now are trying to think about how we most effectively integrate things like a security operations center, a team of insider threat analysts, our data loss prevention capabilities, and our fraud detection capabilities.


 


How do we make sure that those capabilities we have within our organizations are integrated, not duplicative? What’s the right way to share information between them? How do we see the insider threat program being a force multiplier for managing the employee-employer relationship within the organization? How can we be more proactive in our response strategies, not necessarily figuring out how to recover stolen intellectual property, but leveraging what we have internal to the organization to address the concerning behaviors and activity that might precede that harmful or loss event? So it’s really been a rapid and fascinating evolution over the past handful of years in terms of the types of challenges organizations are taking on within their programs.


 


TALHAH:  So I was going to say, although it feels like there’s clearly been an evolution in this space, at the same time it feels like, compared to combating external adversaries, we’re still very much in the infancy of really getting our hands around insider risk management as an industry. So for those customers that are new, that are coming into this space, that understand that this is a problem, particularly in this day and age of COVID and work from home, what are some of the guidance or tips that you provide? The top three to five things they should worry about to start off on the right footing when it comes to establishing a robust insider risk management program?


 


DAN:  Yeah. Great question, Talhah. You bring up a good point, which is that we’ve made progress as a community, particularly on the industry side, over the past several years, but we’re still seeing organizations, and insider threat programs more broadly, struggle with an identity crisis. It’s hard for organizations to pinpoint exactly what they mean by insider threats, what the insider threats to their critical assets are, and which insider threats to their critical assets they’re actually going to do something about compared to what they already have in place.


 


And the definition of insider threat is so expansive and overarching. Our definition really opens up to the potential for any misuse of authorized access to an organization’s critical assets. That can span theft of intellectual property, it can completely leave the cyber realm and branch out into workplace violence, and it can incorporate things like fraud, IT system sabotage, or even things that aren’t necessarily conducted with malicious intent. Because the scope of the insider threat problem is so broad, we see organizations use that term to refer to a lot of different things from organization to organization.


 


So because the scope of the problem is so broad, we see organizations vary greatly in what chunk of this problem they decide to carve off and try to solve. And compounding that even further, even if we scope the program to one or more of those threat scenarios, let’s take theft of intellectual property for example, there is some prerequisite knowledge that has to be understood within the organization to most effectively address it. What intellectual property are we worried about protecting? Who has authorized access to it? What does a normal pattern of access and use look like for that intellectual property?


So where we tell organizations to start is: know your critical assets. Know and understand what it is that you’re trying to protect from insider misuse. And over the years, we’ve seen lots of insider threat programs make the mistake of trying to answer that question on their own, taking their best educated guess within their organization, and not really reaching out to find the folks that might have ground truth or the best answer for their organization. Trying to do these things in a bubble within an insider threat program is an early recipe for calamity, for either duplicating effort or not finding the best answers for your organization.


 


And also, if you can’t articulate the scope of what it is that you’re trying to protect, you’re going to have a really hard time measuring whether or not you’ve actually been successful at doing the things that you were trying to do. So that’s where we always tell folks to start. We have the Common Sense Guide to Mitigating Insider Threats. We’re on the sixth edition currently and working on the seventh edition now. And there are 21 best practices in there currently that are the foundational things for building an insider threat program.


 


They’re ordered intentionally by importance, and the first is: know your critical assets. Know what it is that you’re trying to protect. And once you’re there, work towards developing a formal insider threat program that engages all of the necessary stakeholders across the organization who can help you understand where your critical assets are, how they’re currently being protected, where the gaps are, and how the organization is interested in investing to buy down risk to those critical assets in key areas.


 


TALHAH:  I love that. I love it. And I know that’s one of the lessons I certainly learned, one of the things that I learned working in OSIT. And the way we frame that is: a lot of companies make this mistake. We certainly tried that approach, which is to try to boil the ocean, and it doesn’t work, right? Learned the hard way. You’ve got to be able to compartmentalize your problem space and say, “Out of this ocean of risks that you might have in your organization, which are the most critical ones? How do you prioritize them?” And once you prioritize, the problem actually becomes a lot more tractable. Then you can divide and conquer in terms of what your prioritized approaches are. In a lot of ways, this is risk management 101, if you think about it: identify your assets, identify your risks, and then put the processes and programs in place to go tackle them. So yeah, it makes a ton of sense.
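To make the “risk management 101” flow Talhah describes a bit more concrete, here is a minimal sketch in Python of that prioritization step: enumerate critical assets, enumerate the insider threat scenarios that apply to each, score them, and work the list from the top down. The asset names, scenarios, and scores below are purely hypothetical examples, not guidance from CERT or Microsoft.

from dataclasses import dataclass

@dataclass
class Scenario:
    asset: str        # critical asset the scenario targets (hypothetical)
    threat: str       # e.g., "IP theft", "IT sabotage", "fraud"
    likelihood: int   # 1 (rare) .. 5 (expected), judged with the asset owners
    impact: int       # 1 (minor) .. 5 (severe), including second/third-order effects

    @property
    def risk_score(self) -> int:
        # Simple likelihood-times-impact score; a real program may weight differently.
        return self.likelihood * self.impact

scenarios = [
    Scenario("Product source code", "IP theft by a departing engineer", 4, 5),
    Scenario("Payroll system", "Fraud via privileged access", 2, 4),
    Scenario("Build infrastructure", "IT sabotage by a disgruntled admin", 2, 5),
]

# Tackle the highest-risk scenarios first instead of trying to boil the ocean.
for s in sorted(scenarios, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.risk_score:>2}  {s.asset}: {s.threat}")

The arithmetic is not the point; the value is that the scoring forces the program to name its critical assets and rank the scenarios it will actually do something about.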


 


DAN:  Yeah. So the risk management thing is really interesting, because I think it’s either best practice three or four: make sure that insider threats are being addressed in organization-wide enterprise risk assessments. It’s something that we’ve been saying for a really long time, and intuitively it makes sense. But in parallel with insider threat program maturity, we’re seeing organizations start to get more serious about managing risks across the enterprise in a more structured, more data-driven way, in a way that engages the folks that own the business processes.


 


So it’s been fascinating to watch the two activities come up in parallel, when a lot of what insider threat programs have to do really depends on the organization having those enterprise risk management answers already established. Where we struggle is when you go to talk to the folks that should know these answers and they don’t have the right answers yet. So we’re seeing organizations have to work these two activities in parallel, or try to find a way to get them to sync up and align better. And it’s more pressing for insider threats than for broader cyber risk for lots of organizations, because our insiders are the ones that know where our crown jewels are.


 


They’re the ones that know the things that might not necessarily have the most external value, or the most tangible dollar value associated with their impact, but they know how and where to hurt organizations from an operational perspective. So when we’re trying to figure out how bad one of these potential threat scenarios would be if it happened within the organization, those calculations, and figuring out the right answers for those scenarios, can be a lot harder for insider threat programs, because we’re having to consider the second- and third-order impacts associated with something like IT system sabotage or fraud.


 


So it’s been really interesting to watch those two bodies of research and practice grow in parallel. And a little bit of inside baseball, but those two bodies of research at the Software Engineering Institute are housed within the same part of CERT. So it makes intuitive sense to have those things laid out in terms of parallel bodies of research. And what we’re seeing is advances in cyber risk management and enterprise risk management more broadly from a data collection and analysis perspective, really translating over nicely into insider threat program operations.


 


RAMAN:  Wow, that’s great. One thing that occurred to me as you were talking, Dan, is that a fair number of the insider challenges and issues actually stem from accidental behavior, people being distracted, which of course probably gets amplified even more in a work-from-home environment because there are so many distractions going on. How do companies think about that, and how do you advise organizations? Because as we’ve spoken to industry analysts and even customers, they’re thinking about insider incidents less in terms of threats in general and more in terms of risks, so it encompasses both the malicious and the inadvertent side. How do you think about that, or how do you advise organizations in that area?


 


DAN:  Yeah, so we really buttered our bread on malicious insiders early on here at the Software Engineering Institute. Around 2012, 2013, we conducted a foundational study on unintentional insider threats, where someone who wasn’t necessarily motivated to cause harm to the organization, either through error or through being taken advantage of by an external threat actor, had their authorized access to the organization’s critical assets misused. And a lot of what you’ll find in that foundational study is that when the motivation and intent differ, different response options become what the organization can and should be pursuing. So, once we figure out the intent associated with some concerning behavior or activity that we’re seeing, or even a harmful event once it’s occurred, we can then figure out the most appropriate strategies to take in terms of response options.


 


Is this someone who needs training? Have we misconfigured access control, such that this person shouldn’t even have had authorized access to that asset in the first place? How do we better educate the workforce about their individual responsibilities to protect the authorized access to critical assets that they’ve been given by the nature of their employment with the organization? So, it requires a broadening of the aperture of what you consider to be response options for insider threat incidents, and almost even a recharacterization of how you declare an insider incident in the first place.
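As a hedged illustration of the triage Dan is describing, the sketch below maps an inferred intent and root cause to a response option. The intent and cause categories and the suggested responses are assumptions for illustration only, not a taxonomy from CERT or Microsoft.

def recommend_response(intent: str, cause: str) -> str:
    """Map an inferred intent and root cause to an illustrative response option."""
    if intent == "unintentional":
        if cause == "error":
            return "Targeted training; review the workflow for error-proofing"
        if cause == "misconfigured_access":
            return "Correct the access-control configuration; audit for similar over-provisioning"
        if cause == "compromised_by_external_actor":
            return "Run incident response, reset credentials, and reinforce phishing awareness"
        return "Educate the workforce on their responsibilities for protecting critical assets"
    # Malicious or suspected-malicious intent pulls in HR, legal, and security together.
    return "Escalate for a coordinated HR, legal, and security investigation"

print(recommend_response("unintentional", "misconfigured_access"))

The design point is simply that response selection happens after intent inference, which, as Dan notes next, depends on contextual data that lives outside purely technical tooling.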


 


So it’s a worthwhile undertaking for organizations, because the loss to your organization doesn’t really care whether or not there was malicious intent. The bad thing happened, and it caused harm to the organization. So what we need to do is understand that the impacts associated with malicious versus unintentional insider threats are, at a high level, relatively equivalent, and from there broaden our aperture and understanding of what response options the organization needs to take once we’ve been able to infer either that we think there’s some malicious intent here, or that there was no malicious intent here. And that intent inference is where we need our human capital folks. That’s where we need the contextual data that lives outside the purview of our technical tools and capabilities, and our friends in the social and behavioral sciences, to all be part of our insider threat program teams and our insider risk mitigation efforts, to help us understand the human aspects and elements of what we’re seeing on the technical side of the house.


 


One of the earliest findings that came out of our insider threat research here at the SEI was to take what we call a sociotechnical approach to insider threat mitigation. This is not just a bits-and-bytes problem; this is a people problem. We have to be able to collect and analyze data using automated tools just to deal with the scale and scope of this problem for larger organizations. But at the end of the day, we’re talking about people that we brought into the organization and granted a position of trust. We hopefully screened them on their way in, and they were good folks when they started here, and they’ve been experiencing things in their lives that are causing them to go down a path, a path that might potentially lead them to cause harm to the organization.


 


So early on, finding those proactive sociotechnical approaches to the problem was a hallmark of our research. And that was amplified as we and other folks started to broaden the aperture to consider unintentional insider threats as part of the scope of their insider threat programs and insider risk management strategies.


 


RAMAN:  So the context is key here, right? And one of the things that you touched on is the sentiment. They started out as a good individual, but maybe they got distracted, maybe they’re not happy now, or something’s happening that’s causing them to do something that creates risk for the organization. The other thing you brought up earlier, which I wanted to touch on, was the preemptive nature, because one of the things we have always talked about is that once somebody has downloaded sensitive content from a repository onto their desktop and then copied it to a USB drive, you’re already like 80% out of the door. What were they doing prior to that? How could we identify that they may be going down this path? How do you all think about that? Because that’s one of the questions that we continually get from customers.


 


DAN:  Yeah. So early on, when we were collecting and analyzing insider incident data to form the foundation of our understanding of how different types of insider incidents tend to evolve over time, we were looking at the incidents really from the beginning of the insider’s relationship with the organization through the final resolution of the incident itself. And what we found in almost every case that we’ve collected and analyzed was the presence of concerning behaviors and activity that preceded the harmful act associated with the incident; if the organization had either known about those behaviors earlier or responded to them differently, it might have taken the insider down a different path that did not cause harm to the organization.
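To illustrate what “concerning behaviors that precede the harmful act” can look like in data, here is a minimal Python sketch that correlates a sensitive download followed shortly by a USB copy and surfaces any earlier warning signals for that user. The event types, field names, and time windows are hypothetical assumptions for illustration, not a real product integration or CERT’s detection model.

from datetime import datetime, timedelta

# Hypothetical, simplified event stream for a single user.
events = [
    {"user": "jdoe", "type": "hr_sanction",        "time": datetime(2020, 9, 1)},
    {"user": "jdoe", "type": "resignation_notice", "time": datetime(2020, 9, 20)},
    {"user": "jdoe", "type": "sensitive_download", "time": datetime(2020, 9, 28, 9)},
    {"user": "jdoe", "type": "usb_copy",           "time": datetime(2020, 9, 28, 10)},
]

# Concerning behaviors that, per the research described above, often precede harm.
PRECURSORS = {"hr_sanction", "resignation_notice", "after_hours_access"}

def flag_precursor_context(events, window=timedelta(hours=4), lookback=timedelta(days=60)):
    """Flag users whose USB copy closely follows a sensitive download,
    along with any earlier concerning behaviors within the lookback window."""
    alerts = []
    for e in events:
        if e["type"] != "usb_copy":
            continue
        downloads = [d for d in events
                     if d["user"] == e["user"]
                     and d["type"] == "sensitive_download"
                     and timedelta(0) <= e["time"] - d["time"] <= window]
        if downloads:
            context = [p["type"] for p in events
                       if p["user"] == e["user"]
                       and p["type"] in PRECURSORS
                       and timedelta(0) <= e["time"] - p["time"] <= lookback]
            alerts.append({"user": e["user"], "precursors": context})
    return alerts

print(flag_precursor_context(events))

The value of surfacing the earlier signals is exactly the point Dan makes next: the response can address the conditions that precede the harmful act rather than reacting after the data has already left.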


 


So for those different types of insider incidents that we’ve studied, fraud, theft of intellectual property, and IT system sabotage, we’ve developed models that we’ve mined from the incidents we’ve collected and analyzed for those particular incident types. And those models capture not only how the insider attempted to evade detection or how they actually caused harm, but also what their personal predispositions were and what stressors they were experiencing that, combined with those predispositions, caused them to exhibit concerning behaviors: detectable things, either from a technical perspective or from a behavioral perspective, that the organization then responded to in some maladaptive way.


 


Either by paying no attention to it, whether because they didn’t think it was something that could lead someone down the path of causing harm, or because they didn’t have a detection capability in place and simply didn’t know about it. Or they zagged when they should have zigged. A good example of this is in our IT sabotage model, where we’ve found a pattern of disgruntled insiders being maladaptively responded to by their organizations through things like sanctions: being demoted, being pulled off of important projects, having their access revoked. And those sanctions, those responses by the organization, led the insider to become even more disgruntled.


 


And you see patterns of this: increased disgruntlement, another sanction, the insider gets more and more disgruntled, and at a certain point reaches the tipping point and decides that now it’s time to strike back, motivated by revenge against the organization. Or they decide to leave the organization, and now they’re going to take some intellectual property with them to benefit a competitor. So it’s in those feedback loops between concerning behaviors and maladaptive organizational responses that we’ve found opportunities for organizations to improve their security posture as it pertains to insider risk: by gaining a better understanding of the conditions that precede the harmful act, and by considering a much broader array of response options that might not lead someone to be motivated to cause harm, but might let them feel like they are supported by the organization, that they understand their relationship, from a contractual perspective, to the intellectual property that they’re creating, and really a myriad of other nuances for those different incident types.


 


So that’s, again, something we established early on: it’s these patterns of concerning behaviors and maladaptive organizational responses that exacerbate the threat and lead to insiders causing harm to the organization. The opportunity lies in finding those feedback loops, proposing different strategies, and then finding ways to measure the effectiveness of those alternative strategies.


 


                                            


To learn more about this episode of the Uncovering Hidden Risks podcast, visit https://aka.ms/uncoveringhiddenrisks.


For more on Microsoft Compliance and Risk Management solutions, click here.


To follow Microsoft’s Insider Risk blog, click here.


To subscribe to the Microsoft Security YouTube channel, click here.


Follow Microsoft Security on Twitter and LinkedIn.


 


Keep in touch with Raman on LinkedIn.


Keep in touch with Talhah on LinkedIn.
