March 16 | FREE Microsoft Security Public Webinar | Diversity in Cybersecurity
This article is contributed. See the original author and article here.
Now let’s imagine this. You have the developer tenant from your Microsoft 365 Developer Subscription, and you enrolled in the Power Apps Community Plan with it, so you have the full range of possibilities on your dev tenant, right?
Not quite.
Yes, you have access to Dataverse now and yes, you can even use premium connectors and that’s cool. But you can go one step further.
The Power Apps Community Plan gives you a Development Environment and you can add even more Trial Environments on your dev tenant. But you can’t set up new Production Environments.
What are Production Environments and why do you want one?
Production Environments are used for your daily business with production data. If your organization is spread over multiple countries or even continents, it makes sense to give every country its own production environment. Production Environments are backed up frequently to protect your data.
But on your dev tenant you don’t need one, right? Because you’re just testing out and learning things, correct? Well yes, but what if you want to learn how to set up a staging environment for your organization, where you have the recommended environments?
And imagine you want to learn how to set up the correct security roles in Azure to test things out. You can’t do that on your regular dev tenant, because you are not allowed to set up more environments (specifically, Production Environments).
Many users are simply not allowed to get full control in the organizations they’re working for, because of security and compliance reasons.
So, what to do?
Wouldn’t it be cool if you could get a Power Apps standalone license for your dev tenant? Sure, it costs money, but you would have it all:
And guess what? You can actually purchase a Power Apps standalone license for your dev tenant! You can build a completely safe tenant to test, learn, and grow for your organization, for just the cost of the Power Apps license.
Here is how you do it
Log into your dev tenant and go to the Admin center. Click on the waffle menu in the upper left corner and click on Admin.
Next, click on Purchase services.
In the category Business apps you find all the Power Platform plans. In this example I highlighted the Power Apps per user plan, but you can also go for the Power Automate per user plan (it’s much cheaper and offers very similar possibilities). Remember, you just want to create different environments, and you can do that with both plans. Then click on Details to see what’s included.
In the next screen you can check all the details. Click on Included apps to see every detail. If you’re happy with it click on Purchase.
Now you’re almost done. You are already at the checkout. Fill in your postal address and your billing information. After that, click on Place order.
After a few minutes you will get your confirmation mail.
If you click on Get started you will be transferred to the Your products section of the Admin center. Click on Assign licenses.
For whatever reason, you have to click on Assign licenses once again.
After that you can type in a name and click on the chevron icon next to Turn apps and services on or off. Check all the apps and services you want to assign, then finish the process by clicking on Assign again at the bottom of the page.
BOOM, there you go.
As described in my previous post How to enhance your “dev tenant” to unleash the full potential of the Power Platform, you can now add multiple environments and yes, even Production Environments. Check out the Microsoft documentation about environments to get more details and why and how to use them. Enjoy.
This article was written by Nick Hughes, an avid contributor in the extended reality space, as part of our Humans of Mixed Reality Guest Blogger Series. Nick shares his approach and core best practices for the Mixed Reality stack and how YOU can bring value to XR projects.
The whole extended reality space is vast, fluid, and sometimes foolish to try to define. There are so many ways that we can leverage this innovative modality that it also makes it somewhat difficult to pitch. The idea of bridging the gap between extended reality and the tangible world is still relatively fresh to most people. In this article, I’m going to give my experience taking a pragmatic approach to the Microsoft mixed reality stack and how you can bring value to your XR projects.
The first ever HoloLens I used is still in my office!
You’re probably thinking to yourself, “This guy is probably a Microsoft partner or some industry expert in mixed reality.” I assure you that two years ago I was running network cables down soot-covered walls in a foundry.
If there is someone who has asked all the silly questions, someone who has had to search the acronym on the Internet, it is me. However, that is why what I am going to share might be helpful to you too. I am a jack-of-all-trades-master-of-none type of person who just happens to be exceptional at translating all things technology into a humanly digestible form. This is going to land right in your backyard.
Two years ago, I was first exposed to XR. I put on a friend’s PlayStation VR headset and found myself captivated with the experience. The next couple of months had me buried in articles, YouTube videos, blogs, you name it. When I found the HoloLens, I couldn’t stop imagining all the business applications and use case scenarios. I knew this was going to be transformative, but I didn’t know it was going to radically change my entire career direction as well.
Today, I help lead an extended reality team that covers the entire globe. We have deployed HoloLens and Remote Assist in nearly twenty countries and on almost every continent. This technology has dramatically transformed the way we can conduct business and I’m thrilled to have the opportunity to share with everyone how we did it.
Extended reality is here to stay! There is no doubt about it. Believe me – if you don’t get on board now, you’ll be ten years behind in just three years.
I welcome you to connect with me on Twitter and share your experiences! I love hearing what others are doing on the implementation side of things. @TheNerdNick #doSomethingGreatToday #XR
Sync Up is your monthly podcast hosted by the OneDrive team taking you behind the scenes of OneDrive, shedding light on how OneDrive connects you to all your files in Microsoft 365 so you can share and work together from anywhere. You will hear from experts behind the design and development of OneDrive, as well as customers and Microsoft MVPs. Each episode will also give you news and announcements, special topics of discussion, and best practices for your OneDrive experience.
So, get your ears ready and subscribe to the Sync Up podcast!
Meet your show hosts and guests for the episode:
Jason Moore is the Principal Group Program Manager for OneDrive and the Microsoft 365 files experience. He loves files, folders, and metadata. Twitter: @jasmo
Ankita Kirti is a Product Manager on the Microsoft 365 product marketing team responsible for OneDrive for Business. Twitter: @Ankita_Kirti21
Cory Kincaid is a Customer Success Manager for Modern Work, who advises customers on how to use technologies like Teams and OneDrive to improve their business productivity.
Additional guests:
Ryan Voelki and Tatyanah Castillo, also customer success managers for Modern Work, who take a #HumansFirst approach to helping customers navigate their digital transformation.
Quick links to the podcast
Links to resources mentioned in the show:
Be sure to visit our show page to hear all the episodes, access the show notes, and get bonus content. And stay connected to the OneDrive community blog, where we’ll share more information per episode, guest insights, and take any questions from our listeners and OneDrive users. We also welcome your ideas for future episode topics and segments. Keep the discussion going in the comments below.
As you can see, we continue to evolve OneDrive as a place to access, share, and collaborate on all your files in Office 365, keeping them protected and readily accessible on all your devices, anywhere. We, at OneDrive, will shine a recurring light on the importance of you, the user. We will continue working to make OneDrive and related apps more approachable. The OneDrive team wants you to unleash your creativity. And we will do this, together, one episode at a time.
Thanks for your time reading and listening to all things OneDrive,
Ankita Kirti – OneDrive | Microsoft
In this blog I want to show you how you can build, test, and publish an FAQ bot for Microsoft Teams within minutes. We will use Power Virtual Agents for Teams, which means that you will not need any license in addition to your Microsoft 365 license; for reference, see also the Power Virtual Agents for Microsoft Teams plan.
Power Virtual Agents belongs, like Power Apps, Power Automate, and Power BI, to the Power Platform (wow, that was a powerFULL sentence :smiling_face_with_halo:). You can create chatbots that interact with users in apps and websites, trigger workflows, and more, without writing code. You can choose whether to use the Power Virtual Agents standalone web app or the app within Microsoft Teams.
I will guide you through creating an FAQ bot. To feed our bot we will need some FAQs so that our bot can learn them. I will use FAQs regarding licensing :nerd_face:, but you can choose any FAQ from a website, PDF, or even Word file that you like.
You can see that some basic topics are already created for you. You can take a look at them later.
Now we want to work on feeding our bot with the FAQ from the website that we selected.
This may now take a couple of minutes. Grab a coffee in the meantime :hot_beverage:. Soon you will see the message that your new suggested topics are in:
You can now review and edit each topic:
After you are done reviewing and editing your topics, you will need to turn the topics on.
Train your bot by entering more trigger phrases. This way, it is more likely that the chatbot understands users’ questions even if they don’t exactly match the trigger phrases.
Time to test the bot!
You can now review and edit your topics until you are happy with the results.
It took us only a few minutes to create, test, and publish a chatbot that now works inside Microsoft Teams. Want to do some more? We could extend the capabilities of our Power Virtual Agents bot: let’s say our bot can’t answer a question and needs to transfer the chat to a human agent, who will answer it. What if we trained the bot with that answer so that it gets smarter over time? I will cover that in one of my next blog posts. What do you use chatbots for? Have you already tried making a 5-minute bot? Please share below :)
Have you ever tried to cast hand shadows on a wall? It is the easiest thing in the world, and yet to do it well requires practice and just the right setup. To cultivate your #cottagecore aesthetic, try going into a completely dark room with just one lit candle, and casting hand shadows on a plain wall. The effect is startlingly dramatic. What fun!
Even a tea light suffices to create a great effect
In 2020, and now into 2021, many folks are reverting back to basics as they look around their houses, reopening dusty corners of attics and basements and remembering the simple crafts that they used to love. Papermaking, anyone? All you need is a few tools and torn up, recycled paper. Pressing flowers? All you need is newspaper, some heavy books, and patience. And hand shadows? Just a candle.
This TikTok creator has thousands of views for their handshadow tutorials
But what’s a developer to do when trying to capture that #cottagecore vibe in a web app?
While exploring the art of hand shadows, I wondered whether some of the recent work I had done for body poses might be applicable to hand poses. What if you could tell a story on the web using your hands, and somehow save a video of the show and the narrative behind it, and send it to someone special? In lockdown, what could be more amusing than sharing shadow stories between friends or relatives, all virtually?
Hand shadow casting is a folk art probably originating in China; if you go to tea houses with stage shows, you might be lucky enough to view one like this!
When you start researching hand poses, it’s striking how much content there is on the web on the topic. There has been work since at least 2014 on creating fully articulated hands within the research, simulation, and gaming sphere:
MSR throwing hands
There are dozens of handpose libraries already on GitHub:
There are many applications where tracking hands is a useful activity:
• Gaming
• Simulations / Training
• “Hands free” uses for remote interactions with things by moving the body
• Assistive technologies
• TikTok effects :trophy:
• Useful things like Accordion Hands apps
One of the more interesting new libraries, handsfree.js, offers an excellent array of demos in its effort to move to a hands free web experience:
Handsfree.js, a very promising project
As it turns out, hands are pretty complicated things. They each include 21 keypoints (vs. PoseNet’s 17 keypoints for an entire body). Building a model to support inference for such a complicated grouping of keypoints has proven challenging.
There are two main libraries available to the web developer when incorporating hand poses into an app: TensorFlow.js’s handposes, and MediaPipe’s. HandsFree.js uses both, to the extent that they expose APIs. As it turns out, neither TensorFlow.js nor MediaPipe’s handposes are perfect for our project. We will have to compromise.
TensorFlow.js’s handposes allow access to each hand keypoint and the ability to draw the hand to canvas as desired. HOWEVER, it only currently supports single hand poses, which is not optimal for good hand shadow shows.
MediaPipe’s handpose models (which are used by TensorFlow.js) do allow for dual hands BUT its API does not allow for much styling of the keypoints so that drawing shadows using it is not obvious.
One other library, fingerposes, is optimized for finger spelling in a sign language context and is worth a look.
Since it’s more important to use the Canvas API to draw custom shadows, we are obliged to use TensorFlow.js, hoping that either it will soon support multiple hands OR handsfree.js helps push the envelope to expose a more styleable hand.
Let’s get to work to build this app.
As a Vue.js developer, I always use the Vue CLI to scaffold an app using vue create my-app and creating a standard app. I set up a basic app with two routes: Home and Show. Since this is going to be deployed as an Azure Static Web App, I follow my standard practice of including my app files in a folder named app and creating an api folder to include an Azure function to store a key (more on this in a minute).
In my package.json file, I import the important packages for using TensorFlow.js and the Cognitive Services Speech SDK in this app. Note that TensorFlow.js has divided its imports into individual packages:
"@tensorflow-models/handpose": "^0.0.6",
"@tensorflow/tfjs": "^2.7.0",
"@tensorflow/tfjs-backend-cpu": "^2.7.0",
"@tensorflow/tfjs-backend-webgl": "^2.7.0",
"@tensorflow/tfjs-converter": "^2.7.0",
"@tensorflow/tfjs-core": "^2.7.0",
…
"microsoft-cognitiveservices-speech-sdk": "^1.15.0",
We will draw an image of a hand, as detected by TensorFlow.js, onto a canvas, superimposed onto a video supplied by a webcam. In addition, we will redraw the hand to a second canvas (shadowCanvas), styled like shadows:
<div id="canvas-wrapper column is-half">
<canvas id="output" ref="output"></canvas>
<video
id="video"
ref="video"
playsinline
style="
-webkit-transform: scaleX(-1);
transform: scaleX(-1);
visibility: hidden;
width: auto;
height: auto;
position: absolute;
"
></video>
</div>
<div class="column is-half">
<canvas
class="has-background-black-bis"
id="shadowCanvas"
ref="shadowCanvas"
>
</canvas>
</div>
Working asynchronously, load the Handpose model. Once the backend is set up and the model is loaded, load the video via the webcam, and start watching the video’s keyframes for hand poses. It’s important at these steps to ensure error handling in case the model fails to load or there’s no webcam available.
async mounted() {
await tf.setBackend(this.backend);
//async load model, then load video, then pass it to start landmarking
this.model = await handpose.load();
this.message = "Model is loaded! Now loading video";
let webcam;
try {
webcam = await this.loadVideo();
} catch (e) {
this.message = e.message;
throw e;
}
this.landmarksRealTime(webcam);
},
Still working asynchronously, set up the camera to provide a stream of images:
async setupCamera() {
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
throw new Error(
"Browser API navigator.mediaDevices.getUserMedia not available"
);
}
this.video = this.$refs.video;
const stream = await navigator.mediaDevices.getUserMedia({
video: {
facingMode: "user",
width: VIDEO_WIDTH,
height: VIDEO_HEIGHT,
},
});
return new Promise((resolve) => {
this.video.srcObject = stream;
this.video.onloadedmetadata = () => {
resolve(this.video);
};
});
},
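The loadVideo helper called in mounted() isn’t shown in the post. A minimal sketch, assuming it simply wraps the setupCamera() method above and starts playback (the videoMethods wrapper is just a container name used here for illustration), could look like this:

```javascript
// Hypothetical sketch (not in the original post) of the loadVideo helper
// referenced in mounted(). It is assumed to wrap setupCamera() and start
// playback so the video element renders frames and reports real dimensions.
const videoMethods = {
  async loadVideo() {
    const video = await this.setupCamera(); // resolves once metadata is loaded
    video.play(); // begin rendering webcam frames
    return video;
  },
};
```

Once the returned promise resolves, the video element is playing, which is what the landmarking step relies on.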
Now the fun begins, as you can get creative in drawing the hand on top of the video. This landmarking function runs on every keyframe, watching for a hand to be detected and drawing lines onto the canvas – red on top of the video, and black on top of the shadowCanvas. Since the shadowCanvas background is white, the hand is drawn as white as well and the viewer only sees the offset shadow, in fuzzy black with rounded corners. The effect is rather spooky!
async landmarksRealTime(video) {
//start showing landmarks
this.videoWidth = video.videoWidth;
this.videoHeight = video.videoHeight;
//set up skeleton canvas
this.canvas = this.$refs.output;
…
//set up shadowCanvas
this.shadowCanvas = this.$refs.shadowCanvas;
…
this.ctx = this.canvas.getContext("2d");
this.sctx = this.shadowCanvas.getContext("2d");
…
//paint to main
this.ctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
this.ctx.strokeStyle = "red";
this.ctx.fillStyle = "red";
this.ctx.translate(this.shadowCanvas.width, 0);
this.ctx.scale(-1, 1);
//paint to shadow box
this.sctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
this.sctx.shadowColor = "black";
this.sctx.shadowBlur = 20;
this.sctx.shadowOffsetX = 150;
this.sctx.shadowOffsetY = 150;
this.sctx.lineWidth = 20;
this.sctx.lineCap = "round";
this.sctx.fillStyle = "white";
this.sctx.strokeStyle = "white";
this.sctx.translate(this.shadowCanvas.width, 0);
this.sctx.scale(-1, 1);
//now that the canvases are set up, you can frame the landmarks
this.frameLandmarks();
},
As the keyframes progress, the model predicts new keypoints for each of the hand’s elements, and both canvases are cleared and redrawn.
const predictions = await this.model.estimateHands(this.video);
if (predictions.length > 0) {
const result = predictions[0].landmarks;
this.drawKeypoints(
this.ctx,
this.sctx,
result,
predictions[0].annotations
);
}
requestAnimationFrame(this.frameLandmarks);
Since TensorFlow.js allows you direct access to the keypoints of the hand and the hand’s coordinates, you can manipulate them to draw a more lifelike hand. Thus we can redraw the palm to be a polygon, rather than resembling a garden rake with points culminating in the wrist.
Re-identify the fingers and palm:
fingerLookupIndices: {
thumb: [0, 1, 2, 3, 4],
indexFinger: [0, 5, 6, 7, 8],
middleFinger: [0, 9, 10, 11, 12],
ringFinger: [0, 13, 14, 15, 16],
pinky: [0, 17, 18, 19, 20],
},
palmLookupIndices: {
palm: [0, 1, 5, 9, 13, 17, 0, 1],
},
…and draw them to screen:
const fingers = Object.keys(this.fingerLookupIndices);
for (let i = 0; i < fingers.length; i++) {
const finger = fingers[i];
const points = this.fingerLookupIndices[finger].map(
(idx) => keypoints[idx]
);
this.drawPath(ctx, sctx, points, false);
}
const palmArea = Object.keys(this.palmLookupIndices);
for (let i = 0; i < palmArea.length; i++) {
const palm = palmArea[i];
const points = this.palmLookupIndices[palm].map(
(idx) => keypoints[idx]
);
this.drawPath(ctx, sctx, points, true);
}
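The drawPath helper called in both loops above isn’t included in the post’s snippets. A minimal sketch, assuming it simply traces the keypoints on both contexts and closes and fills the shape when asked (the drawHelpers wrapper is a name chosen here for illustration), might be:

```javascript
// Hypothetical sketch (the original post omits the body) of the drawPath
// helper called above. It traces the given keypoints on both the main
// context (ctx) and the shadow context (sctx); closePath = true closes
// and fills the shape, which is how the palm polygon would be rendered.
const drawHelpers = {
  drawPath(ctx, sctx, points, closePath) {
    for (const target of [ctx, sctx]) {
      target.beginPath();
      target.moveTo(points[0][0], points[0][1]); // each keypoint is [x, y, z]
      for (let i = 1; i < points.length; i++) {
        target.lineTo(points[i][0], points[i][1]);
      }
      if (closePath) {
        target.closePath();
        target.fill(); // filled shape for the palm polygon
      }
      target.stroke();
    }
  },
};
```

Closing and filling only for the palm is what turns the “garden rake” into a solid polygon on the shadow canvas.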
With the models and video loaded, keyframes tracked, and hands and shadows drawn to canvas, we can implement a speech-to-text SDK so that you can narrate and save your shadow story.
To do this, get a key from the Azure portal for Speech Services by creating a Service:
You can connect to this service by importing the sdk:
import * as sdk from "microsoft-cognitiveservices-speech-sdk";
…and start audio transcription after obtaining an API key which is stored in an Azure function in the /api folder. This function gets the key stored in the Azure portal in the Azure Static Web App where the app is hosted.
async startAudioTranscription() {
try {
//get the key
const response = await axios.get("/api/getKey");
this.subKey = response.data;
//sdk
let speechConfig = sdk.SpeechConfig.fromSubscription(
this.subKey,
"eastus"
);
let audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
this.recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
this.recognizer.recognized = (s, e) => {
this.text = e.result.text;
this.story.push(this.text);
};
this.recognizer.startContinuousRecognitionAsync();
} catch (error) {
this.message = error;
}
},
In this function, the SpeechRecognizer gathers text in chunks that it recognizes and organizes into sentences. That text is printed into a message string and displayed on the front end.
In this last part, the output cast onto the shadowCanvas is saved as a stream and recorded using the MediaRecorder API:
const stream = this.shadowCanvas.captureStream(60); // 60 FPS recording
this.recorder = new MediaRecorder(stream, {
mimeType: "video/webm;codecs=vp9",
});
this.recorder.ondataavailable = (e) => {
this.chunks.push(e.data);
};
this.recorder.start(500);
…and displayed below as a video with the storyline in a new div:
const video = document.createElement("video");
const fullBlob = new Blob(this.chunks);
const downloadUrl = window.URL.createObjectURL(fullBlob);
video.src = downloadUrl;
document.getElementById("story").appendChild(video);
video.autoplay = true;
video.controls = true;
This app can be deployed as an Azure Static Web App using the excellent Azure plugin for Visual Studio Code. And once it’s live, you can tell durable shadow stories!
Try Ombromanie here. The codebase is available here.
Take a look at Ombromanie in action:
Learn more about AI on Azure
Azure AI Essentials Video covering speech and language
Azure free account sign-up
You don’t necessarily need to change your data models and applications to run operational analytics workloads on Azure SQL. In part two of this three-part series with Silvano Coriani, we will explore Azure SQL capabilities to optimize your databases for mixed workloads on real-time data.
Resources:
Get started with Columnstore for real-time operational analytics
Sample performance with Operational Analytics in WideWorldImporters
T-SQL Window Functions: For data analysis and beyond, 2nd Edition
Real-Time Operational Analytics:
Memory-Optimized Tables and Columnstore Index
DML operations and nonclustered columnstore index (NCCI) in SQL Server 2016
Filtered nonclustered columnstore index (NCCI)
Note: Nominations for this program are now closed
We are inviting Exchange Online customers who would like to participate in a preview program to help us improve the public folder hierarchy synchronization process. The preview program is called the Hierarchy-Read-Replica (HRR) Preview.
Before we tell you more details of the HRR Preview program, you may want to read about the current public folder hierarchy synchronization here and then return to this post again.
Welcome back!
As you learned here, currently, the primary PF mailbox is the only source of hierarchy synchronization for all secondary PF mailboxes. This model can cause the following issues:
As a solution to this problem, we are moving to a more distributed approach to propagating hierarchy changes, using a set of hierarchy-healthy secondary mailboxes and excluding them from serving hierarchy to end users (dedicating them to hierarchy propagation tasks). This design distributes the load from a single master mailbox across multiple mailboxes, improving overall performance.
The benefit you get by participating in the preview program is:
To join, you must meet the following eligibility criteria:
Sound interesting? You can send us your nomination by filling in this form (link removed because nominations are closed), and we’ll contact you with further details.
Host: Raman Kalyan – Director, Microsoft
Host: Talhah Mir – Principal Program Manager, Microsoft
Guest: Dan Costa – Technical Manager, Carnegie Mellon University
The following conversation is adapted from transcripts of Episode 2 of the Uncovering Hidden Risks podcast. There may be slight edits in order to make this conversation easier for readers to follow along. You can view the full transcripts of this episode at: https://aka.ms/uncoveringhiddenrisks
In this podcast we explore the challenges of addressing insider threats and how organizations can improve their security posture by understanding the conditions and triggers that precede a potentially harmful act, and how technological advances in prevention and detection can help organizations stay safe and a step ahead of threats from trusted insiders.
RAMAN: Hi, I’m Raman Kalyan, I’m with Microsoft 365 Product Marketing Team.
TALHAH: And I’m Talhah Mir, Principal Program Manager on the Security Compliance Team.
RAMAN: We’re going to be talking about insider threat challenges and where they come from, how to recognize them, what to do, and today we’re talking to Dan Costa.
TALHAH: Dan Costa, the man who’s got basically the brainpower of hundreds of organizations that he works with across the world and given a chance to talk to him and distill this down in terms of what are some of the trends and what are some of the processes and procedures you can take to manage this risk. Super excited about this, man. Let’s just get right into it.
TALHAH: Dan, you want to just introduce yourself, give a little background on yourself, and Carnegie Mellon and all that stuff?
DAN: Yeah, sure thing. So Dan Costa, I’m the Technical Manager of the CERT National Insider Threat Center here at Carnegie Mellon University Software Engineering Institute. We’re a federally funded research and development center solving long term enduring cybersecurity and software engineering challenges on behalf of the DOD. One of the unique things about the Software Engineering Institute is that we are chartered and encouraged to go out and engage with industry as well, solving those long term cybersecurity and software engineering challenges.
And my group leads kind of the SEI’s insider threat research. So collecting and analyzing insider incident data to gain an understanding of how insider incidents tend to evolve over time, what vulnerabilities exist within our organizations that enable insiders to carry out their attacks, and what organizations can and should be doing to help better protect, prevent, detect, and respond to insider threats to their critical assets.
RAMAN: That’s awesome. Dan, how did you get into this space?
DAN: Yeah, so I’ve been with the SEI (Software Engineering Institute) since 2011. I came onboard actually to work on the insider threat team as a software engineer, developing some data collection and analysis capabilities for some of our early insider threat vulnerability assessment methodologies. And since 2011, have really gotten a chance to have my hand in nearly every phase of kind of the insider threat mitigation challenges that organizations experience, not only on the government side, but in the industry as well. Since 2011, I’ve been able to stand up insider threat programs within the government, within industry, help organizations measure their current security posture as it pertains to insider risk, and try to find ways that organizations can collect and aggregate data from disparate sources within their organization that can help them more proactively manage insider risk.
So that’s been work, rolling my sleeves up, working with insider threat analysts, spending lots of time with insider threat analysts in the early years, conducting numerous vulnerability assessments and program evaluations, helping organizations explain to their boards and their senior leadership team the scope and severity and the breadth of the insider threat problem, and help folks understand kind of what they already have in place that can form the foundation for an enterprise-wide insider risk management strategy.
I’ve been very fortunate since 2011 to really have a hand in almost every aspect of insider threat program building, assessment, justifying the need to have an insider threat program in the first place. Obviously since then had a lot to do with actually collecting and analyzing insider incident data, not only what we have access to publicly, but also learning from how we’ve collected and analyzed data here at the SEI over almost 20 years, and help organizations understand how they can use their own data collection and analysis capabilities to bolster their insider threat programs.
TALHAH: Awesome. Okay. So Dan, one of the things that Raman and I talked about quite a bit is my own journey in this space. I mean, I haven’t been fortunate to be in the space as long as you have, but I remember when I came into this space a couple of years back, one of the first places I turned to was Carnegie Mellon. And specifically, CERT. And one of the places you pointed us towards was this treasure trove of knowledge that you have, that you then complement with the OSIT Group to really drive awareness and cross-learning across different subject matter experts. So I’d love to get your story of that journey: how OSIT came about, where it was, where it’s going now, and what it looks like going forward.
RAMAN: And for those listening, what does OSIT stand for?
DAN: Yeah, so that’s a good place to start. It’s the Open Source Insider Threat Information Sharing Group. It’s a community of interest of insider threat program practitioners in the private sector that are all trying to help their organizations more effectively manage insider risk. And the group really is kind of a grassroots activity that was started by the first director of the insider threat center here at CERT, Dawn Cappelli, who I hear you’ll be talking to soon. When Dawn left the SEI to go put her research into practice out in industry, she wanted to establish this community of interest.
And this is something that Dawn had been working on even while she was here at CERT, which was “How do I establish a community of people who are all going down the same roads within their organizations? How can we learn from each other? How can we benchmark? How can we share challenges faced early on? And how we’re getting past and around and through and over those challenges?” So in the beginning, the OSIT Group was really probably a handful or two of folks that were just in the earliest phases of getting insider threat programs off the ground.
And over the past six to seven years, we’ve really seen the group blossom, by word of mouth only, into an organization that currently boasts over 500 members representing about 220 organizations in industry, all building out their own insider threat programs. So because of the community building that was successful early on, and finding time to get together and talk shop with folks that were going through the same things within their organizations, we’ve been able to continue to grow that over the years.
And then we mine the knowledge and the experiences gained by the folks that are building their own insider threat programs and try to find ways to generalize those conversations into resources like our Common Sense Guide to Mitigating Insider Threats, and a variety of other research projects where we’ve been able to leverage the expertise, insights, and real willingness to experiment and try new things that we’re finding with those insider threat program practitioners.
So we’re really there just as stewards of the community. It is governed by members of the group at large. We’re there to facilitate conversations and discussions, make connections, and do what we can to either bring research questions out from those conversations or find opportunities to apply the findings from our research into organizations that are currently working on these insider threat challenges.
RAMAN: When you think about when things first started, the types of challenges that you were facing in the beginning versus the types of challenges you’re facing now with regards to insiders, how have things evolved?
DAN: Yeah, that’s a great question.
RAMAN: Is risk different, or what’s evolved in your opinion?
DAN: In the beginning, it was, “What do I call this thing? How do I convince the stakeholders within my organization that I need to work closely with for this to be successful? Information security, human resources, legal. How do I convince those folks to share their time, share their resources, and partner with us to get this off the ground? How do we navigate successfully incorporating legal, privacy, and civil liberties protections into our data collection and analysis efforts?” And those were really the challenges that, a handful of years ago, folks were just starting to wrap their heads around, particularly in the industry space. It’s a little bit different for government insider threat program practitioners, because for cleared populations, not only do you have different expectations for privacy, but you’ve also got a mandate and a requirement here in the United States to have an insider threat program.
So in the absence of a requirement like that for industry, getting that initial buy-in, without your organization having to have experienced a harmful or loss event perpetrated by an insider, was one of the earliest challenges. And that was six, seven years ago. The conversations that are had within that group now are far beyond that. Certainly, as folks come to the group from organizations that are just getting insider threat programs off the ground, they’re asking the same questions, because those are the natural questions that they ask to get started. But for the folks that have been at this for several years now and are a little bit further down the road, it’s really interesting to see how those conversations have evolved. Lots of organizations now are trying to think about how to most effectively integrate things like a security operations center, a team of insider threat analysts, their data loss prevention capabilities, their fraud detection capabilities.
How do we make sure that those capabilities we have within our organizations are integrated, not duplicative? What’s the right way to share information between them? How do we see the insider threat program being a force multiplier for managing the employee-employer relationship within the organization? How can we be more proactive in our response strategies, so that we’re not figuring out how to recover stolen intellectual property after the fact, but instead leveraging what we have internal to the organization to address the concerning behaviors and activity that might precede that harmful or loss event? So it’s really been a rapid and fascinating evolution over the past handful of years in terms of the types of challenges organizations are taking on within their programs.
TALHAH: So I was going to say, although it feels like there’s clearly been an evolution in this space, at the same time, it feels like compared to combating external adversaries, we’re still very much in the infancy of really getting our hands around insider risk management as an industry. So for those customers that are new, that are coming into this space, that understand that this is a problem, particularly in this day and age of COVID and work from home, what is some of the guidance you provide? The top three to five things they should worry about to start off on the right footing when it comes to establishing a robust insider risk management program?
DAN: Yeah. Great question, Talhah. You bring up a good point, which is we’ve made progress as a community, particularly on the industry side, over the past several years, but we’re still seeing organizations, and insider threat programs more broadly, struggle with an identity crisis. It’s hard for organizations to pinpoint exactly what they mean by insider threats, what the insider threats to their critical assets are, and which insider threats to their critical assets they’re actually going to do something about compared to what they already have in place.
And the definition of insider threat is expansive and overarching; our definition really opens up to the potential for any misuse of an organization’s authorized access to critical assets. So that can span theft of intellectual property, that can completely leave the cyber realm and branch out into workplace violence, that can incorporate things like fraud or IT system sabotage, or even things that aren’t necessarily conducted with malicious intent. Because the scope of the insider threat problem is so broad, we see organizations use that term to refer to a lot of different things from organization to organization.
So because the scope of the problem is so broad, we see organizations vary greatly in what chunk of this problem they decide to carve off and try to solve. And compounding that even further, even if we scope the program to one or more of those threat scenarios, let’s take theft of intellectual property, for example, there is some prerequisite knowledge that has to be understood within the organization to most effectively address that. What intellectual property are we worried about protecting? Who has authorized access to it? What does a normal pattern of access and use look like for that intellectual property?
So where we tell organizations to start is: know your critical assets. Know and understand what it is that you’re trying to protect from insider misuse. Over the years, we’ve seen lots of insider threat programs make the mistake of trying to answer that question on their own, taking their best educated guess within their organization, and not really reaching out to find the folks that might have ground truth or the best answer for their organization. Trying to do these things in a bubble within an insider threat program is an early recipe for calamity: a recipe for either duplicating effort or not finding the right answers for your organization.
And also, if you can’t articulate the scope of what it is that you’re trying to protect, you’re going to have a really hard time measuring whether or not you’ve actually been successful at doing the things that you were trying to do. So that’s where we always tell folks to start. We have a Common Sense Guide to Mitigating Insider Threats. We’re on the sixth edition currently and working on the seventh. And there are 21 best practices in there currently that are the foundational things for building an insider threat program.
They’re ordered intentionally by importance, and the first is: know your critical assets. Know what it is that you’re trying to protect. And once you’re there, work towards developing a formal insider threat program that engages all of the necessary stakeholders across the organization who can help you understand where your critical assets are, how they’re currently being protected, where the gaps are, and how the organization is interested in investing to buy down risk to those critical assets in key areas.
TALHAH: I love that. I love it. And I know that’s one of the educations that I certainly got, one of the things that I learned working in OSIT. And the way we frame it is that a lot of companies make this mistake. We certainly tried that approach, which is to try to boil the ocean. And it doesn’t work, right? We learned the hard way. You’ve got to be able to compartmentalize your problem space and say, “Out of this ocean of risks that you might have in your organization, which are the most critical ones? How do you prioritize them?” And once you prioritize, the problem actually becomes a lot more tractable. Then you can divide and conquer in terms of what your prioritized approaches are. In a lot of ways, this is risk management 101, if you think about it. Identify your assets, identify your risks, and then put the processes and programs in place to go tackle them. So, yeah, it makes a ton of sense.
DAN: Yeah. So the risk management thing is really interesting, because I think it’s best practice three or four: make sure that insider threats are being addressed in organization-wide enterprise risk assessments. It’s something that we’ve been saying for a really long time, and intuitively it makes sense. And in parallel with insider threat program maturity, we’re seeing organizations start to get more serious about managing risks across the enterprise in a more structured and more data-driven way, in a way that engages the folks that own the business processes.
So it’s been fascinating to watch the two activities come up in parallel, when a lot of what the insider threat programs are having to do really depends on the organization having those enterprise risk management answers already established. Where we see organizations struggle is when they go to talk to the folks that should know these answers, and those folks don’t have the right answers yet. So we’re seeing organizations have to work these two activities in parallel, or try to find a way to get them to sync up and align better. And it’s more pressing for insider threats, as opposed to just broader cyber risk, for lots of organizations, because our insiders are the ones that know where our crown jewels are.
They’re the ones that know the things that might not necessarily have the most external value or the most tangible dollar value associated with impacts, but they know how and where to hurt organizations from an operational perspective. So when we’re trying to figure out how bad one of these potential threat scenarios would be if it happened within the organization, those calculations, and figuring out the right answers for those scenarios, can be a lot harder for insider threat programs, because we’re having to consider the second- and third-order impacts associated with something like IT system sabotage or something like fraud.
So it’s been really interesting to watch those two bodies of research and practice grow in parallel. And a little bit of inside baseball, but those two bodies of research at the Software Engineering Institute are housed within the same part of CERT. So it makes intuitive sense to have those things laid out in terms of parallel bodies of research. And what we’re seeing is advances in cyber risk management and enterprise risk management more broadly from a data collection and analysis perspective, really translating over nicely into insider threat program operations.
RAMAN: Wow, that’s great. One thing that occurred to me as you were talking, Dan, is that a fair number of the insider challenges and issues actually stem from accidental behavior, people being distracted, which of course, with a work from home environment, probably gets expanded even more because there are so many distractions going on. How do companies think about that, and how do you advise organizations? Because as we’ve spoken to industry analysts and even customers, they’re thinking about insider incidents less as threats in general and more as risks. So it encompasses both the malicious and the inadvertent side. How do you think about that? Or how do you advise organizations in that area?
DAN: Yeah, so we really buttered our bread on malicious insiders early on here at the Software Engineering Institute. Around 2012, 2013, we conducted a foundational study on unintentional insider threats, where someone who wasn’t necessarily motivated to cause harm to the organization, either through error or through being taken advantage of by an external threat actor, had their authorized access to the organization’s critical assets misused. And a lot of what you’ll find in that foundational study is that when the motivation and intent differ, different response options become what the organization can and should be pursuing. So, once we figure out the intent associated with some concerning behavior or activity that we’re seeing, or even a harmful event once it’s occurred, we can then figure out the most appropriate strategies to take in terms of response options.
Is this someone who just needs training? Have we misconfigured access control, such that this person shouldn’t have had authorized access to that asset in the first place? How do we better educate the workforce about their individual responsibilities to protect the authorized access to critical assets that they’ve been given by the nature of their employment with the organization? So, it requires a broadening of the aperture of what you consider to be response options for insider threat incidents. And almost even a recharacterization of how you declare an insider incident in the first place.
So it’s a worthwhile undertaking for organizations, because the loss to your organization doesn’t really care about whether or not there was malicious intent. The bad thing happened, and it caused harm to the organization. So, what we need to do is understand that the impacts associated with malicious versus unintentional insider threats are relatively equivalent at a high level. And from there, broaden our aperture and understanding in terms of what response options the organization needs to take, once we’ve been able to infer either that there was some malicious intent here or that there was no malicious intent here. And that intent inference, that’s where we need our human capital folks, the contextual data that lives outside of the purview of our technical tools and capabilities, and our friends in the social and behavioral sciences to all be a part of our insider threat program teams and our insider risk mitigation efforts, to help us understand the human aspects and elements of what we’re seeing on the technical side of the house.
One of the earliest findings that came out of our insider threat research here at the SEI was: take what we call a sociotechnical approach to insider threat mitigation. This is not just a bits and bytes problem. This is a people problem. We have to be able to collect and analyze data by using automated tools, just to deal with the scale and scope of this problem for larger organizations. But at the end of the day, we’re talking about people that we brought into the organization and granted a position of trust. We hopefully screened them on their way in, and they were good folks when they started here, and they’ve been experiencing things in their lives that are causing them to go down a path, a path that might potentially lead them to cause harm to the organization.
So early on, finding those proactive sociotechnical approaches to the problem was a hallmark of our research. And that was amplified as we and other folks started to broaden the aperture to consider unintentional insider threats as a part of the scope of their insider threat programs and insider risk management strategies.
RAMAN: So the context is key here, right? And one of the things that you touched on is the sentiment. They started out as a good individual, but maybe they got distracted. Maybe they’re not happy now, or something’s happening, and that’s causing them to do something that is causing risk to the organization. The other thing you brought up earlier, which I wanted to touch on, was the preemptive nature. Because one of the things we have always talked about is that once somebody has downloaded sensitive content from a repository onto their desktop, and then copied that to a USB drive, they’re already like 80% out of the door. What were they doing prior to that? How could we identify that they may be going down this path? How do you all think about that? Because that’s one of the questions that we continually get from customers.
DAN: Yeah. So early on, when we were collecting and analyzing insider incident data to form the foundation of our understanding of how different types of insider incidents tend to evolve over time, we were looking at the incidents from the beginning of the insider’s relationship with the organization through the final resolution of the incident itself. And what we found, for almost every case that we’ve collected and analyzed, was the presence of concerning behaviors and activity that preceded the harmful act associated with the incident, such that if the organization had either known about them earlier or taken a different response, it might have taken the insider down a different path that did not cause harm to the organization.
So for those different types of insider incidents that we’ve studied, fraud, theft of intellectual property, and IT system sabotage, we’ve developed models that we’ve mined from the incidents that we’ve collected and analyzed for those particular incident types. And those models capture not only how the insider attempted to evade detection or how they actually caused harm, but also what their personal predispositions were, and what stressors they were experiencing that, combined with those predispositions, caused them to exhibit concerning behaviors, detectable things, either from a technical perspective or from a behavioral perspective, that the organization responded to in some maladaptive way.
Either by paying no attention to it, because they didn’t think it was something that could lead someone down the path of causing harm, or because they didn’t have a detection capability in place and simply didn’t know about it. Or they zagged when they should’ve zigged. A good example of this is in our IT sabotage model, where we found a pattern of disgruntled insiders being maladaptively responded to by their organizations, through things like sanctions, being demoted, being pulled off of important projects, having their access revoked. And those sanctions, those responses by the organization, led the insider to become even more disgruntled.
And you see patterns of this: increased disgruntlement, another sanction, the insider gets more and more disgruntled, and at a certain point they reach the tipping point and decide that now it’s time to strike back, motivated by revenge against the organization. Or they decide to leave the organization, and now they’re going to take some intellectual property with them to benefit a competitor. So it’s in those feedback loops between concerning behaviors and maladaptive organizational responses that we found opportunities for organizations to improve their security posture as it pertains to insider risk: by gaining a better understanding of those conditions that precede the harmful act, and by considering a much broader array of response options that might not lead someone to be motivated to cause harm, but might instead let them feel supported by the organization, help them understand their contractual relationship to the intellectual property that they’re creating, and really a myriad of other nuances for those different incident types.
So that’s, again, something we established early on. It’s these patterns of concerning behaviors and maladaptive organizational responses that exacerbate the threats and lead insiders to cause harm to the organization. The work is in finding those feedback loops, proposing different strategies, and then finding ways to measure the effectiveness of those alternative strategies.
To learn more about this episode of the Uncovering Hidden Risks podcast, visit https://aka.ms/uncoveringhiddenrisks.
For more on Microsoft Compliance and Risk Management solutions, click here.
To follow Microsoft’s Insider Risk blog, click here.
To subscribe to the Microsoft Security YouTube channel, click here.
Follow Microsoft Security on Twitter and LinkedIn.
Keep in touch with Raman on LinkedIn.
Keep in touch with Talhah on LinkedIn.