Cloud printing in 10 minutes


With Universal Print, Microsoft makes moving print to the cloud easy. Time spent managing on-premises print servers and fiddling with print drivers will soon be a thing of the past. Going further with YSoft OMNI Series™, a business can take advantage of the Universal Print feature in Microsoft 365 on the printers it has today, without an expensive print server. But what is it like to install YSoft OMNI Series? How long does it take? This article answers those questions and more.

 

OMNI Series consists of two parts: YSoft OMNI Bridge™ and YSoft OMNI Apps™.

 

YSoft OMNI Bridge, a serverless edge device, sits in your secure network, much like a router, minding its own business and doing its job, connecting to Microsoft 365 to pull your print jobs to a printer in your office.

 

The second part of OMNI Series is the OMNI UP365™ App. This is a cloud service for Universal Print that OMNI Bridge uses to do its job.


Now that we have that out of the way, let’s install it.

 

Installing OMNI Bridge
When you purchase OMNI Series, the OMNI Bridge is shipped to you. There are four quick installation steps to perform before connecting it to the OMNI UP365 cloud service.

 

  1. Unpack the box
  2. Plug OMNI Bridge into a power source and connect an ethernet cable
  3. Make a note of the verification code that will appear in OMNI Bridge’s display
  4. Go to https://microsoft.com/devicelogin, enter the verification code, and sign in to Microsoft 365

That’s it. OMNI Bridge’s LED light turns blue and the display says “Idle”. Now it’s time to connect your printers to the OMNI UP365 service.
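For the curious, step 4 is Azure AD's standard OAuth 2.0 device code flow: the bridge displays a short code, and you complete the sign-in in any browser. Here is a minimal sketch of that same flow using the MSAL Python library (the client ID and scope are hypothetical placeholders; the OMNI Bridge performs its own equivalent internally):

```python
# pip install msal
import msal

# Hypothetical app registration; OMNI Bridge ships with its own.
app = msal.PublicClientApplication(
    "00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/organizations",
)

# Ask Azure AD for a device code, then show the user where to enter it.
flow = app.initiate_device_flow(scopes=["User.Read"])
print(flow["message"])  # "...open https://microsoft.com/devicelogin and enter the code ..."

# Blocks until the sign-in at microsoft.com/devicelogin completes.
result = app.acquire_token_by_device_flow(flow)
print("Token acquired:", "access_token" in result)
```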

 

Connecting Printers to OMNI UP365
As you might expect, this part has a few more steps… oh wait, it is only four steps too! In this part, you will assign each cloud subscription you purchased to one of your printers.

 

  1. Navigate to http://omni.ysoft.com (available only to OMNI Series customers) and sign in with your Microsoft 365 admin credentials (requires the Printer Administrator role).
  2. Select Connectors – all the OMNI UP365 subscriptions that you purchased are listed.
  3. OMNI Series will auto-discover printers on your network and display them. (Manual registration is also possible.) Select the desired printer and confirm connecting it to an OMNI UP365 App. Each OMNI Bridge can connect up to twenty-five printers using twenty-five OMNI UP365 Apps. The display on OMNI Bridge will change to indicate the number of printers connected.
  4. Share the Universal Print / OMNI Series connected printer with your users (a scripted alternative is sketched after the screenshot below).

    [Screenshot: OMNI Series Connectors page]
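If you prefer to script step 4, printer shares can also be created through the Universal Print API in Microsoft Graph. Below is a minimal sketch, assuming a Graph access token with Universal Print admin permissions and a hypothetical printer name; the portal remains the simplest supported path, so treat this as illustrative:

```python
# pip install requests
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# Find the registered printer by display name (the name is a made-up example).
printers = requests.get(f"{GRAPH}/print/printers", headers=headers).json()["value"]
printer = next(p for p in printers if p["displayName"] == "Office-MFP-1")

# Create a share so users can discover the printer and print to it.
share = requests.post(
    f"{GRAPH}/print/shares",
    headers=headers,
    json={
        "displayName": "Office MFP 1",
        "printer@odata.bind": f"{GRAPH}/print/printers/{printer['id']}",
    },
)
share.raise_for_status()
print("Created share:", share.json()["id"])
```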

Users do not have to do anything differently to begin printing. They can CTRL-P or submit a print job as they normally do from their workstation and retrieve it at the same printer they’ve always used.

Congratulations! You have moved printing to the cloud. On your current devices, no less!

 

[RECAP] 24h Change Collaboration in Healthcare | Holographic Surgery Event


 




 


Earlier this week, 15 surgeons from across the globe performed 12 mixed reality-supported holographic surgical operations as part of a 24-hour Microsoft-hosted online event in collaboration with AP-HP group hospitals.


 


These surgeries, shown with real-life footage during the 24-hour event, demonstrated how mixed reality technologies such as Dynamics 365 Remote Assist on HoloLens 2 have tremendous potential to greatly enhance how surgeons operate, enrich the learning experiences of doctors, and ultimately improve patient outcomes. Through a custom app, surgeons were able to interact with anatomical images of their patients as holograms projected in real time in the operating room, as well as have critical access to interactive tutorials during the surgeries.


 


In parallel with the surgeries, hospital specialists, surgeons, partners, customers, public health experts, and patients from all over the world participated in diverse roundtables and interviews to share their own first-hand experiences with mixed reality.


 


Want to discover how Microsoft technologies can positively impact healthcare and become the new standard for collaboration in the operating room? Read more in the Microsoft Innovation Stories article: HoloLens project enables collaboration among surgeons worldwide | Innovation Stories (microsoft.com)


 


Microsoft CEO Satya Nadella also tweeted about the immense power and potential of mixed reality in healthcare. Indeed, it has “never been more important” to expand access to healthcare, and mixed reality is truly empowering people around the world to “transcend space” to collaborate globally. His tweet reads: “At a time when expanding access to healthcare has never been more important, it’s fantastic to see how surgeons around the world are using mixed reality to transcend space, in order to collaborate with colleagues and improve patient care.”


 




 


 


The event also spotlighted our global healthcare ecosystem of partners and customers who have helped healthcare organizations worldwide deploy Dynamics 365 Remote Assist on HoloLens 2, enabling surgeons and clinicians to be more productive and to get real-time support with 3D annotations from remote subject matter experts during critical moments.


 


Key highlights:



  • 15,000 viewers from 130 countries took part in this unique experience

  • 12 holographic surgeries

  • 15 roundtables and live interviews with 20 exclusive guests

  • 20 partners involved

  • 70+ health experts from all around the world joined the adventure as speakers.

     



  • Key speakers included:

    • Tom McGuinness – Corporate Vice President, Healthcare, Microsoft

    • Elena Bonfiglioli – Managing Director, Health & Life Sciences EMEA, Microsoft

    • Charlie Han – Product Lead, HoloLens, Microsoft

    • Charles Calestroupat – Public Sector Lead, Microsoft France

    • Martin Hirsch – General Director, AP-HP (France)

    • Cyrille Isaac-Sibille – Deputy & Member of the Social Affairs Commission, French National Assembly (France)

    • Dr. James Kinross – Consultant Surgeon and Senior Lecturer, Imperial College London (U.K.)

    • Pr. Igor Sauer – Head of Experimental Surgery, Charité Hospital (Germany)

    • Dr. Robert L. Hannan – Director of the Cardiovascular Surgery Innovation Laboratory, Nicklaus Children’s Health System (U.S.)

    • H.E. Dr. Amin Hussain Al Ameeri – Assistant Undersecretary of Health Policy and License, Ministry of Health and Prevention (United Arab Emirates)

    • H.E. Saaid Amzazi – Minister of Education, Vocational Training, Higher Education and Scientific Research (Morocco)

    • Pr. Thomas Gregory – Head of the Orthopedic & Traumatology Department, Avicenne Hospital, AP-HP (France)

    • Dr. Gao Yujia – Associate Consultant, Hepatobiliary and Pancreatic Surgery, NUHS (Singapore)






Worldwide Social Impact 


The tweets below are just a glimpse of the event buzz from around the world:


 


[Screenshots: tweets about the event from around the world]


Missed the event? Catch all the recordings on demand below:



  • Session recordings available here

  • Roundtables & surgeries available here

  • LinkedIn Elevate here


Want to explore using mixed reality in your organization?


Check out this form to connect with a Microsoft representative to find out how mixed reality can help your business.


 


Already using mixed reality solutions in your healthcare organization and want to share your story with the world?


Submit your story pitch at https://aka.ms/MRGuestBloggers today!


 


#MixedReality #HealthcareTech


 

2021: Printing in & out of the office, the smoke and the cloud


Printing in the new normal
Since COVID-19 changed our world, applications have been shifting away from corporate office servers, and users work much more from home and satellite offices. The print architecture must adapt to this new way of working as the cloud becomes its center. A print management solution hosted on a VM or server inside the company intranet becomes less relevant in the “New Normal”.
  
Smoke is not even a small Cloud 
A cloud print solution can only be trusted when it requires no independent environment (no VM, no gateway) and the whole solution relies on APIs. Most current print management solutions are on-premises, server-based products that merely extend their reach to the cloud by interfacing with some cloud API. What they lack is an agent inside the printer able to communicate directly with the cloud and display print release menus. They still use costly on-premises gateways.
 
Several printer vendors propose their own flavor of cloud-based print management, but many clients prefer a vendor-agnostic solution for investment protection reasons. 
  
Mission-critical service 
When a printer or MFP includes a complete print management solution, it covers at least user identification (ID, PIN, or card), rights management, pull printing, and usage tracking. High availability is necessary to handle cloud connection issues; it requires the solution’s agents to run inside the printers (edge computing), with no local PC or server required.


  
The revolution in 2021 
Is anything changing in the slow-moving office printing industry? Yes, a revolution more than an evolution, and it is Universal Print from Microsoft: a comprehensive, ambitious print infrastructure targeting companies that use the Microsoft 365 cloud, as well as software developers. Universal Print manages the overall security, the communication protocol between printers and the cloud, the spooling, and the availability of the print queues in Windows 10. It offers access to native printer capabilities such as duplex, paper tray selection, stapling, punching, and more; hence the “Universal” in the product name.
 
With Universal Print, it is now possible to offer true cloud-based print management with just printers, Microsoft 365, and Windows 10 client PCs, resulting in a pure cloud architecture.
  
When Microsoft releases Universal Print, clients will be able to migrate their print management to the cloud, benefit from out-of-the-box basic “click & print” capabilities, and complement them with innovative Celiveo solutions built on its APIs.
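To make the API angle concrete, here is a minimal sketch that reads a registered printer's capabilities (duplex modes and so on) through Microsoft Graph. Token acquisition is elided, and the snippet is illustrative rather than production code:

```python
# pip install requests
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <token with Printer.Read.All>"}

# List the tenant's Universal Print printers...
printers = requests.get(f"{GRAPH}/print/printers", headers=headers).json()["value"]

# ...and inspect the native capabilities Universal Print surfaces for each.
for p in printers:
    detail = requests.get(
        f"{GRAPH}/print/printers/{p['id']}?$select=displayName,capabilities",
        headers=headers,
    ).json()
    caps = detail.get("capabilities") or {}
    print(detail["displayName"], "duplex modes:", caps.get("duplexModes"))
```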
 
It will be necessary to differentiate smoke from cloud and to identify true extensions to Universal Print built on sound architecture, because choosing the right tools is key to a smooth, full, and true cloud migration.
 


Learn more about Celiveo.

Artificial intelligence hunts for insider risk (UNCOVERING HIDDEN RISKS – Episode 1)


Host:  Raman Kalyan – Director, Microsoft


Host:  Talhah Mir – Principal Program Manager, Microsoft


Guest:  Robert McCann – Principal Applied Researcher, Microsoft


 


The following conversation is adapted from transcripts of Episode 1 of the Uncovering Hidden Risks podcast.  There may be slight edits in order to make this conversation easier for readers to follow along.  You can view the full transcripts of this episode at:  https://aka.ms/uncoveringhiddenrisks


 


In this podcast we’ll take you on a journey through insider risks, uncovering some of the hidden security threats that Microsoft and organizations across the world are facing.  We will surface some best-in-class technology and processes to help you protect your organization and employees from risks from trusted insiders; all in an open discussion with top-notch industry experts.


 


RAMAN:  Hi, I’m Raman Kalyan, I’m with Microsoft 365 Product Marketing Team.


 


TALHAH:  And I’m Talhah Mir, Principal Program Manager on the Security Compliance Team.


 


RAMAN:  Welcome to episode one, where we’re talking about using artificial intelligence to hunt for insider risks within your organization. Talhah, we’re going to be talking to Robert McCann today.


 


TALHAH:  Yeah, looking forward to this!  Robert’s been here for 15 years; crazy-smart guy. He’s an applied researcher, a Principal Applied Researcher at Microsoft, and he’s been a core partner of ours, leading a lot of the work in the data science and research space.  In this podcast, we’ll go deeper into some of the challenges we’re coming across, how we’re planning to tackle them, and what they mean in terms of driving impact with the product itself.


 


RAMAN:  Robert, how long you’ve been in this space now?


 


ROBERT:  I’ve been doing science for about 15 years at Microsoft. The insider risk, about a year.


 


RAMAN:  Nice. What’s your background?


 


ROBERT:  I am an applied researcher at Microsoft. I’ve been working on various forms of security for many years. You can see all the gray in here, it’s from that.  I’ve done some communication security, like email filtering and email attachment filtering. I’ve done some work protecting Microsoft accounts and users’ accounts, a lot of reputation work. And then the last few years I’ve been on ATP products. So basically, babysitting corporate networks, looking to see if anybody has gotten through the security protections, post-breach stuff. So, that’s a lot of machine learning models across that whole stack. The post-breach thing is a lot about looking for suspicious behaviors on networks or suspicious processes. And then the last year or so, I wanted to try to contribute to the insider threat space.


 


RAMAN:  What does it mean to be an applied researcher?


 


ROBERT:  An applied researcher, that’s a propeller head. So we all know what propeller heads are. Basically, I get to go around and talk to product teams, figure out their problems, and then go try to do science on it and try to come up with technical solutions. AI is a big word. There’s a lot of different things that we do under that umbrella. A lot of supervised learning, a lot of unsupervised learning to get insights and to ship detectors. I basically get to do experiments, see how things would work, and then try to tech transfer it to a product.


 


RAMAN:  So, you said you spend most of your time in the external security space, things like phishing, ransomware, people trying to attack us from the outside. How is insider threat different? Do you ever think, “Wow, this isn’t what I expected,” or, “Here are some challenges,” or, “Here’s some cool stuff that I think I could apply.”


 


ROBERT:  Yeah. It’s a very cool space. Number one, because it’s very hard from a scientist’s perspective, which I enjoy.  The first thing that you hit on, really the fundamental thing that makes it hard, is that they’re already inside. They’re already touching assets. People are doing their normal work, and the insider threat might not even be malicious. It might be inadvertent. It’s a very challenging thing. It’s different than trying to protect a perimeter. It’s trying to watch all this normal behavior inside and look for any place that anybody might be doing anything that’s concerning from an internal-assets perspective.


 


RAMAN:  When you think about somebody doing something concerning, is it just like, hey, I’ve downloaded a bunch of files? Because today I might download a bunch of files. Tomorrow, I might just go back to my normal file activity. But if I look across an organization the size of Microsoft, that’s 200,000 people. That could produce a lot of noise, right? So how do you filter through that?


 


ROBERT:  So actually, the solutions that are in the product right now, and what we’re trying to leverage to improve the product, are built on a lot of AI.  There are very sophisticated algorithms that try to take documents and classify what’s in those documents, or customers might go and label documents, and then you try to use those labels to classify more documents. There’s a lot of very sophisticated deep learning, natural language processing stuff that we leverage. And those are very strong signals to try to see: okay, this behavior over here, that’s not so concerning, but this behavior right here, that’s a big deal. Now we need to fire an alert. Or maybe it’s a smaller deal on its own, but then I’ve got some sentiment based on how the person’s doing, the employee, and if I combine those things, now it becomes compelling. It’s a very hard noise reduction problem.
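To make the document-classification idea concrete, here is a toy sketch of the supervised approach Robert describes: train on documents the customer has labeled, then classify new ones. The snippets and labels are invented, and real systems use far richer deep learning models:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: documents a customer has labeled.
docs = [
    "quarterly revenue forecast draft, do not distribute",
    "team lunch menu and signup sheet",
    "merger due diligence notes, attorney-client privileged",
    "office holiday party photos",
]
labels = ["confidential", "general", "confidential", "general"]

# TF-IDF features plus logistic regression: simple, and transparent enough
# to inspect which words drove a prediction.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["draft acquisition forecast for the merger"]))
# -> ['confidential'] on this toy data
```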


 


RAMAN:  As you were talking, Robert, one thing that sort of occurred to me is I’ve had conversations with customers, and you mentioned this around leveraging, artificial intelligence and learning and helping the system learn. A lot of questions I get from customers is like, “What is artificial intelligence in this context? And how do I know that this is something that I should trust, or how is it different than maybe what I’m doing today?”


 


ROBERT:  I’ve seen this play out time and time again, on many, many occasions where a security team has tried to start leveraging AI to do smart detections. It’s a very different game. It’s not, “I have precise detection criteria, and if you satisfy that, then I understand what I did, and I understand the detection.” It is a very statistical machine, and you must assume it’s going to make mistakes sometimes. So, one key thing you need to be able to do to trust that machine is measure how well it’s doing. You have to have a way to babysit the thing, basically. And you have to set your expectations to understand that errors are going to happen, but there has to be an error bar that is met. So that’s basically what you’re babysitting against.


 


ROBERT:  Another very key thing is when it fires a detection, that thing can’t be opaque. It needs to explain how in the heck or why in the heck it thinks that this thing is a threat, right? So, the deep learning folks, like for image classification or natural language processing work, they sort of jumped on board real fast with the deep learning thrust without really worrying too much about being able to explain why that thing was classifying images the way it was. And they were ecstatic because they’re getting so much better results than they’ve gotten the decade before. Right? But then it came to the point where they started realizing, hey, I can game this thing, and I’ll prove it to you. And then you take a picture, and you change a few pixels, and then I make that thing classify the cat as somebody else. When you use a camera for detecting people, facial recognition, and identity verification, that becomes a serious problem.


 


They’ve now entered this phase, and it’s very hot right now: can you do these sophisticated models that also can … you can explain why they did what they did. And there’s a ton of science and a ton of work trying to crack open the black boxes, right? Those big, sophisticated learners. But you don’t have to go to that extreme. There’s all this other AI that works very, very well, is very effective, and I would say is probably the most common stuff that’s used and delivers the most value in industry, that’s not so opaque. And the models are simple enough, or transparent enough, or explainable enough that you can tell a customer, “I detected this threat because this, and this, and this happened.” Right? So, explainability is very key to trying to trust AI.


 


TALHAH:  That brings up another key question we get from customers a lot. This idea of transparency in the model, or explainability in the model, is a key attribute, right?  So it looks like we’re taking years and years of data science and research in this space and applying it to the models that we build.  Can you talk about that a little bit? For insider risk, what do you think constitutes a good model? What kind of explainability should be in that model so we can help our customers make the right decision on whether something is bad or not?


 


ROBERT:  Well, you have to put on the customer hat, which sometimes is hard as a scientist. A scientist might be satisfied saying, “The explanation for some prediction by some model is … feature 32 was this far away from a margin.” Okay? So, there are technical explanations for why a classification might happen. But the customer, they just want to know, “What are the actual human actions that caused that?” You’ve got to have a model with simple enough features that you can boil it down and say, “This person’s suspicious because they printed this document that’s highly confidential, and then they did it again two days later, and then they did it again three days later, and then they did it again four days later.” You must have that very human-intelligible output from your model, which is something that is very easy to skip if you don’t have explainability top of mind. You have to pick the appropriate technologies.
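Here is a toy sketch of what that human-intelligible output could look like for the repeated-printing example; the event schema and the threshold are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str   # e.g. "print"
    label: str    # e.g. "Highly Confidential"
    day: int

def explain_alerts(events, threshold=3):
    """Flag users who repeatedly print highly confidential documents,
    and build a human-readable explanation for each alert."""
    reasons = {}
    for e in events:
        if e.action == "print" and e.label == "Highly Confidential":
            reasons.setdefault(e.user, []).append(
                f"printed a '{e.label}' document on day {e.day}"
            )
    return {
        user: f"{user} is flagged because they " + "; then ".join(rs)
        for user, rs in reasons.items()
        if len(rs) >= threshold
    }

events = [Event("pat", "print", "Highly Confidential", d) for d in (1, 3, 6)]
print(explain_alerts(events))
```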


 


TALHAH:  Because it’s really about trying to abstract away all the science behind the scenes, right? We should just be able to easily explain to the customer, “Here’s what we saw.” How we detected it should be irrelevant to them. Here’s what is happening with this potential actor; let’s go make the decision on how to manage that risk.


 


RAMAN:  Yeah. And I think that is the key here, right? There’s the tech, which is how do I try to detect these things? And then there’s the person consuming the output of the tech, right? And typically, the person consuming the output of the tech is somebody who may be in HR or in legal, or maybe a security analyst who has to interface with HR and legal. And they may not be as sophisticated. I’m technical, but I’m obviously not as technical as Robert and probably you. And I don’t want to deep-dive into some algorithm to try to figure out, “Well, what’s going on here?” I want, “Hey, the risk score of this individual is high, and here’s the related activity that the system found, and this is why you should believe it.”


 


TALHAH:  Yeah. In fact, we’ve seen this in our customers. We’ve seen this in our own experience: the people that have to make the timely and informed decision on how to manage insider risk are oftentimes the business or HR or legal. They don’t want to get into the technical details behind the model that was used or this, that, or whatnot. They just need something that’s easy to understand in business terms so they can make that determination on what needs to happen. Rob and I were just on a call with a customer earlier this week, and they raised this question of why we can’t do supervised learning for these detectors, so I’d love to get your thoughts on some of the challenges, or maybe some of the opportunities, and how you’re looking at the types of learning models that you use for these detectors.


 


ROBERT:  One of the challenges is how much context it needs. And if you want labels, you got to be able to take and give that context to the customer when they have alerts, right? They need to be able to accurately say, “Hey, this alert’s right, and it’s easy for me to tell that, and I can do it in an efficient way because the product just gave me an explanation.” Now, once you’re able to sort of explain yourself and you’re able to give it to the customers, so they can efficiently triage, now you’re starting to crack open this sort of virtuous cycle where they can start giving you labels, and you can pull them back in house and you can start learning how to do supervised classification on this stuff. It’s very key. You need this sort of label generation mechanism, right?


 


ROBERT:  So, that’s key for opening up supervised learning. But it’s also key in that insider threats can be very subjective.  One tenant might want to see a certain activity, and another tenant might say, “Ah, that’s not important to me. Don’t tell me that, please. That’s noise.” Right? So now you’ve got to be able to do classification that’s customized per tenant, right? And each tenant doesn’t want to go in and fiddle with all your AI to make it work just right for them. An easier way for them to express what they want is to give you feedback. We explain detections, they give us feedback, and now we can start learning: okay, this supervised model works for these types of customers, this other supervised model works for those types of customers, and now we can get this customization game going as well. But all of those supervised learning techniques rely on labels, and you’ve got to do a good job explaining to your customers to get that feedback.
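A very simple way to picture the per-tenant side of that feedback loop is below; the thresholds and structure are invented, and the product's actual mechanism is far more sophisticated:

```python
from collections import defaultdict

class TenantFeedback:
    """Track per-tenant triage verdicts and mute alert types a tenant
    consistently dismisses as noise (a toy illustration of the
    label-feedback loop; the numbers are made up)."""

    def __init__(self, min_votes=10, dismiss_rate=0.9):
        self.votes = defaultdict(lambda: {"useful": 0, "dismissed": 0})
        self.min_votes = min_votes
        self.dismiss_rate = dismiss_rate

    def record(self, tenant, alert_type, dismissed):
        v = self.votes[(tenant, alert_type)]
        v["dismissed" if dismissed else "useful"] += 1

    def should_alert(self, tenant, alert_type):
        v = self.votes[(tenant, alert_type)]
        total = v["useful"] + v["dismissed"]
        if total < self.min_votes:
            return True  # not enough feedback yet; keep alerting
        return v["dismissed"] / total < self.dismiss_rate
```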


 


RAMAN:  One question, Robert, I also get is around … Today, a lot of the tools or a lot of my detection capabilities are reactionary. I got fired or I’m not happy, and I downloaded a bunch of stuff and I’m out of here. I resign. Right? But prior to that, maybe a month prior, or maybe it’s four months prior, or even three weeks prior, there might’ve been some activity that was happening that might’ve indicated that I was about to do it. Well, can you help me predict? Can you help me be more proactive? And I think, again I go back to this is a spectrum of things, right? We’re not going to know today, is Talhah bad tomorrow? Probably not. Right? But it could be like, hey, review time’s coming up. Didn’t get the bonus he wanted. He’s been working on insider risk for the last two years. And now it’s like, “Okay, I’m out of here, man. I’m going to go somewhere else.” So I guess the big question I want to ask is, how do we answer that for customers when they ask us that? What would be your answer?


 


ROBERT:  There’s something here, and Raman, I think you hinted at it: there’s past behavior that we could look at, and we could say, “Okay, from our past experience, this sort of sequence ends up 10% of the time with something that we didn’t like. So, if we see it in the future, let’s act on it.” So actually, on the technical side, we’re doing a lot of work on sequential pattern mining, and it boils down to just that: what are the sequences of activity, based on the type of context that Talhah mentioned (it might be sentiment, or it might be something else), that tend to lead up to things that in hindsight we know were bad? Okay, so we’re going to use that to predict in the future. But there’s also stuff that maybe we didn’t see before. So maybe we also look for sequences that are totally abnormal, get somebody on them, look at them, and get that labeling loop going, so we can understand whether a sequence is good or bad, so in the future we can protect other people with the same observations. But your question about being preemptive is a good one. And I think the sequential mining aspect is very fun from a technical standpoint, and I think it’d be very valuable for our customers, for sure.
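The core counting idea behind sequential pattern mining can be sketched in a few lines: tally how often each short activity sequence shows up in histories that ended in an incident versus overall. This toy version (invented data shapes, no real detection logic) only illustrates the concept:

```python
from collections import Counter

def mine_risky_sequences(histories, seq_len=3):
    """Rank short activity sequences by how often they preceded a known
    incident. `histories` is a list of (activities, had_incident) pairs,
    e.g. (["download", "print", "resign"], True)."""
    totals, bad = Counter(), Counter()
    for activities, had_incident in histories:
        seen = set()
        for i in range(len(activities) - seq_len + 1):
            seen.add(tuple(activities[i:i + seq_len]))
        for seq in seen:               # count each sequence once per history
            totals[seq] += 1
            if had_incident:
                bad[seq] += 1
    return sorted(
        ((seq, bad[seq] / totals[seq]) for seq in totals),
        key=lambda kv: kv[1],
        reverse=True,
    )
```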


 


RAMAN:  Because I think this is highlighting for me, from a tech perspective … You know, I’m a marketing guy, so I’m about selling it, selling the story. But as I think about this, what becomes very clear to me is that you can’t just use one thing, one signal. It can’t just be, “Oh, somebody is on an endpoint and they tried to copy something to a USB, and that might be bad.” There are multiple things going on, right? There’s sentiment analysis. There might be other activity. It’s who they’re talking to, how many times they’re trying to access stuff. Did they come into a building when they shouldn’t have been in the building?


 


All of these different elements can come into play, and to Talhah’s earlier point, because we’re dealing with employees, you can’t assume that everybody is bad, right? It could be, “Wow, I couldn’t get my PC to turn on at home, so now I’ve got to go to the office and do it there.” Maybe that was in the middle of the night. I don’t know. But I think that’s the big challenge in this space from my perspective: you just can’t rely on one set of signals. It has to be multiple signals, and the machine learning is key to surfacing, “this could be something that you might want to take a closer look at.” You’re always going to have a human element, I guess, right?


 


TALHAH:  That’s absolutely true. In fact, this reminds me, when we were sort of establishing the program at the company, we had a whole virtual team put together and we were trying to kind of ground ourselves on a principle, and one of the guys on the team actually proposed something that just stuck, which is this program should be built on the principle of assume positive intent but maintain healthy skepticism. What that effectively means is you just follow the data. That’s it. Don’t start off thinking everybody’s bad. Don’t start off thinking you’re going to catch bad guys. This is about looking at the data, as much of the data, as much of the context, to Rob’s point. And just follow that until you get to a point where it’s like, this looks odd. This looks potentially risky. And then you take that information, you surface it for the business with the right context, right explainability in the model so that they can make the decision.


 


RAMAN:  I think presenting that in a way that allows you to make that informed decision does two things. One, it gives you the ability to say, “Hey, this might be bad for me,” but two, it also allows you to filter out the noise and say, “Hey, not everything is bad.” Because what I also hear is, “I’m done with …” Let’s imagine using a data loss prevention tool to try to detect insider risk, right? That’s challenging because, A, that’s just one set of signals; it’s a very siloed approach. And B, you’re going to be overwhelmed with a ton of alerts because it’s very rules-based, right? It’s not [crosstalk] using all this machine learning type of stuff. How do you prevent alert fatigue? And I think that’s where you need this combination of signals, to not only look at what might be potentially problematic but present it in a way that lets you make that informed decision.


 


RAMAN:  So, Rob, as we look forward, there are a number of different types of detections that we could potentially look at. One is sequential modeling; that’s an interesting one, and we’d love for you to explain it. The other one is around this concept of low and slow. From what I understand, it’s not about a big burst of, “I come in today, I download a thousand files, and I’m out of here.” It’s more, “I’m now a little bit irritated, and over the next six months, I’m going to download a file here, a file there, 10 files here.” I’d love for you to deep-dive into that.


 


ROBERT:  Yeah. I mean, those are the really interesting cases, right? Those are the people that are being very stealthy, and the people that we want to try to detect. It’s a little bit different of a game. Like you said, with the bursty stuff, did they do something abnormal relative to themselves, or did they go over some globally agreed-upon threshold where this thing is just bad behavior? That’s a different game than looking at somebody who’s trying to stay under the radar and taking the long-term approach. You’ve got to model things a little differently. Number one, you’ve got to look at longer history. I’m not looking at bursts of daily activity. I’m looking at what they’ve done in the long term. So now you have engineering issues, because you’ve got to have the scale to look at everybody’s rich, long history. But even after you get that and you are monitoring somebody, it’s very hard to tell.  In stock markets, how do you tell the difference between two flat lines where one’s a good investment and one’s not? It’s hard because it’s low and it’s slow. The behavior is subtle.


 


ROBERT:  One thing that we’re looking at is how we can tighten the screws when we do anomaly detection. It’s easy to tighten anomaly detection to the level of detecting a burst. Okay? You can do that. Now we want to tighten anomaly detection to the point where we can pick out two flat lines and tell the difference between good behavior and bad behavior. What does normal mean? Normal has got to be right in between those two. How do we find that normal?  The way that we’re doing it is by modeling people based upon what’s normal for groups of similar employees. How tightly can we say what’s normal behavior for devs, so that we can have a model that looks at low and slow normal work behavior for devs, and low and slow, slightly worse than normal behavior for devs, and pick those apart?  You’ve just got to do tighter anomaly detection, and you’ve got to compare people to groups; that’s going to give you a definition of normal behavior that’s tight enough that, even though they’re low and slow, you’re going to be able to pick out the different behavior over a long period of time.
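A toy sketch of the peer-group baseline idea: compare each user's long-run activity level to others in the same role and flag large deviations. The statistics here are deliberately simple; the real models are much tighter:

```python
import statistics

def peer_group_anomalies(weekly_counts, groups, z_threshold=3.0):
    """Score each user's long-run activity against peers in the same role.
    `weekly_counts` maps user -> average weekly count of some action
    (e.g. confidential files accessed); `groups` maps user -> role."""
    by_group = {}
    for user, role in groups.items():
        by_group.setdefault(role, []).append(weekly_counts[user])

    anomalies = []
    for user, role in groups.items():
        peers = by_group[role]
        if len(peers) < 3:
            continue  # too few peers for a meaningful baseline
        mean = statistics.mean(peers)
        stdev = statistics.stdev(peers) or 1e-9  # avoid division by zero
        z = (weekly_counts[user] - mean) / stdev
        if z > z_threshold:
            anomalies.append((user, round(z, 2)))
    return anomalies
```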


 


TALHAH:  So Rob, being a long-term researcher, what are some of the pet peeves or some of the things that really have annoyed you about some of the product pitches you’ve seen where they over-promise or the way they position AI?  I’d love to hear some of the stories that you have on what kind of just gives you the shivers.


 


ROBERT:  As scientists, we have a community, and we go talk to each other, and you get to know people, and you figure out what’s really behind that magic sauce. And it’s not as impressive-sounding as the marketing. So that means the marketing is doing a good job, I guess, right? But that’s a pet peeve from a scientist’s standpoint. I mean, a good sign to look for, to prove that stuff out, is scientific activity. If they say they’re doing good science, they probably have scientists working for them. And if they have scientists working for them, then those scientists like to publish or file patents.  You should see some scientific evidence happening there. I think that’s a telltale sign. So that’s one pet peeve: overselling how much is going on there.


 


Another pet peeve is this idea that machine learning or AI is a magic bullet that you just throw stuff at, and it magically gives you exactly what you want. It doesn’t work that way. Computers are basically just big, really fast calculators, right? And we’ve figured out some algorithms where they can look at some data and pick out some patterns quickly, but that’s what they are. They’re pattern finders. The scientific community has been clever in how it takes that big, fancy calculator, puts it to work on business decisions that are crucial, and stitches the pieces together. Like we talked about: here’s a module that does sentiment analysis. Big, fancy calculator, right? Here’s a module that does confidentiality of the file. Big, fancy calculator. And then there’s all this business logic that comes in and has to stitch that together to make a good decision. It’s not just the AI. It’s the stitching together in the appropriate ways that solves your business problem; that’s really the magic sauce. So that’s another pet peeve: you just throw stuff at AI and you suddenly have a million-dollar business. It doesn’t work that way. You’ve got to put these components together and work hard on them because they’re challenging, and you’ve got to stitch them together correctly. It’s the whole ecosystem.


 


RAMAN:  And that’s actually an interesting point, Robert. I like that because in a way, what you’d say is:  I’m creating clothing, right? And I’ve got different types of fabric, different types of zippers. And I stitch it together and I produce it and it’s like, “Hey, here you go. Here’s your shirt.” And somebody says, “I don’t like it that way. I want to be able to stitch it in a different way.” Or if new fabric comes out, I’m going to use that in new types of clothing. And I think this is what to me is interesting about what you just said, which is you’ve got these different calculators that are looking at different parts of the puzzle, right? Taking different signals in, and then the secret sauce is how do you stitch it together to produce something that you might want to consider as being an anomaly or abnormal behavior, but then be able to provide feedback back into that calculator to say, “Hey, I didn’t like that.” Or “This didn’t work for me. Stitch it together somewhat differently.”


 


ROBERT:  Yeah, you’re right. I mean, how do you trust these black boxes? It’s all that logic that babysits it. You’ve got to have some guardrails in there, so the thing doesn’t go off the rail and mess up with everything else that you’re stitching together. It’s that sort of business logic on top that’s super, super valuable and just as impressive to me as the AI under the hood, to tell you the truth.


 


RAMAN:  Robert, appreciate you being here today. This has been a great conversation on the tech. As you think about the future, five years from now, what are your projections in terms of what might be different from what we have today?


 


ROBERT:  Yeah, that’s a great question. I think some of the big things are solving these sorts of challenging tweaks, like the multi-user problem Talhah mentioned. We solve multi-users. We get good enough anomaly detection that we can pick off the low and slow, even differentiate that. One thing that would be super powerful, if you get there, is this sort of feedback coming in, right? Because once you get this feedback loop going, then you crack open the AI door for all kinds of algorithms. There’s a lot more supervised stuff that we could use and leverage that would make us even more powerful, which would give better detectors to people, which would give us more labels to get even more powerful. And when you get that mutual synergy going, I think the detections skyrocket.


 


And then one other thing is the attack space. Industry has these threat matrices, right? And they sort of have these benchmarks that they’re trying to work against, and they’re writing down simple rules to detect that, and they’re using sophisticated AI targeted at known bad behaviors. I see that sort of landscape roadmap starting to happen in the insider threat space as well. Because it’s going to prioritize what we do from a product standpoint and from a research standpoint, and it’s going to be an input to our models: “Hey, this is known bad stuff. We better be able to detect that.” Stitch things together to detect those sequences.


 


 


To learn more about this episode of the Uncovering Hidden Risks podcast, visit https://aka.ms/uncoveringhiddenrisks.


For more on Microsoft Compliance and Risk Management solutions, click here.


To follow Microsoft’s Insider Risk blog, click here.


To subscribe to the Microsoft Security YouTube channel, click here.


Follow Microsoft Security on Twitter and LinkedIn.


 


Keep in touch with Raman on LinkedIn.


Keep in touch with Talhah on LinkedIn.


Keep in touch with Robert on LinkedIn.

AKS on Azure Stack HCI February Update


Hi All,


 


I am excited to let you know that the first update of 2021 for AKS on Azure Stack HCI is now available.  You can evaluate the AKS on Azure Stack HCI February Update by registering for the Public Preview here: https://aka.ms/AKS-HCI-Evaluate (If you have already downloaded AKS on Azure Stack HCI – this evaluation link has now been updated with the February Update).


 


Some of the new changes in the AKS on Azure Stack HCI February Update include:


 


Support for completely Static IP environments


With this update we no longer require a DHCP server for any part of the AKS-HCI infrastructure.  When you deploy AKS-HCI, you can now specify whether you want to use DHCP or static IP addresses.  If you choose to use static IP addresses, we will ask you to provide two IP address ranges.  The first range will be used for the Kubernetes control plane and worker node virtual machines that we create, while the second range will be used for the containerized applications that you deploy on top of AKS-HCI.
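Before deployment, it can help to sanity-check the two ranges you plan to enter. This small planning helper is not part of AKS-HCI itself; the subnet and pools below are hypothetical examples:

```python
import ipaddress

# Hypothetical plan: node-VM pool and workload/VIP pool carved from one subnet.
subnet = ipaddress.ip_network("10.0.0.0/24")
node_pool = (ipaddress.ip_address("10.0.0.10"), ipaddress.ip_address("10.0.0.99"))
vip_pool = (ipaddress.ip_address("10.0.0.100"), ipaddress.ip_address("10.0.0.200"))

# Each pool must sit inside the subnet and be ordered start <= end.
for name, (start, end) in {"node pool": node_pool, "VIP pool": vip_pool}.items():
    assert start in subnet and end in subnet, f"{name} falls outside {subnet}"
    assert start <= end, f"{name} start is after its end"

# The pools must not overlap: one range has to end before the other begins.
assert node_pool[1] < vip_pool[0] or vip_pool[1] < node_pool[0], "IP pools overlap"
print("IP plan looks consistent.")
```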


 


Integration with Active Directory


When we first launched the public preview of AKS-HCI, we talked about all the security work we had in the pipeline (in this blog post: https://techcommunity.microsoft.com/t5/azure-stack-blog/security-capabilities-in-azure-kubernetes-service-on-azure-stack/ba-p/1705759).  With the February Update for AKS-HCI we are now introducing integration with Active Directory.  This means that when you create a new Kubernetes cluster you can now enable Active Directory integration.  The effect of this is:



  • Your kubeconfig file will no longer contain a secret hash

  • You can specify users or user groups in your Active Directory environment who have access to the Kubernetes cluster

  • You can even use Active Directory and Kubernetes RBAC to give users in your environment limited access to only a subset of deployments in your Kubernetes cluster (see the sketch below)
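As a quick sketch of that last point: once AD integration is enabled, you can bind an Active Directory group to Kubernetes RBAC like any other group subject. This example uses the Kubernetes Python client; the namespace and group name are hypothetical, and the exact group-name format depends on your AD setup:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # your kubeconfig no longer embeds a secret hash
rbac = client.RbacAuthorizationV1Api()

# Give a hypothetical AD group read-only access to a single namespace.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "dev-viewers", "namespace": "dev"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "view",  # built-in read-only ClusterRole
    },
    "subjects": [{
        "kind": "Group",
        "name": "CONTOSO\\aks-dev-viewers",  # hypothetical AD group
        "apiGroup": "rbac.authorization.k8s.io",
    }],
}
rbac.create_namespaced_role_binding(namespace="dev", body=role_binding)
```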


 


Evaluation in Azure


Finally, many of you let us know that while you were interested in learning more about AKS-HCI, it was hard to get the hardware necessary for a full AKS-HCI deployment.  In response, we have created a guide for evaluating AKS-HCI inside an Azure VM: https://aka.ms/aks-hci-evalonazure


 


Obviously, this is not meant for production environments.  And if you want to run containers on Azure you should just be using AKS!  But this provides an easy way to get up and running with AKS-HCI with zero hardware – so you can figure out how it could work for you.


 


There have been several other changes and fixes that you can read about in the February Update release notes (https://github.com/Azure/aks-hci/releases/tag/AKS-HCI-2102).


 


Once you have downloaded and installed the AKS on Azure Stack HCI February Update, you can report any issues you encounter and track future feature work on our GitHub project at https://github.com/Azure/aks-hci.


 


I look forward to hearing from you all!


 


Cheers,


Ben