by Contributed | May 12, 2021 | Technology
This article is contributed. See the original author and article here.
IT administrators can now use filters in Microsoft Endpoint Manager to target apps, policies and other workload types to specific devices. Available in public preview with the May release of Microsoft Intune, the filters feature gives IT admins more flexibility and helps them protect data within applications, simplify app deployments, and speed up software updates.
Microsoft built filters with a consistent and familiar rule authoring experience for admins who use Azure Active Directory dynamic device groups or are discovering the new filters capability in Conditional Access. With filters, administrators can achieve granular targeting of policies and applications to users on specific devices.
For example, this new capability makes it easier for administrators to comply with their organizational policies and compliance requirements by deploying:
- A Windows 10 device restriction policy to just the corporate devices of users in the Marketing department while excluding personal devices
- An iOS app to only the iPad devices for users in the Finance group
- An Android compliance policy for mobile phones to all users in the company but exclude Android-based meeting room devices that don’t support the settings in that mobile phone policy
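For illustration, the corporate-device scenario in the first bullet above could be expressed with a filter rule along these lines (the property name reflects the Intune filter rule syntax; check the current filter documentation for the full list of supported properties and values):

```
(device.deviceOwnership -eq "Corporate")
```

The rule is evaluated per device at check-in, and the assignment applies only where it evaluates to true.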
Filters work in conjunction with Azure AD group assignments or the “All users” or “All devices” groups to dynamically filter the assignment to only apply to a subset of devices during check-in. Dynamic filtering means that devices can be targeted with the right security policy and applications faster than ever before.
Filters are reusable objects that can be applied to many workload types across the Endpoint Manager admin center. IT administrators can create a filter object using expressions across a set of supported device properties and then apply that filter to an app or policy assignment. When devices check in to receive the policy, the filter evaluation engine determines applicability, either applying or skipping the policy based on the filter result. Results are reported back to the Endpoint Manager admin center so administrators can track policy and app deployment.
Workflow:



The filters feature is being rolled out with full support across platforms (Windows, Android, iOS, and macOS) and an initial set of supported workloads and filter properties. Based on customer feedback, we will expand the capabilities across workloads in the coming months.
We value the input we received from customers in private preview. Here are a few highlights:
“We are starting to use filters a lot more. We are really looking forward to the previews coming up.”
“The Endpoint Manager filters feature has solved the challenges we faced with managing user-targeted settings and apps for users who have access to both a laptop and virtual desktop. For example, we can now apply a filter to prevent a user-assigned VPN profile from being applied when a user signs into their virtual desktop.”
“The MEM filters feature allows more granularity in assigning our policies as well as applications. Filters helped us adopt MEM even further in our very mixed environment and allowed us to create a better-targeted approach. Filters also addressed a specific use case where we had to exclude virtual devices and critical systems from some of our assignments.”
“At Krones we support a large number of different use cases and it has always been difficult to find a way to target the specific workloads. Besides we have to ensure, that all employees get the tools they need for their work, like configurations, apps, certificates or profiles. This is exactly where the Filters feature plays a key role to accomplish difficult targeting scenarios. Filters helped us achieve complex assignment models eliminating the need of manual assignment work. As a result, our IT staff saved important time and is now able to focus on further strategic and technical design key aspects for a truly modern workplace within our organization.” –Roman Kleyn, Head of Workplace Design at Krones AG
As always, we appreciate your feedback. Please feel free to post your comment here or tag me on LinkedIn.
To learn more about AAD, go here: https://aka.ms/RSACIdentity2021
Howdy folks!
We’re excited to be joining you virtually at RSA Conference 2021 next week. Security has become top-of-mind for everyone, and Identity has become central to organizations’ Zero Trust approach. Customers increasingly rely on Azure Active Directory (AD) Conditional Access to protect their users and applications from threats.
Today, we’re announcing a powerful bundle of new Azure AD features in Conditional Access and Azure. Admins can gain even more control over access in their organizations, manage a growing number of Conditional Access policies, and use Azure AD authentication for virtual machines (VMs) deployed in Azure. These new capabilities enable a whole new set of scenarios, such as restricting access to resources from privileged access workstations or even specific countries or regions based on GPS location. And with the capability to search, sort, and filter your policies, as well as monitor recent changes to your policies, you can work more efficiently. Lastly, you can now use Azure AD login for your Azure VMs and protect them from being compromised or used in unsanctioned ways.
Here’s a quick overview of the features we’re announcing today:
Public Preview
Named locations based on GPS: You can now restrict access to sensitive resources from specific countries or regions based on the user’s GPS location to meet strict data compliance requirements.
Filters for devices condition: Apply granular policies based on specific device attributes using powerful rule matching to require access from devices that meet your criteria.
Enhanced audit logs with policy changes: We’ve made it easier to understand changes to your Conditional Access policies by adding modified properties to the audit logs.
Azure AD login to Linux VMs in Azure: You can now use Azure AD login with SSH certificate-based authentication to SSH into your Linux VMs in Azure with additional protection using RBAC, Conditional Access, Privileged Identity Management and Azure Policy.
General Availability
Named locations at scale: It’s now easier to create and manage IP-based named locations with support for IPv6 addresses, an increased number of allowed ranges, and additional checks for malformed addresses.
Search, sort, and filter policies: As the number of policies in your tenant grows, we’ve made it easier to find and manage individual policies. Search by policy name and sort and filter policies by creation/modified date and state.
Azure AD login for Windows VMs in Azure: You can now use Azure AD login to RDP to your Windows 10 and Windows Server 2019 VMs in Azure with additional protection using RBAC, Conditional Access, Privileged Identity Management and Azure Policy.
We hope that these enhancements empower your organization to achieve even more with Conditional Access and Azure AD authentication. And as always—we’re always listening to your feedback to make Conditional Access even better.
Named locations based on GPS location (Public Preview)
This capability empowers organizations to meet strict compliance regulations that limit where specific data can be accessed. Due to VPNs and other factors, determining a user’s location from their IP address is not always accurate or reliable. GPS signals enable admins to determine a user’s location with higher confidence. When the feature is enabled, users will be prompted to share their GPS location via the Microsoft Authenticator app during sign-in.
Conditional Access named locations are more versatile than ever with the addition of new GPS-based country locations. When selecting countries or regions to define a named location that will be used in your Conditional Access policies, you can now decide whether to determine the user’s location by their IP address or GPS location through the Authenticator app. This feature will be available in public preview later this month.
To configure a GPS-based named location for Conditional Access:
- Go to Azure AD -> Security -> Conditional Access -> Named locations
- Click + Countries location to define a new named location defined by country or region
- Select the dropdown option to Determine location by GPS coordinates (Preview)
- Select the countries you want to include in your named location and click Create.
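The same configuration can be scripted. Below is a minimal sketch of the request body for the Microsoft Graph countryNamedLocation resource (the endpoint and property names follow the Graph documentation at the time of writing; verify them against the current Graph reference before use):

```python
import json

# Build the request body for a GPS-evaluated countries named location.
GRAPH_ENDPOINT = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations"

def gps_named_location(display_name, country_codes):
    """Return the JSON body for a countries named location evaluated by GPS."""
    return {
        "@odata.type": "#microsoft.graph.countryNamedLocation",
        "displayName": display_name,
        "countriesAndRegions": country_codes,          # ISO 3166-1 alpha-2 codes
        "countryLookupMethod": "authenticatorAppGps",  # vs. "clientIpAddress"
        "includeUnknownCountriesAndRegions": False,
    }

payload = gps_named_location("GPS - Germany only", ["DE"])
print(json.dumps(payload, indent=2))
# POST `payload` to GRAPH_ENDPOINT with a token that holds Policy.ReadWrite.ConditionalAccess.
```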

Once you’ve created a GPS-based country named location, you can use Conditional Access to restrict access to selected applications for sign-ins within the named location. In the locations condition of the policy, select the named locations where you want your policy to apply.
When users sign-in, they’ll be asked to share their GPS location through the Authenticator app to access applications in scope of the policy.

At left, users are asked in the browser to share their location; at right, the Authenticator app prompts them to share it.
Filters for devices (Public Preview)
Next, we’re excited to release a powerful new Filters for devices condition. With filters for devices, security admins can enhance protection of their corporate resources to the next level by targeting Conditional Access policies to a set of devices based on device attributes. This capability unlocks a plethora of new scenarios we have envisioned and heard from customers, such as restricting access to privileged resources from privileged access workstations. Additionally, organizations can leverage the device filters condition to secure use of Surface Hubs, Teams phones, Teams meeting rooms, and all sorts of IoT devices. Filters were built with a consistent and familiar rule authoring experience for admins who use Azure AD dynamic device groups or are discovering the new filters capability in Microsoft Endpoint Manager.
In addition to the built-in device properties such as device ID, display name, model, MDM app ID, and more, we’ve provided support for up to 15 additional extension attributes. Using the rule builder, admins can easily build device matching rules using Boolean logic, or they can edit the rule syntax directly to unlock even more sophisticated matching rules. We’re excited to see what scenarios this new condition unlocks for your organization! This feature will be available before the end of this month.
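As a hypothetical illustration, a policy restricting access to privileged resources could use a rule that matches workstations tagged through an extension attribute (the attribute number and the "PAW" value are assumptions for this example):

```
device.extensionAttribute1 -eq "PAW"
```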
Enhanced Conditional Access audit logs with policy changes (Public Preview)
Another important aspect of managing Conditional Access is understanding changes to your policies over time. Policy changes may cause disruptions for your end users, so maintaining a log of changes and enabling admins to revert to previous policy versions is critical. Today, we’re announcing that in addition to showing who made a policy change and when, the audit logs will also contain a modified properties value so that admins have greater visibility into what assignments, conditions, or controls changed. Check it out today!

If you want to revert to a previous version of a policy, you can copy the JSON representation of the old version and use the Conditional Access APIs to quickly change the policy back to its previous state. This is just the first step towards giving admins greater back-up and restore capabilities in Conditional Access.
Named locations at scale (General Availability)
We’re also announcing the general availability for IPv6 address support in Conditional Access named locations. We’ve made a bunch of exciting improvements including:
- Added the capability to define IPv6 address ranges, in addition to IPv4
- Increased limit of named locations from 90 to 195
- Increased limit of IP ranges per named location from 1200 to 2000
- Added capabilities to search and sort named locations and filter by location type and trust type
Additionally, to prevent admins from defining problematic named locations, we’ve added additional checks to reduce the chance of misconfiguration:
- Private IP ranges can no longer be configured
- Overly large CIDR masks are prevented (prefix must be from /8 to /32)
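As an illustrative sketch (not Microsoft's implementation), checks like the two above can be expressed with Python's standard ipaddress module:

```python
import ipaddress

def validate_named_location_range(cidr: str) -> bool:
    """Reject private ranges and overly large CIDR masks, as described above."""
    net = ipaddress.ip_network(cidr, strict=False)
    if net.is_private:
        return False  # private IP ranges can no longer be configured
    if isinstance(net, ipaddress.IPv4Network) and not (8 <= net.prefixlen <= 32):
        return False  # prefix must be from /8 to /32
    return True

print(validate_named_location_range("8.8.8.0/24"))  # public /24: allowed
print(validate_named_location_range("10.0.0.0/8"))  # private range: rejected
print(validate_named_location_range("8.0.0.0/4"))   # mask too large: rejected
```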
As a result of these improvements, admins can define more accurate boundaries for their Conditional Access policies, increasing Conditional Access coverage and reducing misconfigurations and support cases.

Search, sort, and filter policies (General Availability)
We know that as you deploy more Conditional Access policies, managing a growing list of policies can become more difficult. That’s why we’re excited to give admins the ability to search policies by name, and sort and filter policies by state and creation/modified date. Also, as part of General Availability we will be gradually rolling out the feature to Government clouds. Say goodbye to scrolling through a long list of policies!

Azure AD login for Azure VMs (General Availability – Windows, Preview Update – Linux)
Organizations deploying virtual machines (VMs) in the cloud face a common challenge: how to securely manage the accounts and credentials used to log in to these VMs. To protect your VMs from being compromised or used in unsanctioned ways, we are excited to announce General Availability of Azure AD login for Azure Windows 10 and Windows Server 2019 VMs. We are also announcing an update to the preview of Azure AD login for Azure Linux VMs. These features are now available in Azure Global and will be available in Azure Government and China clouds before the end of this month.
With the preview update for Azure Linux VMs, you can use either user or service principal-based Azure AD login with SSH certificate-based authentication for all major Linux distributions. As a result, you don’t need to worry about credential lifecycle management since you no longer need to provision local accounts or SSH keys. And with Azure RBAC, you can authorize who should have access to your VMs and whether they get administrator or standard user permissions.
Using Conditional Access, you can require MFA or managed devices and prevent risky sign-ins to your VMs. Additionally, you can deploy Azure Policies to require Azure AD login if it wasn’t enabled during VM creation. You can also audit existing VMs where Azure AD login isn’t enabled, and track VMs when a non-approved local account is detected on the machine.

We hope that these new Azure AD capabilities in Conditional Access and Azure make it even easier to secure your organization and unlock a new wave of scenarios for your organization.
As always, join the conversation in the Microsoft Tech Community and share your feedback and suggestions with us. We build the best products when we listen to our customers!
Best regards,
Alex Simons (@Alex_A_Simons)
Corporate VP of Program Management
Microsoft Identity Division
Learn more about Microsoft identity:
Our team at EDGE Next has been developing with Azure Digital Twins since the platform’s inception and has made the Azure service a core component of our PropTech platform. From energy optimization to employee wellbeing, we’ve continued to innovate on top of Azure Digital Twins to provide our customers with a seamless smart buildings platform that puts sustainability and employee wellbeing front and center. We’ve upgraded our platform to take advantage of the latest Azure Digital Twins capabilities – like more flexible modeling and data integration options – that have equipped us to advance our goals of a reduced environmental footprint and increased workforce satisfaction. We’ve distilled some key learnings from our enhancements, and we’d like to share our ideas with any team developing with Azure Digital Twins, regardless of industry vertical.
The EDGE Next platform
EDGE Next is a PropTech company that was spun off from EDGE, a real estate developer that shares our goal of connecting smart buildings that are both good for the environment and for the people in them.
Each EDGE project aims to raise the bar even higher to be the leader in the real estate market from a sustainability and wellbeing perspective. The EDGE Next platform provides a seamless way of ingesting massive amounts of IoT data, analyzing the data and providing actionable insights to serve both EDGE branded and non-EDGE branded (brownfield) buildings. EDGE Next currently has 13 buildings deployed, including Scout24, a tenant in the recently completed EDGE Grand Central Berlin building. We also have several pilots running, including with the Dutch Ministry of Foreign Affairs, IKEA and Panasonic.
At the heart of the EDGE Next platform is Azure Digital Twins, the hyperscale cloud IoT service that provides the “modeling backbone” for our platform. We leverage the Digital Twins Definition Language to define all aspects of our environment, from sensors to digital displays. Azure Digital Twins’ live execution environment is where we turn these model definitions into real buildings’ digital twins, brought to life by device telemetry. Finally, the latest data from these buildings is pushed to onsite digital signage and accessible via our platform. Azure Digital Twins played a vital role in enabling key capabilities of the EDGE Next platform, like allowing our implementation teams to onboard customer buildings without support from the EDGE Next development team (self-service onboarding) and to integrate and manage customer devices in a Bring Your Own Device (BYOD) model. These capabilities are crucial to our platform’s onboarding experience and have brought the time it takes to onboard a customer’s building onto the platform down from weeks to just a couple of minutes.
One of the first buildings to use the platform was EDGE Next’s headquarters, EDGE Olympic in Amsterdam, the very first in a new generation of healthy and intelligent buildings. This hyper-modern structure is used as a living lab to help facilitate real scenarios for the team to materialize incubational ideas into concrete offerings. We leverage a host of sensors throughout the building that measure air quality, light intensity, noise levels and occupancy to create transparency around people counting, footfall traffic and social distancing metrics for COVID-19 scenarios.

EDGE Olympic building (Amsterdam, NL)
Data pathways in the platform
To give you an idea of how our platform works, let’s walk through the path of the data before and after it reaches Azure Digital Twins. In the diagram below, you can see how Azure Digital Twins fits into our platform architecture, with emphasis on the data sources and destinations.

Data sources
The platform enables telemetry ingestion from a collection of IoT Hubs, but also allows messages to flow in from other clouds and APIs (like Azure Maps for outdoor conditions) in inter-cloud and intra-cloud integration scenarios. Given the wide range of different vendor specific APIs that the EDGE Next platform must cater to, our engineering team opted to implement a generic API connector – agnostic to the vendor implementation – and fully rely on a low-code, configuration-driven code base built on top of Azure Functions.

Once the data has been collected using the ingestion mechanisms, it passes through a mapping profile which transforms the raw telemetry messages to known typed messages based on the associated device twins inside the Azure Digital Twins instance. The process of mapping the incoming data is completely driven by low-code JSON Patch configurations, which enables Bring Your Own Device (BYOD) support without additional mapping code logic.

Each message that comes into the ingestion pipeline needs to contain specific fields or it will be rejected. The mapper consults a registry containing all data points in the system and their respective mapping profile configuration to be used for the transformation. The mapper not only transforms the values to the desired internal contract format, but also performs inline unit conversion functions (such as parts per billion to micrograms per cubic meter).
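As a rough sketch of this mapping stage (the field names, registry shape, and contracts here are illustrative assumptions, not EDGE Next's actual implementation), the transformation and an inline unit conversion could look like:

```python
MOLAR_VOLUME_L = 24.45  # litres/mol at 25 C and 1 atm

def ppb_to_ugm3(ppb: float, molar_mass_g: float) -> float:
    """Standard conversion: ug/m3 = ppb * molar mass / molar volume."""
    return ppb * molar_mass_g / MOLAR_VOLUME_L

# Minimal "mapping profile" registry keyed by data-point id (illustrative).
REGISTRY = {
    "sensor-42/co2": {
        "target_field": "co2_ugm3",
        "convert": lambda v: ppb_to_ugm3(v, 44.01),  # molar mass of CO2
    },
}

def map_message(raw: dict) -> dict:
    """Transform a raw vendor message into the internal typed contract."""
    profile = REGISTRY.get(raw.get("pointId"))
    if profile is None or "value" not in raw:
        raise ValueError("message rejected: unknown point or missing fields")
    return {
        "pointId": raw["pointId"],
        profile["target_field"]: round(profile["convert"](raw["value"]), 2),
        "timestamp": raw.get("timestamp"),
    }

mapped = map_message({"pointId": "sensor-42/co2", "value": 1000.0,
                      "timestamp": "2021-05-12T00:00:00Z"})
```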
The messages are passed through our Filters stage (detailed below) and finally ingested into Azure Digital Twins.
Data destinations
Once Azure Digital Twins is updated with vendor data and sensor telemetry, the resulting events and twin graph state are accessible via a rich set of APIs that supports and enables multi-channel data delivery. The data is offered in three ways: a web-based portal for visualizations and actionable insights, a digital signage solution for narrowcasting onsite, and a set of data APIs that allow our customers to pull their data and integrate it with their custom solutions.
EDGE Next portal

The EDGE Next Portal is where most of our customers go to get actionable insights based on retrospective aggregated data, for example highlighting abnormal spikes in energy usage over weekends when occupancy is at a minimum, or suggesting more optimized set-points for the HVAC system to reduce energy usage. The portal is built on ASP.NET Core 3.1 and driven by reports and dashboards rendered from Power BI Embedded. From the Azure Digital Twins instance, measurements are eventually sent to Azure Data Lake storage, where a batch process populates an enriched data model inside Power BI.
On-site digital signage

The digital signage solution provides a way to render data collected in rooms and areas in real time on virtually any digital display. The solution is built with vanilla HTML and JavaScript and can run on any device that supports web pages. The delivery mechanism is fed by events generated from the Azure Digital Twins instance and uses Azure SignalR to push the data in real time to the displays. On our roadmap, we’re very excited to offer a Digital Signage SDK that will allow customers to build their own narrowcast experiences.
External Data APIs
The data APIs that we expose are the primary method for our customers to interact with their data on their terms. The Streaming API is responsible for pushing real-time telemetry to a wide variety of customer destinations (like Web Hook, Event Hub, Service Bus) and is often used to drive their custom solutions and dashboarding. The Data Extract API is used for ad-hoc data extract over a REST interface where customers can define entities in their environment and a timespan to receive a JSON payload with relevant data. Finally, the Data Share API allows customers to specify destination channels to receive bulk data transfers, powered by Azure Data Share.
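As a purely hypothetical sketch of an ad-hoc extract call (the field names and body shape are assumptions for illustration, not the platform's actual contract; consult the EDGE Next API documentation for the real one):

```python
import json

def extract_request(entity_ids, start_iso, end_iso):
    """Build an ad-hoc Data Extract request for a set of entities and a timespan."""
    return {
        "entities": entity_ids,
        "timespan": {"from": start_iso, "to": end_iso},
        "format": "json",
    }

body = extract_request(["room-101", "room-102"],
                       "2021-05-01T00:00:00Z", "2021-05-08T00:00:00Z")
print(json.dumps(body))
```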
Learnings from our journey
We’ve homed in on Azure Digital Twins to advance our goals of sustainability and employee wellbeing, as the service offers our solution incredible flexibility. We’ve noted some key learnings in three major areas of the Azure Digital Twins development cycle, which we hope the developer community can build on.
Optimizing our ontology for queries
To accomplish our goals of only utilizing necessary resources and building a cost-effective platform, we leveraged service metrics in the Azure Portal to monitor and understand our query and API operations usage. We learned that on average, a typical building running in production on the EDGE Next platform generated around two million telemetry messages per day, which resulted in almost sixty million daily API operations.
After assessing our topology at the time, we focused on reworking our digital twins to optimize for simplicity and reduced data usage. We first reduced the number of “hops” (twin relationships to traverse) required in our most common queries; JOINs add complexity to queries, so it’s most economical to keep related data fewer “hops” apart. We also broke the larger twins into smaller, related twins to allow our queries to return only the data we need.

As you can imagine, the ontology design process is a big part of any digital twin solution, and it can be a time-consuming task to develop and maintain your own modeling foundation. To simplify this process, we referenced the open-source DTDL-based smart buildings ontology, based on the RealEstateCore standard, that Azure has released to help developers build on industry standards and best practices for their solutions. The great thing about using a standard framework is the flexibility to pick-and-choose only the components and concepts that are truly required for your solution. For example, we chose to utilize the room, asset and capability models in our ontology, but we haven’t yet implemented valves or fixtures. As our platform grows and requirements evolve, we’ll continue to cherry-pick critical concepts from the RealEstateCore ontology.
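For example, a minimal DTDL interface extending the ontology's Room model and adding a temperature capability might look like the following (the `@id` is an illustrative placeholder, and the `extends` identifier mirrors the open-source smart-buildings ontology at the time of writing; verify both against the repository before use):

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:edgenext:Room;1",
  "@type": "Interface",
  "displayName": "EDGE Next Room",
  "extends": "dtmi:digitaltwins:rec_3_3:core:Room;1",
  "contents": [
    {
      "@type": ["Property", "Temperature"],
      "name": "temperature",
      "schema": "double",
      "unit": "degreeCelsius",
      "writable": true
    }
  ]
}
```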
Streamlining our compute
At EDGE Next, we take sustainability very seriously. Solutions in the cloud need to be developed with mindfulness for the environment, and our engineers take great pride in the lightweight event-driven architecture that only lights up when needed and seamlessly scales as demand grows. With that said, it is important to pare down the massive amounts of data the buildings on our platform generate to limit unnecessary compute. Below, the diagram depicts how raw telemetry traffic is deliberately reduced through several different stages of the ingestion pipeline before it reaches the Azure Digital Twins instance. These steps are depicted in the “Data sources” diagram above as the Filters stage.

- Filtering – This stage ensures all duplicate messages are rejected and telemetry values within certain deviations are ignored. Because we do not control the throughput of the sources transmitting the messages or what ends up on the IoT Hub, we rely on hashes and timestamps to detect duplicate values as early in the pipeline as possible. AI-driven deviation filters validate incoming telemetry values against an expected range and drop those that don’t meaningfully affect current values.
- Caching – This stage includes smart caching mechanisms that reduce unnecessary GET calls to the Azure Digital Twins API by storing common existing relationships. This relationship cache is kept up to date by lifecycle events emitted by the Azure Digital Twins instance.
- Throttling – The throttling mechanism delays ingress logic to avoid spiky workloads by spreading the load out evenly over time. In scenarios where data ingress is delayed, we can see a backlog of unprocessed events that can cause huge activity spikes throughout the system. The throttling mechanism will kick in as a circuit breaker to ease the load and prevent overutilization of resources.
- Grouping – This stage recognizes messages that target the same twin and combines them into minimal resulting API requests to reduce unnecessary updates and load.
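The Filtering stage described in the first bullet can be sketched as follows (the message shape and the deviation threshold are assumptions for illustration, not EDGE Next's implementation):

```python
import hashlib
import json

class TelemetryFilter:
    """Reject exact duplicates via a content hash; drop readings inside a deviation band."""

    def __init__(self, deviation: float = 0.5):
        self.seen = set()   # hashes of messages already processed
        self.last = {}      # pointId -> last accepted value
        self.deviation = deviation

    def accept(self, msg: dict) -> bool:
        digest = hashlib.sha256(json.dumps(msg, sort_keys=True).encode()).hexdigest()
        if digest in self.seen:
            return False    # exact duplicate
        self.seen.add(digest)
        prev = self.last.get(msg["pointId"])
        if prev is not None and abs(msg["value"] - prev) < self.deviation:
            return False    # within the deviation band: no meaningful change
        self.last[msg["pointId"]] = msg["value"]
        return True

f = TelemetryFilter()
messages = [
    {"pointId": "temp-1", "value": 21.0, "ts": 1},  # first reading: accepted
    {"pointId": "temp-1", "value": 21.0, "ts": 1},  # duplicate: rejected
    {"pointId": "temp-1", "value": 21.2, "ts": 2},  # within band: rejected
    {"pointId": "temp-1", "value": 22.0, "ts": 3},  # significant change: accepted
]
results = [f.accept(m) for m in messages]
```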
Concentrating our query results
The Azure Digital Twins Query Language is used to express an SQL-like query to get live information about the twin graph. When building queries for sustainability and cost-effectiveness, it’s key to minimize the query complexity (quantified by Query Units in the service), which translates to reducing JOINs (query “hops”) and the amount of data the query must sift through. It’s also important to be intentional about how many API operations your request is consuming, meaning you should limit your query responses to only what’s critical for your solution.
A good example of the balance between Query Unit consumption and API Operation response sizes is the retrieval of information across multiple relationships in your twins graph. A scenario that we encountered multiple times during development was the retrieval of a parent with its children. You can write this into a “basic” query that would look like:
SELECT Parent, Child FROM digitaltwins Parent JOIN Child RELATED Parent.hasChild WHERE Parent.$dtId = 'parent-id'
The “basic” query consumes 26 Query Units and 81 API Operations.
When using the response data, we discovered that retrieving all properties on the parent was unnecessary, which introduced excessive API consumption. In many scenarios it was better to execute two separate queries that projected only the properties that were required. This resulted in substantially fewer API Operations consumed, with a slight increase in Query Unit consumption. Our “optimized” query looks like:
SELECT valueA, valueB, valueC FROM digitaltwins WHERE $dtId = 'parent-id' AND IS_PRIMITIVE(valueA) AND IS_PRIMITIVE(valueB) AND IS_PRIMITIVE(valueC)
The “optimized” query consumed 4 Query Units and 1 API Operation, an approximately 83% decrease in Query Units and a 98% decrease in API Operations. In one of our processes, this change produced an overall consumption reduction of 45%.
Moreover, you may be able to remove some queries altogether – Azure Digital Twins allows you to listen to lifecycle events and propagate resulting changes throughout your twins graph. If you capture the relevant lifecycle events, which carry information like updated properties and relationships in the payload, you can gather and react to the latest twin data without any queries at all. Our architecture that supports this optimization relies heavily on Azure Digital Twins’ eventing mechanism. Lookup caches in different forms and structures (like parent/child relationships, contextual metadata, etc.) are kept up to date by these events, allowing us to reduce API Operation consumption in the service.
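A minimal sketch of such an event-driven relationship cache (the event shape here is simplified; real Azure Digital Twins lifecycle events arrive with richer payloads and envelope metadata):

```python
class RelationshipCache:
    """Maintain a parent -> children lookup from relationship lifecycle events,
    so that common traversals need no query against the twin graph."""

    def __init__(self):
        self.children = {}  # parent twin id -> set of child twin ids

    def handle_event(self, event: dict):
        src, tgt = event["sourceId"], event["targetId"]
        if event["type"] == "Relationship.Create":
            self.children.setdefault(src, set()).add(tgt)
        elif event["type"] == "Relationship.Delete":
            self.children.get(src, set()).discard(tgt)

    def get_children(self, parent_id):
        return self.children.get(parent_id, set())

cache = RelationshipCache()
cache.handle_event({"type": "Relationship.Create",
                    "sourceId": "floor-1", "targetId": "room-101"})
cache.handle_event({"type": "Relationship.Create",
                    "sourceId": "floor-1", "targetId": "room-102"})
cache.handle_event({"type": "Relationship.Delete",
                    "sourceId": "floor-1", "targetId": "room-101"})
```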
EDGE Next + Azure Digital Twins
Azure Digital Twins gives us a head start over our competitors in value proposition and time to market. We’re able to provide our customers with a seamless platform that offers quicker building onboarding times. Moreover, it offers us immense value by enabling development accelerators like our low-code ingestion pipeline, and endless integration possibilities with the API surface.
We are expecting to see a huge influx of building onboardings in the near future as our platform is already starting to gain massive commercial traction within the real estate and PropTech industries. Our platform is also constantly evolving with new features, and we look forward to leveraging cutting-edge Azure offerings like Azure Maps, Time Series Insights, IoT Hub and Azure Data Explorer to amplify the value proposition of our IoT Platform.
Learn more
Read about EDGE’s vital role in digital real estate
Microsoft partners like contexxt.ai, Qunifi, and CoreStack deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:
C.AI Adoption Bot: Cai, a chatbot from contexxt.ai, answers employees’ questions and helps them more effectively utilize the features of Microsoft Teams. Cai’s algorithm will predict important tips to share and deliver only relevant content based on individual skills and learning preferences. With Cai, businesses can drive Teams usage and reduce training costs. This app is available in German.
Call2Teams for PBX: Qunifi’s Call2Teams global gateway provides a simple link between your existing PBX and Microsoft Teams. Teams users can make and receive calls just as they would on their desk phone. No hardware or software is required, and the cloud service can be set up in minutes. Bring all users under one platform by using Teams for collaboration, messaging, and voice.
Fuze Direct Routing: Use enterprise-grade calling services in Microsoft Teams with this offer from Qunifi. Customers can combine the native dial pad and calling features of Teams with Fuze global voice architecture, enabling Teams calling across all devices, including Teams clients on mobile devices. This integration does not require hardware or software deployment on any device.
CoreStack Cloud Compliance and Governance: CoreStack, an AI-powered solution, governs operations, security, cost, access, and resources across multiple cloud platforms, empowering enterprises to rapidly achieve continuous and autonomous cloud governance at scale. Run lean and efficient cloud operations while achieving high availability and optimal performance.
by Scott Muniz | May 12, 2021 | Security
This article was originally posted by the FTC. See the original article here.
Unwanted calls are annoying. They can feel like a constant interruption — and many are from scammers. Unfortunately, technology makes it easy for scammers to make millions of calls a day. So this week, as part of Older Americans Month, we’re talking about how to block unwanted calls — for yourself, and for your friends and family. To get started, check out this video:

Some of the most common unwanted calls the FTC sees currently include pretend Social Security Administration, Medicare, and IRS calls, fake Amazon or Apple Computer support calls, and fake auto warranty and credit card calls.
But no matter what type of unwanted calls you get (and everyone is getting them), your best defense is a good offense. Here are three universal truths to live by:
Visit FTC.gov/calls to learn how to block calls on your cell phone and home phone.
The FTC continues to go after the companies and scammers behind these calls, so please report unwanted calls at donotcall.gov. If you’ve lost money to a scam call, tell us at ReportFraud.ftc.gov. Your reports help us take action against scammers and illegal robocallers — just like we did in Operation Call It Quits. In this law enforcement sweep, the FTC and its state and federal partners brought 94 actions against illegal robocallers. But there’s more: we also take the phone numbers you report and release them publicly each business day. That helps phone carriers and other partners that are working on call-blocking and call-labeling solutions.
So share these videos and this call blocking news with your friends and family. Sharing will help protect someone you care about from a scam — and it’ll help them get fewer unwanted calls, too!
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
by Contributed | May 12, 2021 | Technology
This article is contributed. See the original author and article here.
Introduction
Customizable machine learning (ML)-based anomalies for Azure Sentinel are now available in public preview. Security analysts can use anomalies to reduce investigation and hunting time and to improve their detections. Typically, these benefits come at the cost of a high benign positive rate, but Azure Sentinel’s customizable anomaly models are tuned by our data science team and trained on the data in your Sentinel workspace to minimize the benign positive rate, providing out-of-the-box value. If security analysts need to tune them further, the process is simple and requires no knowledge of machine learning.
In this blog, we will discuss what an anomaly rule is, what the results generated by anomaly rules look like, how to customize anomaly rules, and typical use cases for anomalies.
A new analytics rule type: Anomaly
A new rule type called “Anomaly” has been added to Azure Sentinel’s Analytics blade. The customizable anomalies feature provides built-in anomaly templates for immediate value. Each anomaly template is backed by an ML model that can process millions of events in your Azure Sentinel workspace. You don’t need to worry about managing the ML run-time environment for anomalies because we take care of everything behind the scenes.
In public preview, all built-in anomaly rules are enabled by default in your workspace. Even though all anomaly rules are enabled, only those rules that have the required data in your workspace will fire anomalies. Once you onboard your data to your Sentinel workspace using data connectors, the anomaly rules monitor your environment and fire anomalies whenever they detect anomalous activity, without any extra work on your side. You can disable or delete an anomaly rule in the same way as a Scheduled rule. If you delete an anomaly rule and later decide to enable it again, go to the “Rule templates” tab and create a new anomaly rule. Figure 1 shows the anomaly rules on the “Analytics” blade.

Figure 1 – Anomaly rules
To learn the details of an anomaly rule, select the rule and you will see the following information in the details pane.
- Description explains how the anomaly model works and the ML model training period. Our data scientists pick the optimal training period depending on the ML algorithm and the specific scenario. The anomaly model won’t fire any anomalies during the training period. For example, if you enable an anomaly rule on June 1, and the training period is 14 days, no anomalies will be fired until June 15.
- Data sources indicate the type of logs that need to be ingested in order to be analyzed.
- Tactics are the MITRE ATT&CK framework tactics covered by the anomaly.
- Parameters are the configurable attributes for the anomaly.
- Threshold is a configurable value that indicates the degree to which an event must be unusual before an anomaly is created.
- Rule frequency is how often the anomaly model runs.
- Anomaly version shows the version of the template that is used by a rule. Microsoft continuously improves the anomaly models. The version number will be updated when we release a new version of the anomaly model.
- Template last updated is the date the anomaly version was changed.
View anomalies identified by the anomaly rules
Assuming the required data is available and the ML model training period has passed, anomalies will be stored in the Anomalies table in the Logs blade of your Azure Sentinel workspace. To query all the anomalies in a certain time period, select “Logs” on the left pane, choose a time range, type “Anomalies”, and click the “Run” button, as shown in Figure 2.

Figure 2 – View all anomalies in a time range
To view the anomalies generated by a specific anomaly rule in a time range, go to the “Active rules” tab on the “Analytics” blade, copy the rule name excluding the prefix “(Preview)”, then select “Logs” on the left pane, choose a time range, and type:

```kusto
Anomalies
| where AnomalyTemplateName contains "<anomaly rule name>"
```

Paste the rule name you copied from the “Active rules” tab in place of <anomaly rule name>, and click the “Run” button, as shown in Figure 3.
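You can extend the same query to see which users a rule fires on most often. This is a sketch, assuming only the Anomalies table columns shown in this post (AnomalyTemplateName, UserName):

```kusto
// Count anomalies fired by a specific rule per user over the last 7 days
Anomalies
| where TimeGenerated > ago(7d)
| where AnomalyTemplateName contains "<anomaly rule name>"
| summarize AnomalyCount = count() by UserName
| order by AnomalyCount desc
```

Users at the top of the result set are the ones most frequently flagged by that rule, which can be a useful starting point for triage.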

Figure 3 – View anomalies generated by a specific anomaly rule
You can expand an anomaly by clicking > to view the details. A few important columns are highlighted in Figure 4.

Figure 4 – Anomaly detail
- RuleStatus – an anomaly rule can run either in Production mode or in Flighting mode. RuleStatus tells you whether this anomaly was fired by a rule running in Production mode or in Flighting mode. We will discuss the running modes in detail in the Customize anomaly rules section.
- Extended links – this is the query to retrieve the raw events that triggered the anomaly.
- UserName – this is the main entity responsible for the anomalous behavior. Depending on the scenario, it can be the user who performed the anomalous activity, the IP address that is either the source or destination of an anomalous activity, the host on which the anomalous activities happened, or another entity type.
- AnomalyReasons – this tells you why the anomaly fired. We will discuss the anomaly reasons more in the Customize anomaly rules section.
- Entities – this includes all the entities related to this anomaly.
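The columns above can be pulled into a single focused view for triage. A sketch, assuming the Anomalies table schema described in this post:

```kusto
// Project the key anomaly columns described above for recent anomalies
Anomalies
| where TimeGenerated > ago(24h)
| project TimeGenerated, AnomalyTemplateName, RuleStatus, UserName, AnomalyReasons, Entities
```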
Customize anomaly rules
Azure Sentinel customizable anomalies are specifically designed for security analysts and engineers and do not require any ML skill to tune. You can tweak the individual factors and/or threshold of an anomaly model, cutting down on noise and making sure that anomalies are detecting what’s relevant to your specific organization. To customize an anomaly rule, follow the steps below:
- Right-click an anomaly rule, then click “Duplicate” to create a new anomaly rule. The new rule’s name automatically gets the suffix “_Customized”.
- Select the customized rule, click “Edit.”
- On the “Configuration” tab, you can change the parameters and threshold. Each anomaly model has configurable parameters based on the ML algorithm and the scenario. Figure 5 shows that you can exclude certain file types from the anomaly rule “Unusual mass downgrade AIP label.” You can also prioritize specific file types. Prioritize means the ML algorithm adds more weight when it scores anomalous activities related to that file type.

Figure 5 – Configure an anomaly rule
Click an “Anomaly ID” in the “Results preview” table to see the anomaly details, including why the anomaly was triggered. Figure 6 shows the details of an anomaly for a suspiciously high volume of failed login events (event 4625) observed on a device. The anomaly value is 66 failed logins on that device in the last 24 hours; the expected value is zero because there were zero failed logins on that device in the previous 21 days. This anomaly is an indication of a potential brute-force attack. The anomaly reason helps you understand how an anomaly is generated, so you can decide which parameters to adjust and what new values to set to reduce the noise in your environment.

Figure 6 – Anomaly reasons
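To inspect the kind of raw signal this anomaly model evaluates, you can query the failed logon events yourself. This is a sketch, assuming Windows Security events are ingested into the standard SecurityEvent table:

```kusto
// Count failed logon events (event ID 4625) per device per day,
// the underlying signal behind the anomaly in Figure 6
SecurityEvent
| where TimeGenerated > ago(21d)
| where EventID == 4625
| summarize FailedLogons = count() by Computer, bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```

A device that normally shows zero failed logons and suddenly shows dozens in one day is exactly the pattern the model flags.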
Once you have set the new value for a parameter or adjusted the threshold, you can compare the results of the customized rule with the results generated by the default rule to evaluate your change. The customized rule runs in Flighting mode by default, while the default rule runs in Production mode. Run a rule in Flighting mode when you want to test it. The Flighting feature allows you to run both the default rule and the customized rule in parallel on the same data for a period of time, so you can evaluate the results of your change before committing to it.
There are two ways to compare the results:
- Use the “Results preview” table (refer to Figure 5)
Some changes don’t require the ML model to re-run, but some do. For the changes that don’t require the ML model to re-run, you can click Refresh to see the side-by-side comparison in the table. It shows you the added anomalies, the removed anomalies, and the anomaly score changes as a result of your changes to parameters and/or the threshold compared to the default rule running on the same data in the same time range. For the changes that require the ML model to re-run, you must save the change, and come back later to see the side-by-side comparison after the ML model completes its re-run.
- Query the results generated by both rules in “Logs”
You can run a query to get all the anomalies generated by the default rule and the customized rule (refer to Figure 3), compare them in the view, or export them and use your favorite tool to compare the results.
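Since the RuleStatus column records which mode fired each anomaly, a quick comparison can be done directly in Logs. A sketch, assuming the Anomalies table columns described earlier:

```kusto
// Compare anomaly volumes from the default (Production) and
// customized (Flighting) rules over the same time window
Anomalies
| where TimeGenerated > ago(7d)
| where AnomalyTemplateName contains "<anomaly rule name>"
| summarize AnomalyCount = count() by RuleStatus
```

A large drop in the Flighting count relative to Production suggests your parameter changes are reducing noise.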
You can change the parameters in your customized rule as many times as needed until you are satisfied with the results. When you decide to replace the default rule with the customized rule, switch the customized rule to run in Production mode. To switch an anomaly rule from Flighting mode to Production mode, go to the “General” tab and click “Production.” A confirmation message pops up; click “Yes” to confirm. Your customized rule will then run in Production mode, and the default rule will automatically switch to Flighting mode. Only one rule for a given anomaly scenario can run in Production mode. You can disable the default rule at this point.

Figure 7 – Switch the running mode of an anomaly rule
Typical anomaly use cases
While anomalies don’t necessarily indicate malicious behavior by themselves, they can be used to improve detections, investigations, and threat hunting:
- Additional signals to improve detection: Security analysts can use anomalies to detect new threats and make existing detections more effective. A single anomaly is not a strong signal of malicious behavior, but when combined with several anomalies that occur at different points on the kill chain, their cumulative effect is much stronger. Security analysts can enhance existing detections as well by making the unusual behavior identified by anomalies a condition for alerts to be fired.
- Evidence during investigations: Security analysts can also use anomalies during investigations to help confirm a breach, find new paths for investigating it, and assess its potential impact. For example, when investigating an incident that involves a user and an IP address, a security analyst can query the user and the IP address in the “Anomalies” table to find other anomalous activities performed by that user or involving that IP address. This data helps security analysts reduce the time spent on investigations.
- The start of proactive threat hunts: Threat hunters can use anomalies as context to help determine whether their queries have uncovered suspicious behavior. When the behavior is suspicious, the anomalies also point toward potential paths for further hunting. These clues provided by anomalies reduce both the time to detect a threat and its chance to cause harm.
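The investigation pivot described above maps directly to a query. A sketch, assuming the UserName column described earlier; the user value is a placeholder:

```kusto
// Pull all anomalous activity for a user under investigation,
// across every anomaly rule, for use as supporting evidence
Anomalies
| where TimeGenerated > ago(30d)
| where UserName == "<user under investigation>"
| project TimeGenerated, AnomalyTemplateName, AnomalyReasons
| order by TimeGenerated desc
```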
In the next blog, we will do a deep-dive into how anomalies can be used in detections and hunting queries, as well as how to simulate anomalies in your workspace.
by Contributed | May 12, 2021 | Technology
This article is contributed. See the original author and article here.
Do you already have a solution in place to deal with insider risks? When it comes to remediating insider risks, many organizations either deploy a simple transactional, rules-based solution such as data loss prevention (DLP) or they deploy a much more complex and resource-intensive solution such as user and entity behavior analytics (UEBA). From our own experience and what we’ve learned from our Microsoft 365 customers, neither of these approaches is effective when it comes to addressing insider risks.
While identifying insider risks can be complex, implementing a holistic solution that looks end-to-end will allow you to reduce the complexity and zero in on the relevant trends that lead to heightened risk. With privacy built-in, pseudonymization by default, and strong role-based access controls, Insider Risk Management is used by companies worldwide to identify insider risks and take action with integrated collaboration workflows.
To help organizations quickly get started in identifying and accelerating time to action on insider risks, we released a number of capabilities at Ignite. Today we are excited to announce the public preview of additional new features that further broaden the analytics and investigation capabilities already in the solution, making it easier to investigate and act on insider risks.
Enabling a richer, more efficient investigation experience
Machine learning technology is amazing in that it can reason over and correlate millions of disparate signals to identify hidden risks. This is why we have several machine learning-based policy templates focused on specific risks, such as IP theft by departing employees and confidential data leakage by disgruntled employees, built into the Insider Risk Management solution.
However, what happens if you become aware of a potential insider risk through a tip? How do you efficiently investigate this tip without having to resort to manually trying to piece together various activities?
This is where the new User Activity report is valuable. This new capability provides the Investigator in Insider Risk Management with the ability to simply generate a report of the relevant activities of the user that they received a tip about, and quickly investigate those activities to understand the risk.
User Activity report
To make the alert review process more efficient, we have now enabled persistent filters. With this improved experience, selected filters on the alerts list, such as filters for policy, alert severity, alert status, and date range will persist throughout your alert review process. There is no need to reset filters to see your desired set of focused alerts as you move on to select the next alert to review. This provides a frictionless and efficient experience for analysts to quickly make progress through their queue of alerts.
Priority content limits in policies have also been increased from 15 to 50. This means you can select up to 50 SharePoint sites, 50 Microsoft Information Protection labels, and 50 Sensitive Information Types to prioritize in your policy. This allows you to broaden the activity and content that you want to prioritize for risk scoring and to investigate the potential impact when risks are identified.
The Activity Explorer in Insider Risk Management has been very well received by customers, as it provides comprehensive analytics and detailed information about alerts. With this release, we are making Activity Explorer even more efficient for insider risk investigations. Now, when activities are filtered to show only specific types of activities or workloads, the columns associated with the activity or workload will dynamically update to show only the most relevant information.
Examples of such filtered activity types include “Removable media mounted” and “File upload to cloud.”
Finally, we continue to enrich our policy templates by improving our Sensitive Information Type (SIT) classifications. SITs are used by the solution to improve the fidelity of matches for sensitive information within documents. In the past, in order to leverage SITs to detect whether someone is trying to exfiltrate sensitive information, such as credit card numbers in email, you needed to have an associated DLP policy set up. With this release we are removing that requirement: now all you have to do is opt in to the Exchange Online indicator in the policy, and the solution will automatically detect SITs, with no configuration or DLP policy needed.
Get started today
We have new videos showcasing how the new features in Insider Risk Management can help customers identify and remediate insider risks. We also have a new interactive guide to help you become familiar with the various capabilities in the solution.
The new features announced today will start rolling out to customers’ tenants in the coming days and weeks. Insider Risk Management is one of several products from Microsoft 365 E5, including Communication Compliance, Information Barriers, and Privileged Access Management, that helps organizations mitigate insider risks and policy violations. You can sign up for a trial of Microsoft 365 E5 or navigate to the Microsoft 365 compliance center to get started.
Learn more about Insider Risk Management, how to get started, and configure policies in your tenant in this supporting documentation. Keep a lookout for updates to the documentation with information on the new features over the coming weeks.
Finally, if you haven’t listened to our podcast “Uncovering Hidden Risks”, we encourage you to listen about the technologies used to detect insider risks and what is required to build and maintain an effective insider risk management program.
We are excited about all the new innovations coming out with this new release and look forward to hearing your feedback.
Thank you,
Talhah Mir, Principal Program Manager, Microsoft 365 Security and Compliance Engineering
by Contributed | May 12, 2021 | Technology
This article is contributed. See the original author and article here.
This past year has put unprecedented strain on security teams, and many are preparing to “return to normal” with a new view of what it will take to protect their organizations. Environments are still increasingly distributed, threats are more difficult to catch through overwhelming noise, and security analysts need to efficiently work across remote teams.
Supporting security teams through these challenges was top of mind for the Azure Sentinel team as we continued to deliver new innovation in the product. Our latest releases for the RSA Conference 2021 are all about improving quality of life for security operations teams.
Today, we are announcing the launch of a solutions marketplace in Azure Sentinel, providing customers with an easy way to unlock new use cases. One of the most exciting new solutions is designed to help you monitor and respond to threats in your SAP environments, often home to some of your most business-critical data and applications. We’re also releasing new features to help you harness machine learning to efficiently detect threats through the noise. Plus, we’re making it easier for SecOps teams to work together with native Microsoft Teams collaboration integration.
Easily discover and deploy Solutions for Azure Sentinel
With the release of Azure Sentinel Solutions, now in public preview, we’re providing a new, easier way to discover and deploy use cases for Azure Sentinel. Solutions consolidate related connectors and content, and can include data connectors, analytics rules to power detections, interactive workbooks, automation playbooks, and more so you can easily enable new use cases for integrations, end-to-end, from a single package.

The Solutions marketplace features 32 solutions, including:
- Palo Alto Prisma and Cloudflare solutions to give you visibility into your cloud workloads.
- Threat intelligence solutions from RiskIQ and ReversingLabs to enrich your threat detection, hunting, and response capabilities.
- Azure SQL and Oracle database audit solutions to monitor your database anomalies.
- And many more, like PingFederate for monitoring identity and access, the Cisco Umbrella solution for threat protection, the McAfee ePO solution for endpoint protection, the Microsoft Teams solution for productivity workloads, and more.
Visit the Solutions gallery in Azure Sentinel to see all available solutions. Partners and community members can build and contribute new solutions by following the guidelines on the Azure Sentinel GitHub.
Monitor your SAP applications and respond quickly to threats
One of the most highly anticipated solutions for Azure Sentinel is our SAP threat monitoring solution. Now, you can use Azure Sentinel to monitor your SAP applications across Azure, other clouds, and on-premises.

SAP handles massive amounts of extremely sensitive data. These increasingly complex systems are business critical, and a security breach in them could be catastrophic. With the release of the Azure Sentinel continuous threat monitoring solution, now in public preview, Azure Sentinel provides continuous threat detection and analytics to identify real threats and malicious behaviors. The SAP threat monitoring solution provides connectors, analytics rules to power detections, interactive workbooks, and more to help organizations detect threats to their SAP environments and respond quickly.
Learn more in documentation for the SAP threat monitoring solution.
Tap into the power of ML with new easy-to-tune anomaly detections
With the release of customizable ML anomalies, now in Public Preview, we’re introducing a code-free experience to help security analysts get the most out of machine learning. These anomalies span the attack chain, today covering 11 of the 14 total MITRE ATT&CK tactics.

Security analysts can customize the parameters used in the ML model to tune anomalies to their specific needs, cutting down noise and ensuring that anomalies detect what’s relevant to your organization. For example, you can add or remove file extensions to prioritize or exclude document types when detecting mass downgrades of AIP document sensitivity labels (such as from Highly Confidential to Public), or adjust the threshold for the number of documents considered a mass downgrade; this customization is especially useful because different organizations often have different file sensitivity processes. In another example, you can customize fourteen different parameters that affect the ML score of anomalous Azure Active Directory sign-in sessions, as well as the threshold for triggering these anomalies, tailoring the ML models to your organization’s processes, priorities, and user behavior.
Learn more about customizable ML anomalies here.
Collaborate more efficiently with native Microsoft Teams integration
Many of our customers use Microsoft Teams to collaborate across security operations teams during investigations. Our new integration with Microsoft Teams, now in Public Preview, will make that easier than ever.

Azure Sentinel now integrates with Microsoft Teams so that, with a single click, you can create a Team for a particular incident. Use this Team as your central hub for investigation across members of the security team, all with easy access to the incident in a tab within it. When the incident is closed in Azure Sentinel, the Team is automatically archived, providing a record of the investigation should you need to reference it in the future.
Learn more about the Microsoft Teams collaboration integration here.
Improve SOC hunting visibility with a refreshed hunting dashboard
Now in public preview, a refreshed hunting query experience helps you find undetected threats more quickly and identify which hunting results are most relevant to your environment.

You can now run all your hunting queries, or a selected subset, in a single click. To contextualize your results, you can search for or filter results based on specific MITRE ATT&CK techniques. You can also identify spikes of activity with new “result deltas” to see which results have changed the most in the last 24 hours. As you filter through your queries and results, the new MITRE ATT&CK tactic bar dynamically updates to show which MITRE ATT&CK tactics apply.
Learn more about new hunting dashboard features here.
New IP entity page accelerates triage and investigation
Also in public preview, the IP entity page is the latest addition to Azure Sentinel’s User and Entity Behavior Analytics capabilities. Like the host and account pages, the IP page helps analysts quickly triage and investigate security incidents. The IP page aggregates information from multiple Microsoft and 3rd party data sources and includes insights like threat indicator data, network session data, host IP mappings, and first/last seen information – which is retained even if the earliest records age out of the retention period.

A key insight presented in IP entity pages is geolocation information, which is often used to assess the security relevance of an IP address. We provide geolocation enrichment data from the Microsoft Threat Intelligence service. This service combines data from Microsoft solutions with 3rd party vendors and partners. It will soon be available via REST API for security investigation scenarios to Azure Sentinel customers.
Learn more about the IP entity page here.
Detect advanced multistage attacks with new scheduled analytics rules in Fusion
Azure Sentinel leverages a machine learning technology called Fusion to automatically detect multistage attacks by identifying combinations of anomalous behaviors and suspicious activities that are observed at various stages of the kill chain. There are currently 90 multistage attack scenarios detected by Fusion over medium- and low-severity alerts from Microsoft threat protection services and third-party sources, like Palo Alto Networks.
To help you discover threats tailored to your environment, we are releasing multistage attack scenarios that leverage a set of custom scheduled analytic rules to public preview. With these rules, Fusion can detect 32 new scenarios by combining alerts from the scheduled analytics rules, which detect specific events or sets of events across your environment, with alerts from Microsoft Cloud App Security or Azure Active Directory Identity Protection.
Learn more about the supported scenarios and how to configure your scheduled analytics rules here.
Next Steps
Learn more about these new innovations and see them in action in our upcoming webinar on May 26 at 8 AM Pacific. Register here.
For RSA, Microsoft released a number of innovations across security, compliance, and identity solutions. Learn more in the blog by CVP Security, Compliance, and Identity Vasu Jakkal. Join us for a live webinar on May 27 at 8 AM Pacific to learn more about new innovations across Microsoft security – register here.
by Contributed | May 12, 2021 | Technology
This article is contributed. See the original author and article here.
Today, we are announcing Azure Sentinel Solutions in public preview, featuring a vibrant gallery of 32 solutions for Microsoft and other products. Azure Sentinel solutions provide easier in-product discovery and single-step deployment of end-to-end product, domain, and industry vertical scenarios in Azure Sentinel. Solutions also enable Microsoft partners to deliver combined value for their integrations and to productize their investments in Azure Sentinel. This experience is powered by Azure Marketplace for solution discovery and deployment, and by Microsoft Partner Center for solution authoring and publishing.
Azure Sentinel solutions currently package integrations as a combination of one or more Azure Sentinel data connectors, workbooks, analytics, hunting queries, playbooks, and parsers (Kusto Functions), delivering end-to-end product, domain, or industry vertical value for your SOC requirements. All these solutions are available at no additional cost (regular data ingestion or Azure Logic Apps costs may apply depending on usage of the content in Azure Sentinel).
A few use cases for Azure Sentinel solutions are outlined below.
- On-demand out-of-the-box content: Solutions unlock rich Azure Sentinel content out-of-the-box for complete scenarios, via centralized discovery in the Solutions gallery and single-step deployment. Feel free to customize this content to your needs after deployment!
- Unlock complete product value: Discover and deploy a solution for not only onboarding the data for a certain product, but also monitor the data via workbooks, generate custom alerts via analytics in the solution package, use the queries to hunt for threats for that data source and run necessary automations as applicable for that product.
- Unlock domain value: Discover and deploy solutions for specific Threat Intelligence automation scenarios or zero-day vulnerability hunting, analytics, and response scenarios.
- Unlock industry vertical value: Get solutions for ERP scenarios or Healthcare or finance compliance needs in a single step.
- And more to unlock complete SIEM and SOAR capabilities in Azure Sentinel.
Get Started
Select from the rich set of 30+ solutions to start working with a specific content set in Azure Sentinel immediately. The steps to discover and deploy solutions are outlined below. Refer to the Azure Sentinel solutions documentation for further details.
Discover Solutions
- Select Solutions (Preview) from the Azure Sentinel Solutions navigation menu.
- This displays a searchable list of solutions for you to select from.
Azure Sentinel Solutions Blade
- Click Load more at the bottom of the page to see more solutions.
- Select solution of your choice and click on it to display the solutions details view.
- You can now view the Overview tab that includes important details of the solution and the content included in the solution package as illustrated in the diagram below.
Solution details
- The Plans tab covers information about the license terms. All the solutions included in the Solutions gallery are available at no additional cost to install.
- The Usage Information + Support tab includes the publisher details for each solution and a direct link to the support contact for that solution.
Deploy Solutions
- Select the Create button on the solution details page to deploy the solution.
- You can now enter information in each tab of the solution deployment flow and move to the next tab to enable deployment of this solution, as illustrated in the following diagram.
Solution deploy
Finally, select Review and create to trigger the validation process, and upon successful validation select Create to run the solution deployment.
Visit the respective feature galleries to customize (as needed), configure, and enable the relevant content included in the solution package. For example, if the solution deploys a data connector, you’ll find the new data connector in the Data connectors blade of Azure Sentinel, from where you can follow the steps to configure and activate it.
Partner Scenario: Deliver Solutions
Microsoft partners such as ISVs, managed service providers, and system integrators can follow the three-step process outlined below to author and publish a solution that delivers product, domain, or vertical value for their products and offerings in Azure Sentinel. Refer to the guidance on Azure Sentinel GitHub for further details on each step.
Step 1. Create Azure Sentinel content for your product / domain / industry vertical scenarios and validate the content.
Step 2. Package content created in the step above. Use the new packaging tool that creates the package and also runs validations on it.
Step 3. Publish your Azure Sentinel solution by creating an offer in Microsoft Partner Center, uploading the package generated in the step above, and submitting the offer for certification and final publishing. Partners can track progress on their offer in the Partner Center dashboard view, as shown in the diagram below.
Solution build
New Azure Sentinel Solutions
The Azure Sentinel Solutions gallery showcases 32 new solutions covering the depth and breadth of various product, domain, and industry vertical capabilities. These out-of-the-box content packages provide enhanced threat detection, hunting, and response capabilities for cloud workloads, identity, threat protection, endpoint protection, email, communication systems, databases, file hosting, ERP systems, and threat intelligence, across a plethora of Microsoft and third-party products and services.
SAP Continuous Threat Monitoring
Use the SAP continuous threat monitoring solution to monitor your SAP applications across Azure, other clouds, and on-premises. This solution package includes a data connector to ingest data, a workbook to monitor threats, and a rich set of 25+ analytics rules to protect your applications.
SAP Solution
Cisco Solutions
There are two solutions for Cisco Umbrella and Cisco Identity Services Engine (ISE). The Cisco Umbrella solution provides multiple security functions to enable protection of devices, users, and distributed locations everywhere. The Cisco ISE solution includes data connector, parser, analytics, and hunting queries to streamline security policy management and see users and devices controlling access across wired, wireless, and VPN connections to the corporate network.
PingFederate
The PingFederate solution includes data connectors, analytics, and hunting queries to monitor user identities and access in your enterprise.
CrowdStrike Falcon Protection Platform
The CrowdStrike solution includes two data connectors to ingest Falcon detections, incidents, and audit events, plus rich Falcon event stream telemetry logs, into Azure Sentinel. It also includes workbooks to monitor CrowdStrike detections, and analytics and playbooks for automated detection and response scenarios in Azure Sentinel.
CrowdStrike Solution
McAfee ePolicy Orchestrator
McAfee ePolicy Orchestrator monitors and manages your network, detecting threats and protecting endpoints against them. This solution leverages the data connector to ingest McAfee ePO logs and the analytics to alert on threats.
Palo Alto Prisma
The Palo Alto Prisma solution includes a data connector to ingest Palo Alto cloud logs into Azure Sentinel. Leverage the analytics and hunting queries for out-of-the-box detections and threat hunting scenarios, and the workbooks for monitoring Palo Alto Prisma data in Azure Sentinel.
Corelight
Corelight provides a network detection and response (NDR) solution based on the best-of-breed open-source technologies Zeek and Suricata, enabling network defenders to gain broad visibility into their environments. The data connector enables ingestion of events from Zeek and Suricata via Corelight Sensors into Azure Sentinel. Corelight for Azure Sentinel also includes workbooks and dashboards, hunting queries, and analytics rules to help organizations drive efficient investigations and incident response with the combination of Corelight and Azure Sentinel.
Corelight Solution
Infoblox Cloud
BloxOne DDI enables you to centrally manage and automate DDI (DNS, DHCP, and IPAM) from the cloud to any and all locations. BloxOne Threat Defense protects your network and automatically extends security to your digital imperatives, including SD-WAN, IoT, and the cloud. This Azure Sentinel solution powers security orchestration, automation, and response (SOAR) capabilities, and reduces the time to investigate and remediate cyberthreats.
RiskIQ Illuminate Security Intelligence
RiskIQ has created several Azure Sentinel playbooks that pre-package functionality to enrich, add context to, and automatically act on incidents based on RiskIQ internet observations within the Azure Sentinel platform. These playbooks can be configured to run automatically on created incidents to speed up the triage process. When an incident contains a known indicator such as a domain or IP address, RiskIQ enriches that value with what else it’s connected to on the internet and whether it may pose a threat. If a threat is identified, RiskIQ can act on the incident, including elevating its status and tagging it with additional metadata for analysts to review.
RiskIQ Solution
vArmour Application Controller
Application Controller is an easy-to-deploy solution that delivers comprehensive real-time visibility and control of application relationships and dependencies, to improve operational decision-making, strengthen security posture, and reduce business risk across multi-cloud deployments. This solution includes a data connector to ingest vArmour data and a workbook to monitor application dependency and relationship mapping information, along with user access and entitlement monitoring.
VMWare Carbon Black
Use this solution to monitor Carbon Black events, audit logs, and notifications in Azure Sentinel, with analytics rules for critical threats and malware detections to help you get started immediately.
Symantec Solutions
There are two solutions from Symantec. The Symantec Endpoint Protection solution makes Symantec’s anti-malware, intrusion prevention, and firewall features available in Azure Sentinel, helping prevent unapproved programs from running, with response actions that apply firewall policies to block or allow network traffic.
The Symantec ProxySG solution enables organizations to effectively monitor, control, and secure traffic to ensure a safe web and cloud experience by monitoring proxy traffic.
Microsoft Teams
Teams serves a central role in both communication and data sharing in the Microsoft 365 cloud. Because the Teams service touches so many underlying technologies in the cloud, it can benefit from human and automated analysis, not only for hunting in logs but also for real-time monitoring of meetings in Azure Sentinel. The solution includes analytics rules, hunting queries, and playbooks.
Slack Audit
The Slack Audit solution provides the ability to collect Slack events, helping you examine potential security risks, analyze your organization’s use of collaboration, diagnose configuration problems, and more. This solution includes a data connector, workbooks, analytics rules, and hunting queries to connect Slack with Azure Sentinel.
Azure Firewall
Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It’s a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. This Azure Firewall solution in Azure Sentinel provides built-in customizable threat detection on top of Azure Sentinel. The solution contains a workbook, detections, hunting queries and playbooks.
Sophos XG Firewall
Monitor network traffic and firewall status using this solution for Sophos XG Firewall. In addition, enable the port-scan and excessive-denied-connections analytics rules to create custom alerts and track them as incidents for the ingested data.
Qualys VM
Monitor and detect vulnerabilities reported by Qualys in Azure Sentinel by leveraging the new solution for Qualys VM.
Microsoft Dynamics 365
The Dynamics 365 continuous threat monitoring solution for Azure Sentinel provides the ability to collect Dynamics 365 logs, gain visibility into activities within Dynamics 365, and analyze them to detect threats and malicious activities. The solution includes a data connector, workbooks, analytics rules, and hunting queries.
Cloudflare
This solution brings the value of Cloudflare into Azure Sentinel by providing information about the reliability of your external-facing resources such as websites, APIs, and applications. Use the detections and hunting queries to protect your internal resources such as behind-the-firewall applications, teams, and devices.
Contrast Protect
Contrast Protect empowers teams to defend their applications anywhere they run, by embedding an automated and accurate runtime protection capability within the application to continuously monitor and block attacks. Contrast Protect seamlessly integrates into Azure Sentinel so you can gain additional security risk visibility into the application layer.
Contrast Protect Solution
Check Point CloudGuard
This solution includes an Azure Logic App custom connector and playbooks for Check Point to offer enhanced integration with SOAR capabilities of Azure Sentinel. Enterprises can correlate and visualize these events on Azure Sentinel and configure SOAR playbooks to automatically trigger CloudGuard to remediate threats.
Senserva
Senserva, a Cloud Security Posture Management (CSPM) solution for Azure Sentinel, simplifies the management of Azure Active Directory security risks before they become problems by continually producing priority-based risk assessments. Senserva information includes a detailed security ranking for all the Azure objects Senserva manages, enabling customers to perform optimal discovery and remediation by fixing the most critical, highest-impact issues first. All of Senserva’s enriched information is sent to Azure Sentinel for processing by the analytics, workbooks, and playbooks in this solution.
HYAS Insight
HYAS Insight is a threat and fraud investigation solution that uses exclusive data sources and non-traditional mechanisms to improve visibility and triple productivity for analysts and investigators while increasing accuracy. HYAS Insight connects attack instances and campaigns to billions of indicators of compromise to understand and counter adversary infrastructure, and it includes playbooks to enrich and add context to incidents within the Azure Sentinel platform.
Titanium Cloud File Enrichment from ReversingLabs
TitaniumCloud is a threat intelligence solution providing up-to-date file reputation services, threat classification and rich context on over 10 billion goodware and malware files. Files are processed using ReversingLabs File Decomposition Technology. A powerful set of REST API query and feed functions deliver targeted file and malware intelligence for threat identification, analysis, intelligence development, and threat hunting services in Azure Sentinel.
Proofpoint Solutions
Two solutions for Proofpoint bring email protection capabilities into Azure Sentinel. Proofpoint OnDemand Email Security (POD) classifies various types of email while detecting and blocking threats that don’t involve a malicious payload. The Proofpoint Targeted Attack Protection (TAP) solution helps detect, mitigate, and block advanced threats that target people through email in Azure Sentinel, including attacks that use malicious attachments and URLs to install malware or trick users into sharing passwords and sensitive information.
Azure SQL database
This solution provides built-in customizable threat detection for Azure SQL PaaS services in Azure Sentinel, based on the SQL audit log and with seamless integration with alerts from Azure Defender for SQL. The solution includes a guided investigation workbook with incorporated Azure Defender alerts. It also includes analytics to detect SQL DB anomalies, audit evasion, and threats based on the SQL audit log, hunting queries to proactively hunt for threats in SQL DBs, and a playbook to automatically turn on SQL DB auditing.
Azure SQL Solution
Oracle Database
Oracle Database Unified Auditing enables selective and effective auditing inside the Oracle database using policies and conditions, and this solution brings those database audit capabilities into Azure Sentinel. It comes with a data connector to collect the audit logs, a workbook for monitoring, and a rich set of analytics and hunting queries to help detect database anomalies and enable threat hunting in Azure Sentinel.
Ubiquiti UniFi
This solution includes a data connector to ingest wireless and wired data communication logs into Azure Sentinel and enables you to monitor firewall activity and other anomalies via the workbook and a set of analytics and hunting queries.
Box
Box is a single, secure, easy-to-use platform built for the entire content lifecycle, from file creation and sharing to co-editing, signature, classification, and retention. This solution delivers capabilities to monitor file and user activities for Box, and integrates data collection, workbook, analytics, and hunting capabilities in Azure Sentinel.
Closing
Azure Sentinel Solutions is just one of several exciting announcements we’ve made for the RSA Conference 2021. Learn more about other new Azure Sentinel innovations in our announcements blog.
Discover and deploy solutions to get out-of-the-box and end-to-end value for your scenarios in Azure Sentinel. Let us know your feedback using any of the channels listed in the Resources.
We also invite partners to build and publish new solutions for Azure Sentinel. Get started now by joining the Azure Sentinel Threat Hunters GitHub community and follow the solutions build guidance.
Trigger conditions are a trigger setting used to specify one or more conditional expressions that must all be true for the trigger to fire.
Trigger conditions can be set using the Logic App designer or manually in code view. For the schema reference for trigger conditions, see: Schema reference for trigger and action types – Azure Logic Apps | Microsoft Docs
To set trigger conditions using the Logic App designer, follow these steps:
1- Go to your logic app.
2- On the trigger, click the three dots (…) in the top right corner.
3- Select Settings.
4- Scroll down to Trigger Conditions.
5- Specify your conditional expression.
6- Click Done.
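For reference, the same setting appears in code view as a conditions array on the trigger, where each entry holds one expression. The sketch below is illustrative only: the trigger name, type, and recurrence are assumptions, and the trigger's other properties (such as inputs) are omitted.

```json
{
    "triggers": {
        "When_a_blob_is_added_or_modified_(properties_only)": {
            "type": "ApiConnection",
            "conditions": [
                {
                    "expression": "@not(equals(triggerBody().Size, 0))"
                }
            ],
            "recurrence": {
                "frequency": "Minute",
                "interval": 3
            }
        }
    }
}
```

When the conditions array contains several entries, all of their expressions must evaluate to true for the trigger to fire.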

You can set single or multiple conditions as follows:
Single Condition

In this example, we apply the trigger condition @not(equals(triggerBody().Size,0)) on the When a blob is added or modified (properties only) trigger settings under Trigger Conditions, so that the logic app is not triggered by empty files.
Multiple conditions
You can define multiple conditions, but the logic app will fire only if all conditions are met.
Two or more AND conditions

@not(empty(triggerBody())) and @contains(triggerBody()?['Name'],'.txt')
This trigger will fire only if the trigger body is not empty and the file is a text file, determined by checking whether the Name property in the trigger body contains the '.txt' extension.
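As a rough illustration of these AND semantics, here is a Python sketch. It is a hypothetical stand-in for the Logic Apps expression engine, not Logic Apps code; the dictionary `body` stands in for the trigger body.

```python
def should_fire(body):
    """Mirrors @not(empty(triggerBody())) and @contains(triggerBody()?['Name'],'.txt')."""
    is_not_empty = bool(body)                      # @not(empty(triggerBody()))
    is_text_file = ".txt" in body.get("Name", "")  # @contains(...?['Name'], '.txt')
    return is_not_empty and is_text_file           # both conditions must hold

print(should_fire({"Name": "report.txt"}))  # True
print(should_fire({"Name": "photo.png"}))   # False: not a .txt file
print(should_fire({}))                      # False: empty trigger body
```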
Two or more OR conditions

@or(greater(triggerBody()?['Age'],18),less(triggerBody()?['Age'],60))
This trigger will fire only if the age is greater than 18 or less than 60, determined by checking the Age property inside the trigger body.
OR and AND Conditions together

@or(greater(triggerBody()?['Age'],18),and(less(triggerBody()?['Age'],60),equals(triggerBody()?['Fit'],true)))
This trigger will fire only if the age is greater than 18, or if the age is less than 60 and Fit is true, determined by checking the Age and Fit properties inside the trigger body.
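The grouping in that expression can be sketched in Python the same way (again a hypothetical stand-in for the expression engine, not Logic Apps code):

```python
def should_fire(body):
    # Mirrors @or(greater(Age,18), and(less(Age,60), equals(Fit,true))):
    # fire when Age > 18, OR when Age < 60 AND Fit is true.
    return body["Age"] > 18 or (body["Age"] < 60 and body["Fit"] is True)

print(should_fire({"Age": 35, "Fit": False}))  # True: Age > 18
print(should_fire({"Age": 12, "Fit": True}))   # True: Age < 60 and Fit
print(should_fire({"Age": 12, "Fit": False}))  # False
```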
For reference guide for logical comparison functions that can be used in conditional expressions, check the following link: Reference guide for functions in expressions – Azure Logic Apps | Microsoft Docs
Split On Behavior
If you set trigger conditions on a trigger with Split On enabled, the conditions are evaluated after the splitting, which guarantees that runs fire only for the array elements that match the conditions.
In this case, you will see one trigger event for each element in the trigger history; triggers that did not fire due to trigger conditions will also be found in Trigger History.
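Conceptually, the Split On behavior amounts to splitting the trigger output array first and then filtering per element. The Python sketch below models that idea (an illustration of the semantics, not how Logic Apps is implemented):

```python
def runs_after_split(trigger_items, condition):
    # With Split On, the trigger output array is split into elements first,
    # and the trigger condition is evaluated for each element separately;
    # only matching elements produce workflow runs.
    return [item for item in trigger_items if condition(item)]

blobs = [{"Name": "a.txt", "Size": 0}, {"Name": "b.txt", "Size": 512}]
fired = runs_after_split(blobs, lambda b: b["Size"] != 0)
print(fired)  # only the non-empty blob fires a run
```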

Pros
- A simpler logic app with fewer steps, because logic that would usually sit inside the logic app to decide whether it should execute moves into the trigger itself.
- Clean workflow run history that reduces operational burden by eliminating unwanted runs.
- Reduced costs, by avoiding the action calls that would otherwise be made to check the conditions inside the logic app.
Cons
- Trigger conditions are not visible, which makes them hard to troubleshoot; it is therefore recommended to modify the trigger name to indicate that trigger conditions are set.
- Trigger conditions do not support IntelliSense, so it is recommended to draft your expression within an Initialize variable or Compose action. Note that in these actions an @ is not needed at the beginning of the expression, but within trigger conditions it is.
- Runs that did not meet the trigger conditions will not show in the run history.