by Scott Muniz | Aug 10, 2020 | Alerts, Microsoft, Technology, Uncategorized
This article is contributed. See the original author and article here.
What is the built-in vulnerability assessment tool in Azure Security Center?
If you’re using Security Center’s standard tier for VMs, you can quickly deploy a vulnerability assessment solution powered by Qualys with no additional configuration or extra costs.

Qualys’s scanner is the leading tool for identifying vulnerabilities in your Azure virtual machines. Once this integration is enabled, Qualys continually assesses all the installed applications on a virtual machine to find vulnerabilities and presents its findings in the Security Center console. This offering is available to all commercial Azure customers that have enabled Azure Security Center standard pricing tier for VMs. In this post, I will focus on vulnerability scanning of virtual machines, although standard tier also offers scanning for both containers and container registries – learn more here.
How does the integration work?
Our integrated vulnerability scanner is based on 5 different stages: from discovery to findings.

[1] Discovery – To make this integration work, a policy named "Vulnerability assessment should be enabled on virtual machines", which is part of the "ASC default" initiative, must be enabled. Upon Azure Policy evaluation, we get the compliance data to identify potential and supported virtual machines which don't have a vulnerability assessment solution deployed. Based on the result, we propagate the data into the recommendation so you can see all relevant virtual machines. Based on the compliance data, we categorize the virtual machines as one of the following:
- “healthy” – VMs that have the extension installed and report data.
- “unhealthy” – VMs which could support the extension, but which currently don’t have it.
- “not applicable” – Where the OS type/image is not supported (for example, a virtual machine running Network Virtual Appliance (NVA), Databricks/AKS instances or Classic VMs).
[2] Deployment – This is the step where you can enable the integrated ASC vulnerability scanner by deploying the extension on your selected virtual machine(s), either by using the ASC console and the quick fix button, or by using an automated method (see the reference below for deployment at scale).
Prerequisites for deploying the extension:
- A running VM with a supported operating system version, as mentioned here.
- Azure VM agent installed and in a healthy state.
- Log Analytics agent installed (formerly known as the Microsoft Monitoring Agent).
- To install using the quick fix option, you’ll need write permissions for any VM on which you want to deploy the extension. Like any other extension, this one runs on top of the Azure Virtual Machine agent.
Once all prerequisites are met, you should use our new, consolidated recommendation "A vulnerability assessment solution should be enabled on your virtual machines". In this recommendation, you can choose to deploy the ASC integrated vulnerability scanner or a 3rd-party scanner (BYOL).

This recommendation installs the extension on unhealthy machines. Review the healthy and not applicable lists too.

Once the extension is deployed, you can verify its presence by navigating to the VM page in the Azure portal and selecting "Extensions":
- On Windows, the extension is called "WindowsAgent.AzureSecurityCenter" and the provider name is "Qualys"
- On Linux, the extension is called "LinuxAgent.AzureSecurityCenter" and the provider name is "Qualys"

Like the Log Analytics agent itself and all other Azure extensions, minor updates of the vulnerability scanner may automatically happen in the background; the VA agent is self-healing and self-updating to counter common issues. All agents and extensions are tested extensively before being automatically deployed. On a virtual machine (on Windows for example), you will see a process QualysAgent.exe and service “Qualys Cloud Agent” running:

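If you want to quickly confirm this from a PowerShell prompt on the VM, a minimal check (a sketch using the service display name and process name mentioned above) looks like this:

# Confirm the Qualys Cloud Agent service and process are present on a Windows VM
Get-Service -DisplayName "Qualys Cloud Agent"
Get-Process -Name QualysAgent -ErrorAction SilentlyContinue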
When deploying a vulnerability assessment solution, Security Center previously performed a validation check before deploying, confirming that the destination virtual machine was created from a marketplace SKU.
Recently, that check was removed, and you can now deploy the vulnerability assessment tools to 'custom' Windows and Linux machines as well. Custom images are ones that you've modified from the marketplace defaults.
[3] Scan – The data collected by the agent includes many things for the baseline snapshot, such as network posture, operating system version, open ports, installed software, registry information, installed patches, environment variables, and metadata associated with files. The agent stores a snapshot on the host so it can quickly determine differences from the host metadata it collects. Such scans occur every 4 hours and are performed per VM; the collected artifacts are sent for analysis to the Qualys Cloud service in the defined region. For virtual machines created within European regions, the gathered information is sent securely to the Qualys Cloud Service in the Netherlands. For all non-EU resources, data is sent for processing to the Qualys Cloud Service in the US.
The sent artifacts are metadata only, the same as the ones collected by Qualys' standalone cloud agent; Microsoft doesn't share customer details or any sensitive data with Qualys.
[4] Analysis – Qualys analyzes the metadata, registry keys, and other information and builds the findings per VM. Findings are sent to Azure Security Center, matched to the customer's ID, and are then removed from the Qualys Cloud.
[5] Findings – You can monitor vulnerabilities on your virtual machines as discovered by the ASC vulnerability scanner using a recommendation named "Vulnerabilities in virtual machines should be remediated", found in the recommendations list. This recommendation is divided into affected resources and security checks (also known as nested recommendations or sub-assessments).

In the affected resources section, you will find virtual machines categorized as unhealthy, healthy, and not applicable. The section named "Security Checks" shows the vulnerabilities found on the unhealthy resources. Findings are categorized by severity (high, medium, and low). Below, you can see the mapping between ASC severities on the left and Qualys' severities on the right:

If you are looking for a specific vulnerability, you can use the search field to filter the items based on ID or security check title. Selecting a security check will open a window containing the vulnerability name, description, the impact on your resources, severity, whether it can be resolved by applying a patch, the CVSS base score (where the highest score is the most severe), and the relevant CVEs. You will also find the threat, remediation steps, additional references (if applicable), and the affected resource. Once you remediate the vulnerability on the affected resource, it will be removed from the recommendation page.
Deployment at scale
If you have a large number of virtual machines and would like to automate deployment of the ASC integrated scanner at scale, we've got you covered! There are several ways to accomplish such a deployment based on your business requirements. Some customers prefer to automate deployment by executing an ARM template, others prefer automation using Azure Automation or Azure Logic Apps, and others use Azure Policy for both automation and compliance. For all these scenarios and beyond, we encourage you to visit our ASC GitHub community repository. There, you can find scripts, automations, and other useful resources you can leverage throughout your ASC deployment. Some of the methods will deploy the extension on new machines; others cover existing ones as well. In other scenarios, customers prefer to make API calls to trigger an installation. This is also possible by executing a PUT call to one of our REST APIs, passing the resource ID in the URL. You can also decide to combine multiple approaches.
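As a rough sketch of that API-based approach using the Az PowerShell module, the call below targets the Microsoft.Security serverVulnerabilityAssessments resource; the resource ID segments are placeholders and the api-version should be verified against the current REST API reference:

# Hypothetical example: deploy the integrated (Qualys) vulnerability assessment on one VM via a REST PUT call
$vmResourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
Invoke-AzRestMethod -Method PUT -Path "$vmResourceId/providers/Microsoft.Security/serverVulnerabilityAssessments/default?api-version=2020-01-01"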
Troubleshooting
Below you will find a checklist for your initial troubleshooting if you experience issues related to the ASC vulnerability scanner:
- Are you running a supported OS version? Use the following list to quickly identify if your VMs are running a supported operating system version.
- Is the extension successfully deployed? Monitor VA extension health across subscriptions using Azure Resource Graph (ARG). ARG comes in handy if you want to validate that the extension status is healthy across subscriptions for both Linux and Windows machines. Use the following query:
where type == "microsoft.compute/virtualmachines/extensions"
| where name matches regex "AzureSecurityCenter"
| extend ExtensionStatus = tostring(properties.provisioningState),
ExtensionVersion = properties.typeHandlerVersion,
ResourceId = id,
ExtensionName = name,
Region = location,
ResourceGroup = resourceGroup
| project ResourceId, ExtensionName, Region, ResourceGroup, ExtensionStatus, ExtensionVersion

Results can be exported into CSV or used to build an Azure Monitor workbook.
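As a minimal sketch, you could run the query above with the Az.ResourceGraph PowerShell module and export the results to CSV (if your environment requires an explicit table name, prefix the query with "Resources |"):

# Run the ARG query across your subscriptions and export the results to CSV
$argQuery = @"
where type == "microsoft.compute/virtualmachines/extensions"
| where name matches regex "AzureSecurityCenter"
| extend ExtensionStatus = tostring(properties.provisioningState),
ExtensionVersion = properties.typeHandlerVersion,
ResourceId = id,
ExtensionName = name,
Region = location,
ResourceGroup = resourceGroup
| project ResourceId, ExtensionName, Region, ResourceGroup, ExtensionStatus, ExtensionVersion
"@
Search-AzGraph -Query $argQuery -First 1000 | Export-Csv -Path .\va-extension-status.csv -NoTypeInformation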
- Is the service running? On Windows VMs, make sure the "Qualys Cloud Agent" service is running. On Linux, run the command sudo service qualys-cloud-agent status
- Unable to communicate with Qualys? To communicate with the Qualys Cloud, the agent host should reach the service platform over HTTPS port 443 for the following IP addresses:
- 64.39.104.113
- 154.59.121.74
Check network access and make sure the listed platform addresses are allowed.
- Looking for logs? Both agent and extension logs can be used during troubleshooting. However, Windows and Linux logs can be found in different places. Here are the paths:
Windows:
- Qualys extension:
- C:\Qualys.WindowsAgent.AzureSecurityCenter
- C:\WindowsAzure\Logs\Plugins\Qualys.WindowsAgent.AzureSecurityCenter
- Qualys agent:
- %ProgramData%\Qualys\QualysAgent
Linux:
- Qualys extension:
- /var/log/azure/Qualys.LinuxAgent.AzureSecurityCenter
- Qualys agent:
- /var/log/qualys/qualys-cloud-agent.log
Advanced scenarios
Qualys assessments and sub-assessments (security checks) are stored and available for query in Azure Resource Graph (ARG) as well as through the API. A great example of that is available in this blog post. Moreover, you can also build and customize your own dashboards using Azure Monitor workbooks for more insights; a ready-made Qualys dashboard leveraging ARG queries and workbooks is also available. Soon, you will be able to use the Continuous Export feature to send the nested Qualys recommendations to Event Hubs or Log Analytics workspaces.
On the roadmap
- Availability for non-Azure virtual machines
- Support for proxy configuration
- Filtering vulnerability assessment findings by different criteria (e.g. exclude all low severity findings / exclude non-patchable findings / exclude by CVE / and more)
- More items are work in progress.
Frequently asked questions
Question: Does the built-in integration support both Azure VMs and non-Azure VMs?
Answer: Our current integration only supports Azure VMs. As mentioned in the roadmap section, we do have plans to support non-Azure virtual machines in the future.
Question: Does the built-in vulnerability assessment as part of standard pricing tier also integrate into the Qualys Dashboard offering?
Answer: Vulnerability assessments performed by our built-in integration are only available through the Azure portal and Azure Resource Graph.
Question: Is it possible to initiate a manual/on-demand scan?
Answer: Scan on Demand is a single-use execution that is initiated manually on the VM itself, using locally or remotely executed scripts or GPO, or from software distribution tools at the end of a patch deployment job. The following command will trigger an on-demand metadata sync:
REG ADD HKLM\SOFTWARE\Qualys\QualysAgent\ScanOnDemand\Vulnerability /v "ScanOnDemand" /t REG_DWORD /d "1" /f
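If you prefer PowerShell over reg.exe, a sketch of the equivalent (same registry key and value as above) is:

# Create the ScanOnDemand\Vulnerability key if needed, then set ScanOnDemand = 1 to trigger the sync
New-Item -Path "HKLM:\SOFTWARE\Qualys\QualysAgent\ScanOnDemand\Vulnerability" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Qualys\QualysAgent\ScanOnDemand\Vulnerability" -Name "ScanOnDemand" -Type DWord -Value 1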
Question: I purchased a separate Qualys/Rapid 7 license, which recommendation should I use?
Answer: We provide an additional method for customers who have purchased a VA scanner separately and do not use the integrated solution. To enable the 3rd-party integration, use "A vulnerability assessment solution should be enabled on your virtual machines" – this recommendation appears for both the standard and free tiers. Then, select "Configure a new third-party vulnerability scanner (BYOL – requires a separate license)". For this kind of integration, you'll need to purchase a license for your chosen solution separately. Supported solutions report vulnerability data to the partner's management platform. In turn, that platform provides vulnerability and health monitoring data back to Security Center.
Question: Can I combine two Qualys installation approaches so that the same VM has both the integrated scanner and the BYOL agent installed?
Answer: No, this is not supported. You can’t combine additional deployment approaches of VA while using the built-in VA capabilities provided by ASC.
In the next blog posts, we will discuss how you can leverage the integration for containers and container registry images. Stay tuned!
Reviewers:
- Melvyn Mildiner – Senior Content Developer
- Ben Kliger – Senior PM Manager
- Aviv Mor – Senior Program Manager
- Nomi Gorovoy – Software Engineer
by Scott Muniz | Aug 9, 2020 | Alerts, Microsoft, Technology, Uncategorized
This article is contributed. See the original author and article here.
Last time we looked at how to get started with GraphQL on dotnet, and we used the Azure App Service platform to host our GraphQL server. Today we're going to take a different approach and use Azure Functions to run GraphQL in a serverless model. We'll also use JavaScript (or specifically, TypeScript) for this codebase, but there's no reason you couldn't deploy a dotnet GraphQL server on Azure Functions or deploy JavaScript to App Service.
Getting Started
For the server, we'll use the tooling provided by Apollo, specifically their server integration with Azure Functions, which will make everything play nicely together.
We’ll create a new project using Azure Functions, and scaffold it using the Azure Functions Core Tools:
func init graphql-functions --worker-runtime node --language typescript
cd graphql-functions
If you want JavaScript, not TypeScript, as the Functions language, change the --language flag to javascript.
Next, to host the GraphQL server we'll need an HTTP Trigger, which will create an HTTP endpoint through which we can access our server:
func new --template "Http Trigger" --name graphql
The --name can be anything you want, but let’s make it clear that it’s providing GraphQL.
Now, we need to add the Apollo server integration for Azure Functions, which we can do with npm:
npm install --save apollo-server-azure-functions
Note: if you are using TypeScript, you need to enable esModuleInterop in your tsconfig.json file.
Lastly, we need to configure the way the HTTP Trigger returns so that it works with the Apollo integration. Let's open function.json within the graphql folder and change the way the HTTP response is returned from the Function. By default it uses a property of the context called res, but we need to make it an explicit return binding by naming it $return:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": ["get", "post"]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
],
"scriptFile": "../dist/graphql/index.js"
}
Implementing a Server
We've got our endpoint ready; it's time to start implementing the server, which will start in the graphql/index.ts file. Let's replace it with this chunk:
import { ApolloServer, gql } from "apollo-server-azure-functions";
const typeDefs = gql`
type Query {
graphQLOnAzure: String!
}
`;
const resolvers = {
Query: {
graphQLOnAzure() {
return "GraphQL on Azure!";
}
}
};
const server = new ApolloServer({ typeDefs, resolvers });
export default server.createHandler();
Let's talk about what we did here. First up, we imported ApolloServer, which is the server that will handle the incoming requests on the HTTP Trigger; we use it at the very bottom by creating the instance and exporting the handler as the module export.
Next, we imported gql, which is a template literal tag that we use to write our GraphQL schema in. The schema we've created here is pretty basic; it only has a single type, Query, which has a single member to output.
Lastly, we're creating an object called resolvers, which contains the functions that handle the requests when they come in. You'll notice that this object mimics the structure of the schema we provided to gql, by having a Query property which then has a function matching the name of each available queryable field.
This is the minimum that needs to be done, and if you fire up func start you can now query the GraphQL endpoint, either via the playground or from another app.
Implementing our Quiz
Let's go about creating a more complex solution: we'll implement the same quiz that we did in dotnet.
We’ll start by defining the schema that we’ll have on our server:
const typeDefs = gql`
type Quiz {
id: String!
question: String!
correctAnswer: String!
incorrectAnswers: [String!]!
}
type TriviaQuery {
quizzes: [Quiz!]!
quiz(id: String!): Quiz!
}
schema {
query: TriviaQuery
}
`;
Now we have two types defined, Quiz and TriviaQuery. We've also added a root node to the schema using the schema keyword, stating that the query is of type TriviaQuery.
With that done, we need to implement the resolvers to handle when we request data.
const resolvers = {
TriviaQuery: {}
};
This will compile and run, mostly because GraphQL doesn't type check that the resolver functions are implemented, but you'll get a bunch of errors when you query, so instead we'll need to implement the quizzes and quiz resolver handlers.
Handling a request
Let’s implement the quizzes handler:
const resolvers = {
TriviaQuery: {
quizzes: (parent, args, context, info) => {
return null;
}
}
};
The function will receive 4 arguments; you'll find them detailed in Apollo's docs, but for this handler we really only need one of them, context, and that will be how we get access to our backend data source.
For the purposes of this blog, I'm skipping over the implementation of the data source, but you'll find it on my GitHub.
const resolvers = {
TriviaQuery: {
quizzes: async (parent, args, context, info) => {
const questions = await context.dataStore.getQuestions();
return questions;
}
}
};
You might be wondering how the server knows about the data store and how it got on that context argument. This is another thing we can provide to Apollo server when we start it up:
const server = new ApolloServer({
typeDefs,
resolvers,
context: {
dataStore
}
});
Here, dataStore is something imported from another module.
Context gives us dependency injection-like features for our handlers, so they don't need to establish data connections themselves.
If we were to open the GraphQL playground and then execute a query like so:
query {
quizzes {
question
id
correctAnswer
incorrectAnswers
}
}
We’ll get an error back that Quiz.correctAnswer is a non-null field but we gave it null. The reason for this is that our storage type has a field called correct_answer, whereas our model expects it to be correctAnswer. To address this we’ll need to do some field mapping within our resolver so it knows how to resolve the field.
const resolvers = {
TriviaQuery: {
quizzes: async (parent, args, context, info) => {
const questions = await context.dataStore.getQuestions();
return questions;
}
},
Quiz: {
correctAnswer: (parent, args, context, info) => {
return parent.correct_answer;
},
incorrectAnswers: (parent, args, context, info) => {
return parent.incorrect_answers;
}
}
};
This is a resolver chain: it's where we tell the resolvers how to handle sub-fields of an object, and it acts just like a resolver itself, so we have access to the same context, and if we needed to do another DB lookup, we could.
Note: These resolvers will only get called if the fields are requested from the client. This avoids loading data we don’t need.
You can go ahead and implement the quiz resolver handler yourself, as it’s now time to deploy to Azure.
Disabling GraphQL Playground
We probably don’t want the Playground shipping to production, so we’d need to disable that. That’s done by setting the playground property of the ApolloServer options to false. For that we can use an environment variable (and set it in the appropriate configs):
const server = new ApolloServer({
typeDefs,
resolvers,
context: {
dataStore
},
playground: process.env.NODE_ENV === "development"
});
For the sample on GitHub, I’ve left the playground enabled.
Deploying to Azure Functions
With all the code complete, let's look at deploying it to Azure. For this, we'll use a standard Azure Function running the latest Node.js runtime for Azure Functions (Node.js 12 at the time of writing). We don't need to do anything special for the Functions; it's already optimised to run a Node.js Function with an HTTP Trigger, which is all this really is. If we were using a different runtime, like .NET, we'd follow the standard setup for a .NET Function app.
To deploy, we’ll use GitHub Actions, and you’ll find docs on how to do that already written, and I’ve done a video on this as well. You’ll find the workflow file I’ve used in the GitHub repo.
With a workflow committed and pushed to GitHub and our App Service waiting, the Action will run and our application will be deployed. The demo I created is here.
Conclusion
Throughout this post we've taken a look at how we can create a GraphQL server running inside a JavaScript Azure Function using the Apollo GraphQL server, before finally deploying it to Azure.
When it comes to the Azure side of things, there's nothing different we have to do to run the GraphQL server in Azure Functions; it's just treated as an HTTP Trigger function, and Apollo has nice bindings that allow us to integrate the two platforms together.
Again, you’ll find the complete sample on my GitHub for you to play around with yourself.
This post was originally published on www.aaron-powell.com.
by Scott Muniz | Aug 8, 2020 | Alerts, Microsoft, Technology, Uncategorized
This article is contributed. See the original author and article here.
This blog is authored and technically implemented by @Hesham Saad, with hearty thanks to our collaborator and use-case mastermind @yazanouf.
Before we dig deep on monitoring TEAMS CallRecords Activity Logs, please have a look at “Protecting your Teams with Azure Sentinel” blog post by @Pete Bryan on how to ingest TEAMS management logs into Azure Sentinel via the O365 Management Activity API
Collecting TEAMS CallRecords Activity Data
In this section we will go into detail on how to ingest TEAMS CallRecords activity logs into Azure Sentinel via the Microsoft Graph API, mainly leveraging the CallRecords API, which is a Graph webhook API that gives access to the call activity logs. SOC teams can subscribe to changes to CallRecords via Azure Sentinel using the Microsoft Graph webhook subscriptions capability, allowing them to build near-real-time reports from the data or to alert on the specific scenarios and use cases mentioned above.
Technically, you can use the call records APIs to subscribe to call records and to look up call records by their IDs. The call records API is defined in the OData sub-namespace microsoft.graph.callRecords.
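For illustration, here is a minimal PowerShell sketch of looking up a single call record by its ID; the beta endpoint matches the one used later in this post, $accessToken is assumed to be a valid Microsoft Graph OAuth token carrying the CallRecords.Read.All permission, and the ID is a placeholder:

# Hypothetical example: fetch one call record by ID from the Microsoft Graph callRecords API
$callRecordId = "<call-record-id>"
Invoke-RestMethod -Method GET -Uri "https://graph.microsoft.com/beta/communications/callRecords/$callRecordId" -Headers @{ Authorization = "Bearer $accessToken" }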
So, what are the key resource types returned by the API?
| Resource | Methods | Description |
| --- | --- | --- |
| callRecord | Get callRecord | Represents a single peer-to-peer call or a group call between multiple participants |
| session | Get callRecord, List sessions | A peer-to-peer call contains a single session between the two participants in the call. Group calls contain one or more session entities. In a group call, each session is between the participant and a service endpoint. |
| segment | Get callRecord, List sessions | A segment represents a media link between two endpoints. |
The callRecord entity represents a single peer-to-peer call or a group call between multiple participants, sometimes referred to as an online meeting. A peer-to-peer call contains a single session between the two participants in the call. Group calls contain one or more session entities. In a group call, each session is between the participant and a service endpoint. Each session contains one or more segment entities. A segment represents a media link between two endpoints. For most calls, only one segment will be present for each session, however sometimes there may be one or more intermediate endpoints. For more details click here
Below is the main architecture diagram including the components to deploy Teams CallRecords Activity Logs Connector:

Deployment steps:
Register an App
Create and register an Azure AD app to handle the authentication and authorization to collect data from the Graph API. Here are the steps – navigate to the Azure Active Directory blade of your Azure portal and follow the steps below:
- Click on ‘App Registrations’
- Select ‘New Registration’
- Give it a name and click Register.
- Click ‘API Permissions’ blade.
- Click ‘Add a Permission’.
- Click ‘Microsoft Graph’.
- Click ‘Application Permissions’.
- Search for 'CallRecords' and check CallRecords.Read.All. Also, search for 'Directory' and check Directory.ReadWrite.All, then click 'Add permissions'.
- Click ‘grant admin consent’.
- Click ‘Certificates and Secrets’.
- Click ‘New Client Secret’
- Enter a description, select ‘never’. Click ‘Add’.
- Note: Click copy next to the new secret and store it somewhere temporarily. You cannot come back to get the secret once you leave the blade.
- Copy the client Id from the application properties and store it.
- Copy the tenant Id from the main Azure Active Directory blade and store it.




Deploy a Logic App
The last step is to collect the CallRecords activity data and ingest it into Azure Sentinel via a Logic App.
Navigate to Azure Sentinel workspace, click at Playbooks blade and follow the steps below:
- Click on ‘Add Playbook’
- Select ‘Resource Group’, type a name to your logic app for example ‘TeamsCalls-SecurityGraphAPI’ and toggle on ‘Log Analytics’
- Click ‘Review + Create’ then ‘Create’
- Open your new logic app ‘TeamsCalls-SecurityGraphAPI’
- Under ‘Logic app designer’, add the following steps:
- Add a 'Recurrence' step and set the interval, for example to 10 minutes
- Add an 'HTTP' step to create the CallRecords subscription. Creating a subscription will subscribe a listener application to receive change notifications when the requested type of change occurs to the specified resource in Microsoft Graph; for more details, see Create Subscriptions via Microsoft Graph API
- Method: POST
- URI: https://graph.microsoft.com/beta/subscriptions
- Body: note that you can edit the 'changeType' value, for example to 'created,updated'; 'notificationUrl' is the subscription notification endpoint (for more details, see notificationUrl)
{
"changeType": "created",
"clientState": "secretClientValue",
"expirationDateTime": "2022-11-20T18:23:45.9356913Z",
"latestSupportedTlsVersion": "v1_2",
"notificationUrl": "https://outlook.office.com/webhook/3ec886e9-86ef-4c86-bfff-2d0321f3313e@2006d214-5f91-4166-8d92-95f5e3ad9ec6/IncomingWebhook/9c6e121ed--x-x-x-x99939f71721fcbcc7/03c99422-50b0-x-x-x-ea-a00e-2b0b-x-x-x-12d5",
"resource": "/communications/callRecords"
}
- Authentication Type: Active Directory OAuth
- Tenant: with Tenant ID copied above
- Audience: https://graph.microsoft.com
- Client ID: with Client ID copied above
- Credential Type: Secret
- Secret: with Secret value copied above
- Add ‘HTTP’ step to list all subscriptions:
- Method: GET
- URI: https://graph.microsoft.com/v1.0/subscriptions
- Authentication Type: Active Directory OAuth
- Tenant: with Tenant ID copied above
- Audience: https://graph.microsoft.com
- Client ID: with Client ID copied above
- Credential Type: Secret
- Secret: with Secret value copied above
- If you want to get all session details for a specific call record ID, follow the steps below. Note that the example below uses a single CallRecord ID stored in a variable for the sake of demonstration; you can simply add a loop step to iterate over all the call record IDs received from the CallRecords subscription created above:
- Method: GET
- URI: https://graph.microsoft.com/beta/communications/callRecords/@{variables(‘TEAMSCallRecordsID’)}/sessions
- Authentication Type: Active Directory OAuth
- Tenant: with Tenant ID copied above
- Audience: https://graph.microsoft.com
- Client ID: with Client ID copied above
- Credential Type: Secret
- Secret: with Secret value copied above
- Add the 'Send TEAMS CallRecords Data to Azure Sentinel LA-Workspace' step, after successfully creating the connection with your Azure Sentinel Workspace ID & primary key:
- JSON Request Body: Body
- Custom Log Name: TEAMSGraphCallRecords



The complete Playbook code view has been uploaded to the GitHub repo as well; please click here for more details and check out the readme section.
Monitoring TEAMS CallRecords Activity
When the Playbook runs successfully, it will create a new custom log table 'TEAMSGraphCallRecords_CL' containing the CallRecords activity logs. You might need to wait a few minutes for the new custom log table to be created and the CallRecords activity logs to be ingested.
Navigate to Azure Sentinel workspace, click at Logs blade and follow the steps below:
- Tables > Group by: Solution > Custom Logs: TEAMSGraphCallRecords_CL
- Below is the list of the main attributes that are ingested:
- TimeGenerated
- Type_s: groupCall
- modalities_s: Audio, Video, ScreenSharing, VideoBasedScreenSharing
- LastModifiedDateTime
- StartDateTime, endDateTime
- joinWebUrl_s
- organizer_user_displayname_s
- participants_s
- sessions_odata_context_s
- As you can see from the results below we get the complete TEAMS CallRecords activity logs.

Parsing the Data
Before building any detections or hunting queries on the ingested TEAMS CallRecords Activity data we can parse and normalize the data via a KQL Function to make it easier to use:

The parsing function has been uploaded to the GitHub repo as well.
In Part 2, we will share a couple of hunting queries and upload them to GitHub. It's worth exploring the Microsoft Graph API, as there are other Teams-related APIs whose logs can be ingested based on your requirements and use cases:
- TeamsActivity:
- Read all users’ teamwork activity feed
- TeamsAppInstallation:
- Read installed Teams apps for all chats
- Read installed Teams apps for all teams
- Read installed Teams apps for all users
- TeamsApp
- Read all users’ installed Teams apps
…etc

We will be continuing to develop detections and hunting queries for Microsoft 365 solutions data over time so make sure you keep an eye on GitHub. As always if you have your own ideas for queries or detections please feel free to contribute to the Azure Sentinel community.
by Scott Muniz | Aug 7, 2020 | Alerts, Microsoft, Technology, Uncategorized
This article is contributed. See the original author and article here.
Scenario:
Transport Layer Security (TLS) and its deprecated predecessor Secure Sockets Layer (SSL) are cryptographic protocols designed to provide communications security over a computer network. Several SSL/TLS versions are available, but the newer versions were created to address the security issues found in the previous ones.
It's important to use the latest TLS version to make sure you have a secure way of exchanging keys, encrypting data, and authenticating message integrity during all communications.
This means the client and server should support the same SSL/TLS version.
Azure Cache for Redis currently supports TLS 1.0, 1.1 and 1.2, but there are some changes planned to the TLS versions and cipher suites supported by Azure Cache for Redis:
- Azure Cache for Redis will stop supporting TLS versions 1.0 and 1.1.
After this change, your application will be required to use TLS 1.2 or later to communicate with your cache.
- Additionally, as a part of this change, Azure Cache for Redis will remove support for older, insecure cipher suites. The supported cipher suites will be restricted to the following when the cache is configured with a minimum TLS version of 1.2:
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256
These changes were announced more than one year ago and should have already occurred, but they were postponed due to COVID-19. Please stay up to date on these changes via this link:
Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
Actions:
As the client and server should support the same SSL/TLS version, the client application will be required to use TLS 1.2 or later to communicate with your cache.
1. Changing client application to use TLS 1.2
In StackExchange.Redis and in most other client libraries, you may need to change your connection string and add the ssl=true and sslprotocols=tls12 parameters (see the example after the list below), but this may be slightly different in each client library, and some other changes may also be needed.
You can follow the documentation Configure your application to use TLS 1.2 to verify what changes are needed and whether other client environment changes are required to use the latest TLS version in your client application.
.NET Framework: StackExchange.Redis, ServiceStack.Redis
.NET Core: all .NET Core clients
Java: Jedis, Lettuce, and Redisson
Node.js: Node Redis, IORedis
PHP: Predis, PhpRedis
Python: Redis-py
GO: Redigo
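For example, with StackExchange.Redis a TLS 1.2 connection string would look something like the line below (a sketch; the cache name and access key are placeholders, and 6380 is the SSL port):

<your-cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=true,sslprotocols=tls12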
2. Changing Redis Minimum TLS version on Azure side
To disable old TLS versions on your Azure Redis instance, you may need to change the minimum TLS Version to 1.2.
This may take some minutes to be applied, and you may use the PowerShell script below to make sure the changes have been applied.
– Using the Azure Portal:
– On Azure Portal, on your Azure Redis blade, choose Advanced Settings
– Change the minimum TLS Version to 1.2
– Save the changes

– Using PowerShell
You can do the same using PowerShell. You need the Az.RedisCache module installed before running the command:
Set-AzRedisCache -Name <YourRedisName> -MinimumTlsVersion "1.2"
– Using CLI
Using the CLI, the --minimum-tls-version parameter is available only at Redis creation time; changing the minimum TLS version on an existing Azure Redis instance is not supported through the CLI.
3. Check TLS versions supported by Redis endpoint
You can use this PowerShell script to verify what TLS versions are supported by your Azure Cache for Redis endpoint.
If your Redis instance has VNet integration implemented, you may need to run this PowerShell script from a VM inside your VNet to have access to the Azure Redis instance:
param(
[Parameter(Mandatory=$true)]
[string]$redisCacheName,
[Parameter(Mandatory=$false)]
[string]$dnsSuffix = ".redis.cache.windows.net",
[Parameter(Mandatory=$false)]
[int]$connectionPort = 6380,
[Parameter(Mandatory=$false)]
[int]$timeoutMS = 2000
)
$redisEndpoint = "$redisCacheName$dnsSuffix"
$protocols = @(
[System.Security.Authentication.SslProtocols]::Tls,
[System.Security.Authentication.SslProtocols]::Tls11,
[System.Security.Authentication.SslProtocols]::Tls12
)
$protocols | % {
$ver = $_
$tcpClientSocket = New-Object Net.Sockets.TcpClient($redisEndpoint, $connectionPort )
if(!$tcpClientSocket)
{
Write-Error "$ver - Error Opening Connection: port $connectionPort on $redisEndpoint unreachable"
exit 1;
}
else
{
$tcpstream = $tcpClientSocket.GetStream()
$sslStream = New-Object System.Net.Security.SslStream($tcpstream,$false)
$sslStream.ReadTimeout = $timeoutMS
$sslStream.WriteTimeout = $timeoutMS
try
{
$sslStream.AuthenticateAsClient($redisEndpoint, $null, $ver, $false)
Write-Host "$ver Enabled"
}
catch [System.IO.IOException]
{
Write-Host "$ver Disabled"
}
catch
{
Write-Error "Unexpected exception $_"
}
}
}
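Assuming you save the script above as, for example, Test-RedisTlsVersions.ps1 (a hypothetical file name), you would run it like this and get an Enabled/Disabled line for each TLS version:

.\Test-RedisTlsVersions.ps1 -redisCacheName "mycachename"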
Conclusion:
Although Azure Cache for Redis currently still supports TLS 1.0, 1.1 and 1.2, it's important to move to TLS 1.2 only. Apart from being insecure, TLS 1.0 and 1.1 will soon be deprecated from the Azure Cache for Redis service.
For that reason, it is mandatory that all client applications are adapted in advance to support TLS 1.2 on their Azure Cache for Redis connections, to avoid any downtime in the service.
Related documentation:
Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
PowerShell Az.RedisCache module
CLI Az Redis Create command
TLS security blog
I hope this can be useful!
by Scott Muniz | Aug 7, 2020 | Alerts, Microsoft, Technology, Uncategorized
This article is contributed. See the original author and article here.
Hi all, Zoheb here publishing on behalf of a guest author, Morne Naude. So without further ado…
Hi Everyone,
Morne here again and welcome to the first blog in our series on Azure Active Directory Security where we will be sharing all details on how we helped our SMC customers reduce the attack vector by enabling Identity Protection in Azure.
If you have not read our introductory blog covering the entire background on our SMC Delivery Methodology, please do give it a read now before continuing here.
How Microsoft Mission Critical Team Helped Secure AAD
If an Electron Can Be in Two Places at Once, Why Can’t You …
Well, you can't. Period.
We refer to this as Atypical travel ("Sign in from an atypical location based on the user's recent sign-ins.") or Unfamiliar sign-in properties ("Sign in with properties we've not seen recently for the given user.").
Before we get on to more details on how we helped our SMC customer, here is some background information on Identity Protection & Risky Sign ins which may help you understand the subject better.
What is Azure Active Directory Identity Protection?
Identity Protection is a tool that allows organizations to accomplish three key tasks:
- Automate the detection and remediation of identity-based risks.
- Investigate risks using data in the portal.
- Export risk detection data to third-party utilities for further analysis.
Identity Protection uses the learnings Microsoft has acquired from their position in organizations with Azure AD, the consumer space with Microsoft Accounts, and in gaming with Xbox to protect your users. Microsoft analyses 6.5 trillion signals per day to identify and protect customers from threats.
These are just a few examples of the risk types Azure Identity Protection uses in its classifications.
Risk Classification

| Risk detection type | Description |
| --- | --- |
| Atypical travel | Sign in from an atypical location based on the user's recent sign-ins. |
| Anonymous IP address | Sign in from an anonymous IP address (for example: Tor browser, anonymizer VPNs). |
| Unfamiliar sign-in properties | Sign in with properties we've not seen recently for the given user. |
| Malware linked IP address | Sign in from a malware linked IP address. |
| Leaked credentials | This risk detection indicates that the user's valid credentials have been leaked. |
| Azure AD threat intelligence | Microsoft's internal and external threat intelligence sources have identified a known attack pattern. |
Coming back to our customer's pain points: a high number of risky sign-ins was being detected every day across the organization. We spotted these during the Azure AD assessments as well as from observations of the risky sign-in logs.
Working with the Mission Critical team gives our customers the ultimate personalized support experience from a designated team that:
- Knows you and knows what your solution means to your enterprise
- Works relentlessly to find every efficiency to help you get ahead and stay ahead
- Advocates for you and helps ensure you get the precise guidance you need.
Knowing the customer well helped us understand the extent of the problem, to work closely with their Identity team and recommend improvements to them.
Various attack trends related to password spray, breach replay, phishing, etc. were observed in Azure AD Connect Health, and getting into a better security posture was an urgent need of the hour.
After speaking with the messaging team, we realized that a few of the risky users had strange mailbox rules created and were spamming multiple users in the organization (more details to come in one of our following blogs).
If you are interested to read more about Forms Injection Attacks on emails please see: https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-outlook-rules-forms-attack?view=o365-worldwide
Our customer had no policy or process configured to tackle this issue; they only had Multi-Factor Authentication (MFA) in place for Global Admins.
Policies
We suggested enabling User Risk as well as Sign-in Risk policies for users deemed "high risk". Below are some details on how this was enabled for our customer.
Identity Protection analyses signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn’t performed by the user. Administrators can decide based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require multi-factor authentication.
We enabled a Sign-in Risk policy to force MFA for all "high risk" users, as per the configuration below.

Identity Protection can calculate what it believes is normal for a user's behaviour and use that as a basis for risk decisions. User risk is a calculation of the probability that an identity has been compromised. Administrators can decide based on this risk score signal to enforce organizational requirements.
Considering the circumstances, we suggested that our customer implement the User Risk policy below. This policy ensures that any "high risk" user is required to change their password, as per the configuration below.

So, these two policies ensured that all risky sign-in users are forced to use MFA and to change their passwords.
Notification for Risky users
Our customer has Azure AD P2 licensing, so we could leverage the full set of Identity protection features.
We configured the users at risk email in the Azure portal under Azure Active Directory > Security > Identity Protection > Users at risk detected alerts.
By default, recipients include all Global Admins. Global Admins can also add other Global Admins, Security Admins, Security Readers as recipients.
Optionally, you can add additional emails to receive alert notifications; this feature is in preview, and the users defined must have the appropriate permissions to view the linked reports in the Azure portal. We included members of the Identity and Security teams as well.

The weekly digest email contains a summary of new risk detections, such as:
- New risky users detected
- New risky sign-ins detected (in real-time)
- Links to the related reports in Identity Protection

This resulted in a drastic reduction in the number of risky users and risky sign-ins. Additionally, we helped implement a process of investigation and remediation of these at-risk accounts, from the service desk to the internal security department. Currently, the business is in the process of including medium-risk accounts in the above policies.
NOTE: The features and guidelines implemented in this case were specific to this customer’s requirements and environment, so this is not a “General” guideline to enable any of the mentioned features.
Hope this helps,
Morne
by Scott Muniz | Aug 7, 2020 | Alerts, Microsoft, Technology, Uncategorized
This article is contributed. See the original author and article here.

August is already a week over, can you believe it? That doesn't stop this team; they're everywhere, doing great things to make the experience better for you in the docs, products and services. Have some feedback? Comment on their posts, contact them on Twitter or wherever you see them engaging.
Cloud Advocates List on Microsoft Docs
Azure Cloud Advocates Twitter List
Follow @azureadvocates on Twitter
Content Round Up
WebAssembly, your browsers sandbox
Aaron Powell
We’ve been doing web development for 30+ years and in all that time have you ever stopped to think, “This SPA needs more C++”? Well thanks to the power of WebAssembly you can finally bring C, C++, Rust, Go and other high level languages to the browser.
So does this mean that we can replace our JavaScript with these other languages? Probably not, so what is the role that WebAssembly can play in building web applications? And most importantly, what does it look like as a web developer to try and incorporate these platforms that have traditionally been on the server?
MS Dev Twitch Stream: Build a Virtual Reality Snake Game with JavaScript! – Aug 5th
Cassie Breviu
Did you know that you can create Virtual Reality (VR) websites with JavaScript using BabylonJS? In this session you will get an introduction on how to build WebVR sites! We will discuss the main concepts used in building 3D experiences. Then go step-by-step to build out a basic version of the classic snake game. By the end of this session you should have a starting point to building your first VR game!
Azure Monitor for Azure Arc enabled servers
Thomas Maurer
As you know Azure Arc for servers is currently in preview and allows you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers, similar to how you manage native Azure virtual machines. With the new extensions which were introduced a couple of weeks ago, you can now also use Azure Monitor to not only monitor your servers in Azure but also servers running on-premises or at other cloud providers. This will provide you with cloud-native management for your Linux and Windows servers.
Docker, FROM scratch
Aaron Powell
Docker’s popularity has exploded over the last couple of years, especially in the DevOps space, but unless you’ve spent a lot of time in that area it can be a confusing technology to wrap your head around.
So let’s step back and start looking at Docker from, well, FROM scratch (and we’ll understand just what that means).
With minimal starting knowledge of Docker we’ll look into what it is, cover off all the core concepts from images to containers, volumes to networks, and how we can compose environments. We’ll also look at how to use Docker in Dev, not just DevOps and how containers can be useful tools without being something to run production infrastructure on.
HobbyHack Augmented Reality on the Web Workshop Video
Aysegul Yonet
Augmented Reality applications are becoming common user experiences on mobile devices. WebXR gives you a way to build for all mobile platforms as well as Augmented Reality headsets once. In this workshop you will learn: * The basic concepts of WebXR and 3D on the web. * Possible use cases for Augmented Reality mobile applications that solve real-world problems. * How to get started prototyping really fast.
Assess AWS VMs with Azure Migrate
Sarah Lean
Using Azure Migrate assess and understand what it would look like if you moved AWS VMs over to Azure.
ITOpsTalk Blog: Step-by-Step: Traditional Windows Cluster setup in Azure
Pierre Roman
How to deploy a windows cluster using Azure Shared Disks
Blog/ Setting-up Visual Studio Codespaces for .NET Core (EN)
Justin Yoo
Since April 2020, Visual Studio Codespaces has been generally available. In addition, GitHub Codespaces has been provided as a private preview. Both are very similar to each other in terms of their usage, but there are differences between the two, which are discussed in this post. Throughout this post, I'm going to focus on .NET Core application development.
Add ISO DVD Drive to a Hyper-V VM using PowerShell
Thomas Maurer
Hyper-V offers the capability to add an ISO image to a virtual CD/DVD drive, and you can use Hyper-V Manager or PowerShell to do that. Here is how you can add an ISO to a Hyper-V virtual machine (VM) using PowerShell. There are two ways of doing it, depending on whether you already have a virtual DVD drive attached to the VM or you need to add one.
State of the Art in Automated Machine Learning
Francesca Lazzeri
- Automated Machine Learning (AutoML) is important because it allows data scientists to save time and resources, delivering business value faster and more efficiently
- AutoML is not likely to remove the need for a “human in the loop” for industry-specific knowledge and translating the business problem into a machine learning problem
- Some important research topics in the area are feature engineering, model transparency, and addressing bias
- There are several commercial and open-source AutoML solutions available now for automating different parts of the machine learning process
- Some limitations of AutoML are the amount of computational resources required and the needs of domain-specific applications
Building & Debugging Microservices faster using Kubernetes and Visual Studio
Shayne Boyer
Shayne walks through using Docker, Visual Studio 2019 and Kubernetes during his talk at .NET Conf 2019: Focus on Microservices. This demonstrations shows how you can take advantage of Visual Studio tools to build, debug and deploy microservices to Kubernetes faster.
Video: Assess AWS VMs with Azure Migrate
Sarah Lean
In this video Sarah Lean talks about how you can utilize Azure Migrate to assess your virtual machines hosted in AWS with a view to migrating them to Azure.
Let’s Talk About Azure AD and Microsoft Identity Platform
Matt Soucoup
Have you been following along with all the changes in Azure Active Directory and the various Microsoft Identity branded things? The changes and new features are amazing … but change can be confusing. So, so many terms. And how do they all fit together? Let's have a little chat…
Authenticate a ASP.NET Core Web App With Microsoft.Identity.Web
Matt Soucoup
So you want to authenticate your ASP.NET Core web app with Azure AD. It sounds daunting, but with a little help from the `Microsoft.Identity.Web` SDK and knowing how it fits in with the pieces of Azure AD’s Applications, it’s not too bad.
The intention of this article is straightforward. Configure Azure AD to allow users of your application to sign-in. And create an ASP.NET Core web application that signs in.