Microsoft Endpoint Protection (MD ATP), Commonly Used Queries and Examples.

This article is contributed. See the original author and article here.

 


Hello IT Pros,  


I have collected these Microsoft Endpoint Protection (Microsoft Defender ATP) advanced hunting queries from the Microsoft demo environment (mdemo) and GitHub for your convenient reference. As you know, you or your InfoSec team may need to run a few of these queries in your daily security monitoring tasks.  


To save a query:



  • In securitycenter.windows.com, go to Advanced hunting and create the query 

  • Copy and paste the content 

  • Save it for future re-use 
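Beyond the portal, saved queries can also be run programmatically through the Microsoft Defender ATP Advanced Hunting API. The sketch below only builds the request; the bearer token is a placeholder, and you would obtain a real one via an Azure AD app registration before POSTing with the HTTP client of your choice.

```python
import json

# Sketch: preparing a call to the Microsoft Defender ATP Advanced Hunting API
# (POST /api/advancedqueries/run). The token below is a placeholder.
API_URL = "https://api.securitycenter.microsoft.com/api/advancedqueries/run"

def build_hunting_request(query: str, token: str):
    """Return the URL, headers, and JSON body for an Advanced Hunting API call."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    body = json.dumps({"Query": query})
    return API_URL, headers, body

url, headers, body = build_hunting_request(
    'DeviceNetworkEvents | where RemoteIP == "52.176.49.76"', "<placeholder-token>"
)
print(url)
print(json.loads(body)["Query"])
```

From here you would POST `body` to `url` with the given headers; the response JSON contains a `Results` array of rows.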



Search Device Events by IP address 



DeviceNetworkEvents 


| where RemoteIP == "52.176.49.76" 


 



 



List devices with a scheduled task created by a virus 



DeviceProcessEvents  


| where FolderPath endswith "schtasks.exe" and ProcessCommandLine has " /create " and AccountName != "system" 


 



 



List devices containing a known virus file name 



DeviceFileEvents 


| where FileName == 'Invoice.pdf.exe' 


 



 



List devices with phishing file extensions (double extensions) such as .pdf.exe, .docx.exe, .doc.exe, .mp3.exe  



DeviceProcessEvents  


| where Timestamp > ago(7d) 


| where FileName endswith ".pdf.exe" 
    or FileName endswith ".doc.exe" 
    or FileName contains ".docx.exe" 
    or FileName contains ".mp3.exe" 


| project Timestamp, DeviceName, FileName, AccountSid, AccountName, AccountDomain 


| top 100 by Timestamp 
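The same double-extension check can be mirrored off-platform, for example when triaging an exported file listing. A minimal Python sketch (the suffix list simply copies the query above):

```python
# Flag file names that hide an executable behind a document-like extension,
# mirroring the endswith/contains checks in the hunting query above.
SUSPICIOUS_SUFFIXES = (".pdf.exe", ".doc.exe", ".docx.exe", ".mp3.exe")

def is_double_extension(filename: str) -> bool:
    name = filename.lower()
    return any(name.endswith(suffix) for suffix in SUSPICIOUS_SUFFIXES)

flagged = [f for f in ["Invoice.pdf.exe", "report.docx", "song.mp3.exe"]
           if is_double_extension(f)]
print(flagged)  # ['Invoice.pdf.exe', 'song.mp3.exe']
```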


 



 



List devices blocked by Windows Defender Exploit Guard 



DeviceEvents 


| where ActionType =~ "ExploitGuardNetworkProtectionBlocked" 
| summarize count(RemoteUrl) by InitiatingProcessFileName, RemoteUrl, Audit_Only = tostring(parse_json(AdditionalFields).IsAudit) 


| sort by count_RemoteUrl desc 


 



 



List all files created during the last hour 



DeviceFileEvents 


| where Timestamp > ago(1h) 


| project FileName, FolderPath, SHA1, DeviceName, Timestamp 


| limit 1000 


 



 



List devices that have a specific file hash 



DeviceFileEvents 


| where SHA1 == "4aa9deb33c936c0087fb05e312ca1f09369acd27" 


 



 



List IP addresses blocked by firewall rules 



DeviceEvents 


| where ActionType in ("FirewallOutboundConnectionBlocked", "FirewallInboundConnectionBlocked", "FirewallInboundConnectionToAppBlocked") 
| project DeviceId, Timestamp, InitiatingProcessFileName, InitiatingProcessParentFileName, RemoteIP, RemotePort, LocalIP, LocalPort 


| summarize MachineCount=dcount(DeviceId) by RemoteIP 


| top 100 by MachineCount desc 


 



 



Look for the public IP addresses of devices that failed to log on multiple times, using multiple accounts, and eventually succeeded. 


 



DeviceLogonEvents 


| where isnotempty(RemoteIP) 
    and AccountName !endswith "$" 
    and RemoteIPType == "Public" 
| extend Account=strcat(AccountDomain, @"\", AccountName) 
| summarize  
    Successful=countif(ActionType == "LogonSuccess"), 
    Failed = countif(ActionType == "LogonFailed"), 
    FailedAccountsCount = dcountif(Account, ActionType == "LogonFailed"), 
    SuccessfulAccountsCount = dcountif(Account, ActionType == "LogonSuccess"), 
    FailedAccounts = makeset(iff(ActionType == "LogonFailed", Account, ""), 5), 
    SuccessfulAccounts = makeset(iff(ActionType == "LogonSuccess", Account, ""), 5) 
    by DeviceName, RemoteIP, RemoteIPType 


| where Failed > 10 and Successful > 0 and FailedAccountsCount > 2 and SuccessfulAccountsCount == 1 
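To make the countif/dcountif logic above concrete, here is a small Python re-implementation of the aggregation over a handful of fabricated logon events (thresholds lowered from the query's Failed > 10 to fit the toy data):

```python
from collections import defaultdict

# Fabricated events: (device, remote_ip, account, action), mimicking
# DeviceLogonEvents rows grouped by (DeviceName, RemoteIP).
events = [
    ("PC1", "8.8.8.8", "corp\\alice", "LogonFailed"),
    ("PC1", "8.8.8.8", "corp\\bob", "LogonFailed"),
    ("PC1", "8.8.8.8", "corp\\carol", "LogonFailed"),
    ("PC1", "8.8.8.8", "corp\\carol", "LogonSuccess"),
]

stats = defaultdict(lambda: {"Successful": 0, "Failed": 0,
                             "FailedAccounts": set(), "SuccessfulAccounts": set()})
for device, ip, account, action in events:
    row = stats[(device, ip)]
    if action == "LogonSuccess":
        row["Successful"] += 1           # countif(ActionType == "LogonSuccess")
        row["SuccessfulAccounts"].add(account)  # dcountif over Account
    else:
        row["Failed"] += 1
        row["FailedAccounts"].add(account)

for key, row in stats.items():
    # Thresholds lowered (Failed > 2) so the toy data trips the rule.
    suspicious = (row["Failed"] > 2 and row["Successful"] > 0
                  and len(row["FailedAccounts"]) > 2
                  and len(row["SuccessfulAccounts"]) == 1)
    print(key, suspicious)
```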


 


 



From WD ATP Demo 



Look for machines failing to log-on to multiple machines or using multiple accounts 



// Note – RemoteDeviceName is not available in all remote logon attempts 


DeviceLogonEvents 


| where isnotempty(RemoteDeviceName) 


| extend Account=strcat(AccountDomain, @"\", AccountName) 


| summarize  


    Successful=countif(ActionType == "LogonSuccess"), 
    Failed = countif(ActionType == "LogonFailed"), 
    FailedAccountsCount = dcountif(Account, ActionType == "LogonFailed"), 
    SuccessfulAccountsCount = dcountif(Account, ActionType == "LogonSuccess"), 
    FailedComputerCount = dcountif(DeviceName, ActionType == "LogonFailed"), 
    SuccessfulComputerCount = dcountif(DeviceName, ActionType == "LogonSuccess") 


    by RemoteDeviceName 


| where 


    Successful > 0 and 


    ((FailedComputerCount > 100 and FailedComputerCount > SuccessfulComputerCount) or 


        (FailedAccountsCount > 100 and FailedAccountsCount > SuccessfulAccountsCount)) 


 



From WD ATP Demo 



List all devices whose names start with the prefix FC- 



DeviceInfo   


| where DeviceName startswith "FC-" 


 



 



List Windows Defender scan actions completed or cancelled 



DeviceEvents 


| where ActionType in ("AntivirusScanCompleted", "AntivirusScanCancelled") 
| extend A = parse_json(AdditionalFields) 
| project Timestamp, DeviceName, ActionType, ScanType = A.ScanTypeIndex, StartedBy = A.User 


| sort by Timestamp desc  


 


 



 



List devices that accessed a bad URL 



DeviceNetworkEvents 


| where RemoteUrl == "www.advertising.com" 


| project Timestamp, DeviceName, ActionType, RemoteIP, RemoteUrl, InitiatingProcessFileName, InitiatingProcessCommandLine 


 



 



List all URLs accessed by a device whose name contains FC-DC 



DeviceNetworkEvents 


| where RemoteUrl != "www.advertising.com" and DeviceName contains "fc-dc" 


| project Timestamp, DeviceName, ActionType, RemoteIP, RemoteUrl, InitiatingProcessFileName, InitiatingProcessCommandLine 


 



 



 


GitHub Advanced Hunting Cheat Sheet: 

Find endpoints communicating to a specific domain. 


 Author: @maarten_goet 



let Domain = "http://domainxxx.com"; DeviceNetworkEvents | where Timestamp > ago(7d) and RemoteUrl contains Domain | project Timestamp, DeviceName, RemotePort, RemoteUrl | top 100 by Timestamp desc 


 



let is the statement that introduces variables. Here the variable "Domain" holds the value http://domainxxx.com. 



Finds PowerShell execution events that could involve a download. 


Author: @MicrosoftMTP  



union DeviceProcessEvents, DeviceNetworkEvents | where Timestamp > ago(7d) | where FileName in~ ("powershell.exe", "powershell_ise.exe") | where ProcessCommandLine has_any("WebClient", "DownloadFile", "DownloadData", "DownloadString", "WebRequest", "Shellcode", "http", "https") | project Timestamp, DeviceName, InitiatingProcessFileName, InitiatingProcessCommandLine, FileName, ProcessCommandLine, RemoteIP, RemoteUrl, RemotePort, RemoteIPType | top 100 by Timestamp 



union is the operator that combines the rows of multiple tables. 
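The has_any filter in this query roughly corresponds to a case-insensitive token check (KQL's has_any matches whole indexed terms; plain substring matching, as below, is a looser approximation). A Python sketch with the indicator list copied from the query:

```python
# Approximate the has_any filter: flag PowerShell command lines mentioning
# download-related tokens. Substring matching is looser than KQL term matching.
INDICATORS = ("WebClient", "DownloadFile", "DownloadData", "DownloadString",
              "WebRequest", "Shellcode", "http", "https")

def looks_like_download(cmdline: str) -> bool:
    lowered = cmdline.lower()
    return any(token.lower() in lowered for token in INDICATORS)

print(looks_like_download(
    "powershell.exe -c (New-Object Net.WebClient).DownloadString('http://x/a.ps1')"))
print(looks_like_download("powershell.exe Get-Process"))
```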



Find scheduled tasks created by a non-system account 


Author: @maarten_goet 



DeviceProcessEvents 


| where FolderPath endswith "schtasks.exe" and ProcessCommandLine has "/create" and AccountName != "system" 


| where Timestamp > ago(7d) 



 



 



 



 



Find possible clear-text passwords in the Windows registry.  


Author: @MicrosoftMTP 



DeviceRegistryEvents  


| where ActionType == "RegistryValueSet" 
| where RegistryValueName == "DefaultPassword" 
| where RegistryKey has @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" 
| project Timestamp, DeviceName, RegistryKey 
| top 100 by Timestamp 



 



Look up processes executed from a binary hidden in a Base64-encoded file.  


Author: @MicrosoftMTP 



DeviceProcessEvents 


| where Timestamp > ago(14d) 


| where ProcessCommandLine contains ".decode('base64')" or ProcessCommandLine contains "base64 --decode" or ProcessCommandLine contains ".decode64(" 
| project Timestamp, DeviceName, FileName, FolderPath, ProcessCommandLine, InitiatingProcessCommandLine 


| top 100 by Timestamp 
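These indicators flag command lines that decode Base64 payloads. When one fires, the usual next step is to decode the blob and inspect the hidden command, e.g. with Python's standard library (the encoded string below is a harmless example):

```python
import base64

# Decode a Base64 blob lifted from a suspicious command line to reveal
# the hidden payload. This sample payload is harmless.
blob = base64.b64encode(b"IEX (New-Object Net.WebClient)").decode()
decoded = base64.b64decode(blob).decode()
print(decoded)  # IEX (New-Object Net.WebClient)
```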



 



Search for applications that create or update a 7-Zip or WinRAR archive when a password is specified.  


Author: @PowershellPoet 



DeviceProcessEvents 
| where ProcessCommandLine matches regex @"\s[aukfAUKF]\s.*\s-p" 
| extend SplitLaunchString = split(ProcessCommandLine, ' ') 
| where array_length(SplitLaunchString) >= 5 and SplitLaunchString[1] in~ ('a','u','k','f') 
| mv-expand SplitLaunchString 
| where SplitLaunchString startswith "-p" 
| extend ArchivePassword = substring(SplitLaunchString, 2, strlen(SplitLaunchString)) 
| project-reorder ProcessCommandLine, ArchivePassword 



 


 


 


 


 


 


 


 


-p is the password switch and is immediately followed by a password without a space  
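The split / startswith("-p") / substring pipeline above maps almost line-for-line onto ordinary string handling. A Python sketch (the sample launch string is fabricated):

```python
# Pull the archive password out of a 7-Zip-style launch string such as
# "7z.exe a out.7z -psecret", mirroring the query's split/startswith/substring steps.
def extract_archive_password(cmdline: str):
    parts = cmdline.split(" ")                      # split(ProcessCommandLine, ' ')
    if len(parts) >= 5 and parts[1].lower() in ("a", "u", "k", "f"):
        for token in parts:
            if token.startswith("-p"):              # startswith "-p"
                return token[2:]                    # substring(SplitLaunchString, 2, ...)
    return None

print(extract_archive_password("7z.exe a out.7z secret.docx -pS3cret!"))  # S3cret!
```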


 



 



 



 



 


Reference: 


https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/agofunction 


https://docs.microsoft.com/en-us/windows/security/threat-protection/microsoft-defender-atp/advanced-hunting-query-language 


https://github.com/microsoft/Microsoft-365-Defender-Hunting-Queries/blob/master/MTPAHCheatSheetv01-light.pdf 


 


 


Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service.
The sample scripts are provided AS IS without warranty of any kind.
Microsoft further disclaims all implied warranties including, without limitation,
any implied warranties of merchantability or of fitness for a particular purpose.
The entire risk arising out of the use or performance of the sample scripts and documentation
remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation,
production, or delivery of the scripts be liable for any damages whatsoever (including,
without limitation, damages for loss of business profits, business interruption,
loss of business information, or other pecuniary loss) arising out of the use of or inability
to use the sample scripts or documentation, even if Microsoft has been advised of the possibility
of such damages.

 


 

Azure portal – Announcements and videos


This article is contributed. See the original author and article here.

Like any cloud service, the Azure portal itself also gets functionality updates and changes. So how do you keep up with what’s new?


 


Azure portal update blog


The Azure portal product team maintains a blog where, each month, it posts a summary of what's in the latest update. Visit the Azure portal blog and follow it for notifications via Tech Community or RSS. 


 


Did you know that Cosmos DB now has a serverless capacity model for consumption-based billing (in public preview)? That was in the October 2020 update!


 


Azure portal “How To” Video Series


The Azure portal team also publishes short “how to” videos on YouTube, under the Microsoft Azure channel.




Here are some of my favorites:


 


Improvements to the Linux Virtual Machine experience





 


How to monitor Azure Functions





 


How to connect to a storage account using private link





 


 


Learn more:



 


 

Learn TV Live Session – Develop secure IoT Solutions for Azure Sphere with IoT Central 20th Oct 2020


This article is contributed. See the original author and article here.

On 20th October at 1PM PDT (9PM BST), Mustafa Saifee, a Microsoft Learn Student Ambassador from SVKM Institute of Technology, India, and Dave Glover, a Cloud Advocate at Microsoft, will livestream an in-depth walkthrough of how to develop a secure IoT solution with Azure Sphere and IoT Central on Learn TV.


 


The content will be based on a module on Microsoft Learn, our hands-on, self-guided learning platform, and you can follow along at https://docs.microsoft.com/en-us/learn/modules/develop-secure-iot-solutions-azure-sphere-iot-central/ 


 


You can follow along with us live on October 20th, or join the Microsoft IoT Cloud Advocates in our IoT TechCommunity throughout October to ask your questions about IoT Edge development.


 


Meet the presenters


 

 



Mustafa Saifee


Microsoft Learn Student Ambassador


SVKM Institute of Technology


 



Dave Glover 
Senior Cloud Advocate, Microsoft


IoT and Cloud Specialist


 


Session details



In this session Dave and Mustafa will deploy an Azure Sphere application to monitor ambient conditions for laboratory conditions. The application will monitor the room environment conditions, connect to IoT Hub, and send telemetry data from the device to the cloud. You’ll control cloud to device communications and undertake actions as needed.



Learning Objectives



In this module, you will:



  • Create an IoT Central Application

  • Configure your Azure Sphere application to IoT Central

  • Build and deploy the Azure Sphere application

  • Display the environment telemetry in the IoT Central Dashboard

  • Control an Azure Sphere application using Azure IoT Central properties and commands



 


Ready to go


Our livestream will be shown live on this page and on Microsoft Learn TV on Tuesday 20th October 2020 (early morning of Wednesday 21st October in APAC time zones). 



 

This is a Global event and can be viewed LIVE at Microsoft Learn TV https://docs.microsoft.com/en-us/learn/tv/ at these times:



 

USA 1pm PDT
USA 4pm EDT
UK 9pm BST
EU 10pm CEST

India 1.30am (21st October) IST 






How to reconcile Orphan stretch databases


This article is contributed. See the original author and article here.

Stretch databases were introduced in SQL Server 2016 to let you store your cold data in Azure and access it transparently and securely without any change to queries or applications.


 


https://docs.microsoft.com/en-us/sql/sql-server/stretch-database/stretch-database?view=sql-server-ver15


 


Today I have been working on a case where a customer had disabled and re-enabled Stretch on a table several times. As a result, his cold data was distributed across several stretch databases, and only one of them was still replicating cold data.


Remember: if you need to stop the movement of cold data to your stretch database temporarily, the option to use is "Pause".


 




 


 


 


The way to reconcile all of them into a single stretch database is not difficult, but you will need to download the orphan data to your on-premises server using linked servers to the orphan databases, and then use INSERT INTO to merge it all into your on-premises main database.


 


 


1.- If Stretch is enabled, identify the stretch database that is active and receiving data.


 


Select Tasks | Stretch | Monitor for a database in SQL Server Management Studio to open the "Stretch Database Monitor".


 


The top portion of the monitor displays general information about both the Stretch-enabled SQL Server database and the remote Azure database.


 


The bottom portion of the monitor displays the status of data migration for each Stretch-enabled table in the database.


 


 




 


 


 


 


https://docs.microsoft.com/en-us/sql/sql-server/stretch-database/monitor-and-troubleshoot-data-migration-stretch-database?view=sql-server-ver15


 


 


2.- Create one linked server for each orphan database.


 




 


 




 


3.- Insert data from each linked server into the target on-premises table using INSERT INTO.


 


  insert into [<target_database_name>].[dbo].[<target_table_name>]

  select * from [<linked_server_name>].[<source_database_name>].[dbo].[<table_name>]
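To avoid typos in the four-part name (linked_server.database.schema.table) when there are many orphan databases, you can generate the statement per table. The helper below is a hypothetical illustration, not part of any SQL Server tooling:

```python
# Hypothetical helper: generate the INSERT ... SELECT statement for one orphan
# stretch database, using SQL Server four-part naming for the linked server.
def build_merge_statement(linked_server, source_db, table, target_db):
    source = f"[{linked_server}].[{source_db}].[dbo].[{table}]"
    target = f"[{target_db}].[dbo].[{table}]"
    return f"insert into {target} select * from {source}"

print(build_merge_statement("ORPHAN1", "StretchDb_old", "ColdOrders", "SalesDb"))
```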


 


 


4.- Let Stretch replication upload the data to the active stretch database.


 


 


See you soon!

Let's Encrypt SSL Certificate to Azure Functions


This article is contributed. See the original author and article here.

Throughout this series, I’m going to show how an Azure Functions instance can map APEX domains, add an SSL certificate and update its public inbound IP address to DNS.


 



  • APEX Domains to Azure Functions in 3 Ways

  • Let’s Encrypt SSL Certificate to Azure Functions

  • Updating DNS A Record for Azure Functions Automatically

  • Deploying Azure Functions via GitHub Actions without Publish Profile


 


In my previous post, I discussed how to map a root domain or APEX domain with an Azure Functions instance. Let’s bind an SSL certificate to the custom domain, which is generated by Let’s Encrypt so that we can enable HTTPS connection through the custom domain.


 


Let’s Encrypt


 


Let’s Encrypt is a non-profit organisation that issues free SSL certificates. Although free, they are widely accepted and backed by many tech companies. There are a few limitations, though: a certificate is valid for only three months, so we MUST renew an SSL certificate issued by Let’s Encrypt every three months. But don’t worry about certificate renewal as long as we have an automation process for it.
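The renewal math the automation has to get right is simple date arithmetic: renew when the remaining lifetime drops below a safety margin. A sketch (the 30-day margin is an assumption, not a Let's Encrypt requirement):

```python
from datetime import date, timedelta

# Let's Encrypt certificates last about 90 days; renew when fewer than
# RENEW_MARGIN days remain. The margin is an illustrative choice.
CERT_LIFETIME = timedelta(days=90)
RENEW_MARGIN = timedelta(days=30)

def needs_renewal(issued_on: date, today: date) -> bool:
    expires_on = issued_on + CERT_LIFETIME
    return expires_on - today <= RENEW_MARGIN

print(needs_renewal(date(2020, 7, 1), date(2020, 9, 15)))  # True
print(needs_renewal(date(2020, 9, 1), date(2020, 9, 15)))  # False
```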


 


Azure App Service Site Extension


 


Azure App Service provides a site extension feature. One of these extensions is the Let’s Encrypt Site Extension. It’s written as an Azure WebJob that runs every three months to renew the certificate automatically. It’s a pretty useful extension.


 



 


However, this extension has a few critical drawbacks as well.


 



  • It only runs on Windows-based App Service instances (including Azure Functions) because WebJob basically relies on the Windows platform. No Linux-based App Service, unfortunately.

  • It shares the runtime environment with the App Service instance. Therefore, whenever we deploy a new App Service instance, we MUST always deploy the extension and configure it.

  • If we deploy an application with the “delete all files before deployment” option, the WebJob will get deleted.


 


It doesn’t seem to be a way for production use. What else can we take to bind the SSL certificate for free?


 


Azure Functions App Only for SSL Certificate Management


 


We’re lucky enough to have Shibayan, who publishes an excellent Azure Functions app that manages Let’s Encrypt SSL certificates with no dependency on the App Service instances. Through the application, we can quickly generate and renew as many SSL certificates as we need and store them in Azure Key Vault. The stored SSL certificates are directly bound to Azure Functions instances. How fantastic!


 


First of all, run the ARM template below to provision an Azure Functions app and Key Vault instance. But, if you like, you can write your own ARM template and run it.


 



 


The provisioned Azure Functions app instance has the Managed Identity feature enabled, so the app can directly access the Key Vault instance to store SSL certificates. Once all relevant resources are provisioned, follow the process below.


 



Let’s call the Azure Functions app instance for SSL certificate management https://ssl-management.azurewebsites.net.



 


Authentication / Authorisation


 


The provisioned Azure Functions app includes an admin UI which is only accessible through authentication. Therefore, activate the Authentication / Authorisation feature like below:


 



 


Then, configure the Azure Active Directory for authentication. We use the account registered to Azure Active Directory. Set the management mode to Express and put the app name. The default value of the app name is the Function app name. We don’t need to change it.


 



 


Now, we got the Azure Functions app configured for SSL certificate management.


 


Azure DNS Configuration


 


I’m assuming that we use Azure DNS for domain management. Go to the resource group where the Azure DNS instance is provisioned and select Access control (IAM) blade, then assign a role to the Azure Functions app for SSL certificate management.


 



 



  • Role: DNS Zone Contributor

  • Assign access to: Function App

  • Selected members: Azure Functions app for SSL certificate management. Only apps that Managed Identity feature enabled appear here.


 


SSL Certificate Generation


 


Open a web browser and go to the admin UI for SSL certificate management at https://ssl-management.azurewebsites.net/add-certificate. The first time you access it, you’ll be asked to log in.


 



 


Once logged-in, the admin UI appears. For APEX domain, enter nothing to the Record name field then click the Add button. If you want to issue the certificate for subdomains, add the subdomain to the Record name field. You can also issue one certificate for as many domains as you want. Here we generate one certificate for both cnts.com and dev.cnts.com.


 



 



If you prefer to create a separate certificate for each domain, cnts.com and dev.cnts.com, then run the registration twice.



 


Once completed, the pop-up appears like:


 



 


Let’s go to the Azure Key Vault instance to check whether the SSL certificate has been generated or not.


 



 


SSL Certificate Binding to APEX Custom Domain on Azure Functions


 


We’ve got the custom APEX domain, mapped from the previous post. Now, it’s time to bind the certificate with the domain. Go to the Azure Functions instance that I want to attach the certificate and select the TLS/SSL settings blade. Click the Private Key Certificates (.pfx) tab then Import Key Vault Certificate button to import the one stored in our Key Vault instance.


 



 


Once imported, you can see the screen below. As we generated one certificate for both cnts.com and dev.cnts.com, it’s normal to see both domain names.


 



 


Let’s select the Custom domains blade. The domain is still not bound with the SSL certificate that we just imported. Click the Add binding link, choose cnts.com for the Custom domain field, cnts.com,dev.cnts.com for the Private Certificate Thumbprint field. And finally, choose SNI SSL for the TLS/SSL Type field.


 



 


Now we can see the SSL certificate is properly bound with the custom APEX domain.


 



 




 


So far, we’ve walked through how a Let’s Encrypt SSL certificate can be bound to a custom APEX domain on an Azure Functions instance. In the next post, I’ll discuss how the inbound IP of the Azure Functions instance is automatically updated in the A record of Azure DNS.


 


This article was originally published on Dev Kimchi.

Monitoring O365 Service Status with Azure Sentinel


This article is contributed. See the original author and article here.

During the past few weeks Microsoft has experienced some unfortunate outages in our cloud services.  These outages led a number of organizations I support to reach out and ask, “How can I better proactively monitor the status of Office 365?”.  This gave me an idea… but before we get to that, let’s discuss where you can find service status information for Office 365 and Azure.


 


Office 365 Service Status


The primary location to find the status of Office 365 Services is inside the Admin Portal using the Service Health Dashboard (https://portal.office.com/Adminportal/Home#/servicehealth) .


 

 

In addition to this portal, if you are a Twitter user you can follow Microsoft 365 Status (@MSFT365Status) to get notifications of incidents within Microsoft 365:


O365 Status Twitter.png


If you are interested in the status of Microsoft Azure, you can leverage the Service Health blade (https://aka.ms/azureservicehealth):


Azure Service Health.jpg


These are all very effective methods of tracking service status, but what if I am leveraging Azure Sentinel as my SIEM and I want to track the Office 365 service status?   Well, that was the question that got me started on this article.  I find it easiest to learn new technology by having a problem to solve or an actual goal to achieve.  So I decided this was a good use case to learn more about how to get custom data, in this case REST API data, into Azure Sentinel, use that data to alert on service degradation, and then create a new workbook to visualize it.  A pretty lofty goal for a guy with almost zero coding experience.  Let’s see how it worked out…


 


Step One:  Getting Office 365 Service Status via API


As with just about every other component of the Microsoft Cloud, Office 365 Service Status can be accessed via the Office 365 Management API ( https://docs.microsoft.com/en-us/office/office-365-management-api/office-365-service-communications-api-reference ).  I decided the most effective way to pull this data and send it to Azure Sentinel was to use an Azure Logic App.  If you are not familiar with Azure Logic Apps, it is a low code/no code cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.  Azure Logic Apps are a sibling to Microsoft Power Automate that is part of Office 365, so learning one of these services translates to the other.  This was very helpful because a Microsoft MVP in the UK, Lee Ford, had written a blog post in 2019 on accessing the Service Status via Power Automate (which was called Flow at the time):  https://www.lee-ford.co.uk/get-latest-office-365-service-status-with-flow-or-powershell/ .  I built on Lee’s idea to create my Logic App:


 


I started by creating a new Logic App that runs on a schedule and connects to the Office 365 Management API to get the service status via an “HTTP” action.  I chose every 4 hours; you can decide how often to pull the data for your use case.


Logic Apps 1.png


 


The first thing that probably stands out to you is that the “Security Guy” hard-coded the authentication secrets into the Logic App.  I only did this for ease of development.  If I were building this application for a production environment, I would leverage Azure Key Vault to securely store this information (https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-azure-resource-manager-templates-overview#best-practices—workflow-definition-parameters ). 
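To make the HTTP action concrete, here is a rough Python sketch of the two requests the Logic App issues: an AAD token request (client-credentials flow) and the CurrentStatus GET of the Service Communications API. All IDs and secrets below are placeholders, not real values.

```python
# Sketch of the Logic App's HTTP call, assuming the documented
# Service Communications API endpoints. Placeholder values only.
import urllib.parse

RESOURCE = "https://manage.office.com"

def status_url(tenant_id):
    # Current service status endpoint of the Service Communications API.
    return f"{RESOURCE}/api/v1.0/{tenant_id}/ServiceComms/CurrentStatus"

def token_request(tenant_id, client_id, client_secret):
    # URL and form body for the (v1) OAuth2 client-credentials token request.
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": RESOURCE,
    })
    return url, body

print(status_url("contoso.onmicrosoft.com"))
```

The Logic App sends the token in an `Authorization: Bearer` header on the GET; the sketch only shows the request shapes, it performs no network calls.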


 


Next I used a “Parse JSON” action to parse the information returned from the HTTP GET.  I used the schema from Lee Ford’s blog post as my sample payload.


Logic Apps 2.png


 

Now the last step is a little tricky.  We need to take the returned JSON payload and send it to Azure Sentinel.  This payload is an array, so it must be iterated through.  Luckily, Logic Apps is built for people with minimal coding experience and helps guide you through the experience.  Since we want to send this data to Azure Sentinel, which is built on Azure Log Analytics, we choose the “Send Data to Log Analytics” Action.  When I click in the box for “JSON Request body” I am provided a pick list of returned information to choose from.  However, the item we need to use is not shown, so you need to click the “see more” option in the pick list.  This will expose the “value” item, which is what we need.


Logic Apps 3.png


 

When we finish filling in the required parameters, Logic Apps will automatically recognize this is an array and create a For Each container to iterate through the values…pretty cool!


Logic Apps 4.png


 

We are not finished yet.  We don’t actually want “value” in the JSON Request Body field.  We want whatever is the “Current Item” in the loop.  So, delete “value” in the Send Data action and go back to the bottom of your pick list and choose Current Item.


Logic Apps 5.png
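In code terms, the For Each container the Logic App builds is equivalent to something like the following; the sample payload is trimmed and its field names, while based on the Service Communications API response shape, should be treated as illustrative.

```python
import json

# Trimmed sample of a CurrentStatus response; "value" is the array the
# For Each container iterates over, one "Send Data" call per item.
payload = json.loads("""
{"value": [
  {"WorkloadDisplayName": "Exchange Online", "StatusDisplayName": "Service degradation"},
  {"WorkloadDisplayName": "SharePoint Online", "StatusDisplayName": "Normal service"}
]}
""")

sent = []
for item in payload["value"]:       # the implicit For Each over "value"
    sent.append(json.dumps(item))   # stand-in for one "Send Data to Log Analytics" call

print(len(sent))  # one record per service
```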


 

And that’s it!  You have now ingested Office 365 Service Status to Azure Sentinel.  One thing I forgot to point out, Azure Log Analytics will automatically create the custom log the first time the Logic App runs.  It will add a table called “yourname_CL”. 


Sentinel Custom Log Table.png
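Behind the “Send Data to Log Analytics” action, the HTTP Data Collector API authenticates with a SharedKey signature. A minimal Python sketch of that signing scheme follows; the workspace ID and key are dummy values, and the Log-Type header you send with the request is what becomes the “yourname_CL” table.

```python
import base64, hashlib, hmac

def build_signature(workspace_id, shared_key, content_length, date_rfc1123):
    # String-to-sign format defined by the Log Analytics HTTP Data
    # Collector API; the workspace shared key is base64-encoded.
    string_to_sign = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{date_rfc1123}\n/api/logs")
    key = base64.b64decode(shared_key)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# Dummy values for illustration only.
sig = build_signature(
    workspace_id="00000000-0000-0000-0000-000000000000",
    shared_key=base64.b64encode(b"dummy-key").decode(),
    content_length=128,
    date_rfc1123="Mon, 19 Oct 2020 12:00:00 GMT",
)
print(sig)
```

The Logic App connector does this for you; the sketch is just to show what the connector is signing.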


 

Step Two:  Making use of the data


Now that we have ingested the service status data into Azure Sentinel, let’s do something with it.


 


First let’s write a simple KQL (Kusto Query Language) query to pull out the basic data we need:


Simple KQL Query.png
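The query itself lives only in the screenshot, so here is a hedged reconstruction in text form. The table name and the _s-suffixed columns depend on what you named the custom log and how Log Analytics typed the fields, so treat them as assumptions:

```kusto
// Assumed table/column names - adjust to your own custom log.
O365ServiceStatus_CL
| project TimeGenerated,
          Workload = WorkloadDisplayName_s,
          Status = StatusDisplayName_s
| summarize arg_max(TimeGenerated, Status) by Workload
```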


 


Now let’s create a scheduled query analytics rule that will create an incident when a service is degraded:


Incident 1.png


Incident 2.png


Incident 3.png
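As a sketch, the rule query behind these screenshots could look like the following, again assuming the table and column names from step one; the status strings come straight from the API payload, so verify them against your own ingested data before relying on the filter:

```kusto
// Assumed names and status strings - verify against your data.
O365ServiceStatus_CL
| where TimeGenerated > ago(4h)
| where StatusDisplayName_s != "Normal service"
| project TimeGenerated,
          Workload = WorkloadDisplayName_s,
          Status = StatusDisplayName_s
```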


 

 

One of the cool new features in Azure Sentinel that you will notice above is the preview of what this query will produce.  Based on the settings I have chosen, this will create 1 alert per day.  You don’t want to create an alert flood, but you do want to be notified appropriately.  So, change the Query scheduling to what makes sense for your organization. 


Incident 4.png


 

I’m just going to use the defaults for Incident Settings.


Incident 5.png


 

You can even use an Azure Logic App playbook to take some automated action based on the Incident.


Incident 6.png


 

Done!  Now we will see an incident generated if there is a service degradation in Office 365.  See below:


Incident Generated.png


 

For a production environment, I would probably want to be a little more detailed in my incident generation, getting down to individual services, but hopefully this has shown you the “Art of the Possible” and you can take it further.


 


Step Three:  Bonus Step!  Let’s create a workbook in Azure Sentinel to display some of the information we have gathered.


Workbooks (https://docs.microsoft.com/en-us/azure/sentinel/tutorial-monitor-your-data ) provide a way to visualize your data in a custom dashboard experience. 


 


Let’s see what we can come up with.  First we need to create a new workbook:


Create Workbook 1.png


 

This will get a workbook populated with some sample data to start with, let’s edit it:


Create Workbook 2.png


 

Let’s start off by just making a simple grid of the query we already built to show degraded services in the past 4 hours:


Create Workbook 3.png


 

That will get us a simple workbook like this (I also edited the title before I captured the screenshot):


Create Workbook 4.png


 


That’s not very exciting, so let’s add another Query section and try to build a graph:


Create Workbook 5.png


 


We are going to build a “honey comb” graph that will show which services are operational and which are degraded:


Create Workbook 6.png


 

Instead of creating multiple screenshots I have highlighted in green the items I changed.  Also I used a query that returns all service status, not just degraded. (see above)


 


When you click “Done Editing” you will get this visualization which can be zoomed into and out of, as well as moved around. Not perfect, but it only took a few minutes to build.  I’m sure you can come up with an even better one!


Final Workbook.png


 

Thank you for getting this far in my post… it went a little long 🙂.  I hope you found this useful and that you can use it to build something for your organization.  Please post comments or questions below.


Thanks, Tony!

Azure Data Explorer Online Event October 14 – Event Summary


This article is contributed. See the original author and article here.

View the Azure Data Explorer online event recording to hear all about the great new features, announcements, and collaborations for Azure Data Explorer – Azure’s fast, fully managed service for real-time analysis of telemetry big data streaming from apps, websites, IoT devices, and more.


 


Screen Shot 2020-10-02 at 11.54.55.jpeg


 


One of Azure’s most used services and the foundation of Microsoft’s telemetry platform, Azure Data Explorer combines broad data exploration and powerful analytical queries with lightning-fast interactivity.


 


Use Azure Data Explorer to:



  • Monitor mission-critical systems.

  • Analyze IoT data from thousands of devices.

  • Explore and identify trends and anomalies in your data.

  • Tune up customer experience.

  • And many more exciting capabilities!


Read Azure Data Explorer – Reimagine Telemetry Analytics  to learn about the latest groundbreaking innovations, new features, and exciting collaborations.


The event includes a keynote by Rohan Kumar, CVP, Azure Data, and fascinating content from the product group team members, who deliver sessions on various topics. See the full agenda below.


1.JPG


The announcements & Azure updates:


 



5.JPG


Agenda


  • Opening Session – Opening words, brief overview of the agenda and service. Speakers: Oded Sacher, Partner Group Manager; Uri Barash, Principal Group Program Manager. 15 min.

  • Reimagine Telemetry Analytics, with Rohan Kumar – Join us to hear from Rohan Kumar, Corporate Vice President of Azure Data, about the exciting developments with Azure Data Explorer, Microsoft’s telemetry analytics platform that is powering Microsoft’s internal and external business. Speaker: Rohan Kumar, CVP, Azure Data, joined by customers Daimler Trucks North America (Lutz Beck, CIO; Doug Murphy, Data Intelligence Manager; Sammi Li, Data Analyst) and Checkpoint (Itai Greenberg, VP Product & Product Marketing). 30 min.

  • What’s new with ADX – Updates on the latest and greatest in ADX ingestion, query, dashboards and more. Speakers: Gabi Lehner, Program Manager; Tzvia Gitlin Troyna, Program Manager. 30 min.

  • Powering Engineering Excellence With Azure Data Explorer – Taboola on Azure Data Explorer: “It’s magic, interactive & intuitive. My users are in love”. Speaker: Ariel Pisetzky, VP Information Technology & Cyber at Taboola. 15 min.

  • Start Fast and Accelerate! The next generation of the Kusto engine – Azure Data Explorer engine enhancements. Speakers: Evgeney Ryzhyk, Partner Software Engineer; Alexander Sloutsky, Principal Engineering Manager; Avner Aharoni, Principal Program Manager. 15 min.

  • AMD – Azure Data Explorer now supports AMD-based SKUs. Speaker: Vlad Rozanovich, CVP Datacenters and Cloud sales.

  • Customers’ Stories – 1. Bosch: Andreas Mauer, VP CTO Integrator Business; 2. Siemens Health: Thomas Zeiser, Product Owner; Philipp Guendisch, Operation Engineer; Henri Benoit, System Engineering Lead; Emilian Ertel, Operation Engineer; 3. Zoomd: Niv Sharoni, CTO; 4. BASF: Ringo Fodisch, Senior Automation Engineer.

  • Bühler – Speakers: Cedric Menzi, Solution Architect; Samuel Ochsner, Lead Software Developer IoT at Bühler Group. 15 min.



 


4.JPG


Azure Data Explorer


 

Remote Working – Azure WVD Important Announcements

This article is contributed. See the original author and article here.

Abstract

Azure WVD has played a tremendous role in enabling many organizations to allow their employees to work from home/remote locations.

Looking at the pace of WVD adoption across organizations, I would say “Year 2020 is the WVD year for Azure”.

 

With so many users on WVD on a regular basis, Microsoft received a lot of feedback to improve the WVD offering, and from the Ignite 2020 announcements I would certainly say the feedback is being addressed at a rapid pace.

 

Let us look at the important announcements with respect to Azure WVD and what problems they will solve.

Microsoft Endpoint Manager Integration

Long, long ago, in the year 2019, people used to get their own laptops: physical devices controlled through Endpoint Manager. Companies registered these physical laptops in Intune just like mobile devices.

 

Now, in the current year 2020, people are using the magical service of WVD, where each user has their own virtual machine. Companies were therefore demanding that, since a WVD machine is just like a physical laptop for their employees, it should also be controlled through Intune and Endpoint Manager. Earlier this was not available; now it has been announced.

 

We will be able to configure WVD using endpoint manager and manage it centrally.

WVD is the only platform that supports Windows 10 multi-session for remote working, enabling organizations to save a lot of cost. As per the announcement, the Windows 10 multi-session OS will also be allowed to register in Intune. So, this is an important update.

Attach MSIX App directly from Azure Portal.

MSIX is a packaging tool that enables you to repackage your existing desktop app into the MSIX format without any code change. Companies are still using many legacy desktop applications that don’t support modern features like touch. If you repackage and install them with MSIX, the touch feature will be supported. Plus, you can ship updates to the packaged application as part of your OS patches. IT management becomes easy, and the apps are deployed like store apps rather than desktop apps.

 

Now, in the WVD context, the general approach is to create a golden image per application, group of users, or department. If another set of applications needs to be exposed, you create a new golden image. MSIX can help you keep a single golden image for your WVD and attach apps to the respective host pools, so there is no need to create app-specific golden images.

 

This is a significant improvement and removes a lot of the headache involved in golden image preparation and management.

Disable Screen Capture

This was the number one ask from security teams. Screen capture is still a possible avenue of data leakage. With the ability to disable screen capture, this concern is also addressed.

Direct RDP to WVD Hosts

Azure WVD works on reverse-connect technology, and the connection to the WVD common URL, or control plane, goes over the internet. Customers were asking: if the WVD user is already on a trusted network, why use the reverse connection?

 

Going forward, as per the announcement, users will be able to connect to the WVD session host directly over RDP if they are already on a trusted network. This will significantly reduce the number of hops, ultimately improving the connectivity experience for WVD users.

Conclusion

The announcements above are impressive and will definitely increase the adoption of Azure WVD many times over.

Azure Lighthouse – Step by step guidance – Onboard customer to Lighthouse using sample template


This article is contributed. See the original author and article here.

This blog explains how a Service Provider can onboard a Customer to Azure Lighthouse using sample templates in the Azure portal.


Prerequisites:


Before we start, please read this document about what Azure Lighthouse is.


Azure Lighthouse enables cross-tenant and multi-tenant management, allowing for higher automation, scalability, and enhanced governance across resources and tenants.


Concepts:


Service Provider: the one who manages the delegated resources.


Customer: the owner of the delegated resources (subscription and/or resource group), which can be accessed and managed through the Service Provider’s Azure Active Directory tenant.


 


To onboard the Customer, we first need to gather the Service Provider’s Tenant ID and Principal ID.


 


Gather the Service Provider’s Tenant ID and Principal ID



  1. Tenant ID:


In the Azure portal, search for “Azure Active Directory”; you can find the Tenant ID in Overview.


You can also get the Tenant ID through Azure PowerShell or Azure CLI, either in a local PowerShell session (you need to log in first) or in Cloud Shell in the Azure portal.


lighthotenant_new.png


For example, in Azure PowerShell use the command “Select-AzSubscription <subscriptionId>”


lighthosepwoershell.png


 



  2. Principal ID:


This Principal ID should be the user or security AAD group that needs to manage the customer’s resources.


In the Azure portal you can search for “Azure AD roles” or click “Roles and administrators” in the first image (marked 3). Then find the role you want to onboard to Azure Lighthouse.


lighthouseroles.PNG


 


Select “Profile”; you can find the Object ID there. This is the Principal ID you need to keep.


lighthoseobject.png


Define roles and permission


As a service provider, you may want to perform multiple tasks for a single customer, requiring different access for different scopes. You can define as many authorizations as you need in order to assign the appropriate role-based access control (RBAC) built-in roles to users in your tenant.


You can get all the roles definition ID from role-based access control (RBAC) built-in roles.


https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles


If you know which role you want to assign, you can also use Azure PowerShell or Azure CLI to get the role definition ID.


For example, use the command “(Get-AzRoleDefinition -Name ‘<roleName>’).id” in Azure PowerShell. The example below shows the role definition ID for “Contributor”.


Contributor2.PNG


 


Note: Some roles are not supported for Azure Lighthouse (like the Owner role); please check the details here: https://docs.microsoft.com/en-us/azure/lighthouse/concepts/tenants-users-roles#role-support-for-azure-lighthouse


 


Onboard Customer delegation


After this preparation work, let’s start onboarding the Customer delegation.


You can select the template you want to deploy, for a subscription or a resource group, from the Azure Lighthouse samples.


Note: This deployment must be done by a non-guest account in the customer’s tenant who has the Owner built-in role for the subscription being onboarded (or which contains the resource groups that are being onboarded).


If the subscription was created through the Cloud Solution Provider (CSP) program, any user who has the Admin Agent role in your service provider tenant can perform the deployment.


Click the Deploy to Azure button; it goes directly to the Azure portal custom deployment page.


lighthousedeploy.png


Then select “Edit parameters”.


editprameter.png


Enter the Tenant ID, Principal ID, and role definitions found before, and click “Save”.


lgithousepricle.png
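For reference, the parameter file you edit in the Azure Lighthouse samples typically has roughly this shape; the offer name and display name below are placeholders, the IDs are the ones gathered earlier, and the roleDefinitionId shown is the built-in Contributor role GUID from the previous step:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "mspOfferName": { "value": "Contoso Managed Services" },
    "managedByTenantId": { "value": "<Service Provider tenant ID>" },
    "authorizations": {
      "value": [
        {
          "principalId": "<Object ID gathered above>",
          "principalIdDisplayName": "Contoso Ops Team",
          "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
        }
      ]
    }
  }
}
```

You can define as many entries in the authorizations array as you need, one per principal/role pair.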


The deployment may take several minutes to complete.


After the deployment succeeds, it may take about 15 minutes before we can see it in the portal.


In the Customer’s Azure portal, search for “Service providers” and click “Service provider offers”.


lighthouesserverporvider.png


In the Service Provider’s portal, search for “My customers” and select “Customers”.


lighthouecustomer.png


As I applied the “Contributor” role, you can find it on the directory and subscription on the Service Provider side.


lighthoudirect.png


What can we do with an Azure Lighthouse delegation?


After onboarding to Lighthouse successfully, you can use a Service Provider account to manage Customer resources without switching tenants.


If the Service Provider has the Contributor role, it can update, delete, and create resources in the Customer’s subscription.


The image below shows a storage account being created in the Customer’s resource group from the Service Provider side.


lighthousestoragepng.png


 


To conclude, Azure Lighthouse provides the benefit of managing Customers’ Azure resources securely, without having to switch contexts and control planes.


Reference: https://docs.microsoft.com/en-us/azure/lighthouse/how-to/onboard-customer