Security Controls in Azure Security Center: Apply adaptive application control

This article is contributed. See the original author and article here.

As part of our recent Azure Security Center (ASC) Blog Series, we are diving into the different controls within ASC’s Secure Score. In this post, we will be discussing the “Apply adaptive application control” security control.


 


This security control contains up to 7 recommendations, depending on the resources you have deployed within your environment, and it is worth a maximum of 1 point (2%) that counts towards your overall Secure Score. To understand Azure Security Center’s secure score, make sure you read this article. These recommendations are meant to keep your resources safe and improve your security hygiene.


 


Apply adaptive application control contains the following 7 recommendations, depending on your environment:



  • Log Analytics agent should be installed on your virtual machine 

  • Monitoring agent should be installed on your machines

  • Log Analytics agent should be installed on your Windows-based Azure Arc machines

  • Log Analytics agent should be installed on your Linux-based Azure Arc machines 

  • Log Analytics agent health issues should be resolved on your machines 

  • Adaptive application controls for defining safe applications should be enabled on your machines 

  • Allowlist rules in your adaptive application control policy should be updated 


The example screenshot below shows an environment in which only 6 of those recommendations are in scope for the Apply adaptive application control security control; recommendations that do not apply to any resource within your environment do not appear.


Image 1 – Recommendations within the Apply adaptive application control


Like the rest of the Secure Score controls, all of these recommendations must be addressed in order to get the full points and drive up your Secure Score (you can review all of the recommendations here). Some of them also offer a “Quick Fix!” button, which simplifies remediation and lets you quickly increase your secure score, improving your environment’s security, so there is no excuse not to act on those. To understand how Quick Fix works, please make sure to visit here.


 


Category #1: Log Analytics agent should be installed on your virtual machine


To monitor for security vulnerabilities and threats, Azure Security Center depends on the Log Analytics agent. The agent collects various security-related configuration details and event logs from connected machines, and then copies the data to your Log Analytics workspace for further analysis. Without the agent, Security Center cannot collect security data from the VM, and some security recommendations and alerts will be unavailable. Within 24 hours, Security Center will determine that the VM is missing the extension and recommend that you install it via this security control. You can install the agent manually with the help of this recommendation, or, if you have auto-provisioning turned on, Security Center installs the extension automatically whenever it identifies a missing agent, which in turn reduces management overhead. Refer to this article to understand the deployment options. Several questions arise at this point, such as how auto-provisioning works when an agent is already installed; to understand that, please read this information.
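If you prefer scripting the manual install, the sketch below assembles the Azure CLI call that deploys the Log Analytics (Microsoft Monitoring) agent extension to a Windows VM. Treat it as an illustrative sketch: the resource names, workspace ID, and key are placeholders, and Linux VMs use the OmsAgentForLinux extension instead.

```python
# Sketch: assemble the `az vm extension set` invocation that installs the
# Log Analytics agent on a Windows VM. All argument values below are
# placeholders -- substitute your own resource group, VM, and workspace.
def log_analytics_install_cmd(resource_group, vm_name, workspace_id, workspace_key):
    return [
        "az", "vm", "extension", "set",
        "--resource-group", resource_group,
        "--vm-name", vm_name,
        "--publisher", "Microsoft.EnterpriseCloud.Monitoring",
        "--name", "MicrosoftMonitoringAgent",
        "--settings", '{"workspaceId": "%s"}' % workspace_id,
        "--protected-settings", '{"workspaceKey": "%s"}' % workspace_key,
    ]

cmd = log_analytics_install_cmd("my-rg", "my-vm", "<workspace-id>", "<workspace-key>")
print(" ".join(cmd))
```

Running the assembled command requires an authenticated Azure CLI session (`az login`); with auto-provisioning enabled, none of this is necessary.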


The following recommendations belong to this category:



  • Monitoring agent should be installed on your machines.

  • Log Analytics agent should be installed on your Windows-based Azure Arc machines.

  • Log Analytics agent should be installed on your Linux-based Azure Arc machines.


Alternatively, to fix this recommendation, you can visit our Github Repository and leverage the automations we have published there.  


 


Category #2: Log Analytics agent health issues should be resolved on your machines


You’ll notice this recommendation when Azure Security Center finds the Log Analytics agent unhealthy, which means the VM is unmonitored by Security Center because it does not have a healthy Log Analytics agent extension. This can happen for several reasons; for example, the agent may be unable to connect to and register with Security Center because it has no access to the required network resources. Read more about this scenario here. To fully benefit from all of Security Center’s capabilities, the Log Analytics agent extension is required.


For more information about the reasons Security Center is unable to successfully monitor VMs and computers initialized for automatic provisioning, see Monitoring agent health issues.


 


NOTE: The above recommendations (Category #1 and #2) to install the agent and to resolve agent health issues are prerequisites. You might observe that these recommendations also show up in a different security control; if they were remediated there, they will not appear here in this security control.


 


Category #3: Adaptive application controls for defining safe applications should be enabled on your machines


Application allowlisting is not a new concept. One of the biggest challenges of dealing with an application allowlist is maintaining that list. The traditional approach of using AppLocker in Windows is a good solution, but it still carries the overhead of keeping up with the applications and making the initial baseline work properly for your needs.


 


Adaptive application controls is one of the advanced protection features you benefit from when you turn Azure Defender on; it falls under Cloud Workload Protection Platform (CWPP).


Adaptive application controls help harden your VMs against malware by making it easier to control which applications can run on your Azure VMs. Azure Defender has built-in intelligence that allows you to apply allowlist rules based on machine learning. This intelligence analyzes the processes running in your VMs, creates a baseline of applications, and groups the virtual machines. From there, recommendations are provided that allow you to automatically apply the appropriate allowlist rules. The use of machine learning makes it simple to configure and maintain the application allowlist.
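To make the baselining idea concrete, here is a small, purely illustrative Python sketch. The process inventories and the “seen on at least N machines” threshold are invented for the example; this is not Azure Defender’s actual algorithm:

```python
from collections import Counter

def build_baseline(observed, min_machines=2):
    """Distill per-machine process inventories into an allowlist baseline.

    observed: dict mapping machine name -> set of executable paths seen running.
    Executables seen on at least `min_machines` machines join the baseline.
    """
    counts = Counter()
    for procs in observed.values():
        counts.update(procs)
    return {exe for exe, n in counts.items() if n >= min_machines}

def audit(machine_procs, allowlist):
    """Return executables that would raise an audit-mode violation alert."""
    return machine_procs - allowlist

fleet = {
    "vm1": {r"C:\app\svc.exe", r"C:\tools\agent.exe"},
    "vm2": {r"C:\app\svc.exe", r"C:\tools\agent.exe"},
    "vm3": {r"C:\app\svc.exe", r"C:\temp\unknown.exe"},
}
baseline = build_baseline(fleet)
violations = audit(fleet["vm3"], baseline)
```

In this toy fleet, unknown.exe on vm3 falls outside the baseline and would surface as an audit-mode violation.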


 


With this feature, you’re able to alert on or audit applications that run outside the defined allowlist. These can even be malicious applications that might otherwise be missed by endpoint protection solutions, or applications with known vulnerabilities. By default, Azure Defender enables application control in Audit mode; no enforcement options are available at the time of writing.


 


Adaptive application controls do not support Windows machines for which an AppLocker policy is already enabled by either group policy objects (GPOs) or Local Security Policy.


Hope this helps you understand why it is super important for you to enable them. Learning about Adaptive Application Control is essential for anyone looking to gain more granular control and security within their environment, so make sure to read our documentation.


 


Category #4: Allowlist rules in your adaptive application control policy should be updated


This recommendation is displayed when Azure Defender’s machine learning identifies potentially legitimate behavior that hasn’t previously been allowed. It suggests adding new rules to the existing policy to reduce the number of false positives in adaptive application control violation alerts. To edit the application control policy, please refer to this article for more information.


 


Next Steps


As with all security controls, you need to make sure to remediate all recommendations within the control that apply to a particular resource in order to gain credit towards your secure score.


 


I hope you enjoyed reading this blog post as much as I enjoyed writing it, and that you learned how this specific control can assist you in strengthening your Azure security posture.



  • The main blog post to this series (found here)

  • The DOCs article about Secure Score (this one)


Reviewer


Special Thanks to @Yuri Diogenes, Principal Program Manager in the CxE ASC Team for reviewing this article.


 


 

Getting Started with DevOps for Azure SQL | Data Exposed


“Databases-as-Code” is an important principle in improving predictability in developing, delivering, and operating Azure SQL databases. In the first part of this two-part series with Arvind Shyamsundar, we quickly survey the different tools and methodologies available and then show you how to get started with GitHub Actions for a simple CI/CD pipeline deploying changes to an Azure SQL DB.
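As a flavor of what such a pipeline can look like, here is an illustrative GitHub Actions workflow fragment. The action version, input names, secret name, and dacpac path are assumptions to double-check against the azure/sql-action documentation, not a definitive recipe:

```yaml
# Hypothetical workflow: deploy a dacpac to Azure SQL on every push to main.
name: deploy-azure-sql
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy schema changes to Azure SQL
        uses: azure/sql-action@v1
        with:
          server-name: myserver.database.windows.net
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
          dacpac-package: ./db/mydb.dacpac
```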

Azure Unblogged – GitHub


Today, I am pleased to share with you a new episode of Azure Unblogged. I chat with Martin Woodward, Director of Developer Relations at GitHub. Martin and I discuss why GitHub is something that IT Pros and System Administrators should look at learning, the new features GitHub Actions and GitHub Codespaces and how they integrate with Azure, as well as the forthcoming GitHub Universe.


 


You can watch the full video here or on Microsoft Channel 9.


 

I hope you enjoyed the video. If you have any questions, feel free to leave a comment, and if you want to check out some of the resources Martin mentioned, please check out the links below:


Azure Sphere OS version 20.12 is now available for evaluation


The Azure Sphere OS version 20.12 is now available for evaluation in the Retail Eval feed. The retail evaluation period provides 14 days for backwards compatibility testing. During this time, please verify that your applications and devices operate properly with this release before it is deployed broadly via the Retail feed. The Retail feed will continue to deliver OS version 20.10 until we publish 20.12 in two weeks. For more information on retail evaluation see our blog post, The most important testing you’ll do: Azure Sphere Retail Evaluation.


 


Azure Sphere OS version 20.12


The 20.12 release includes the following bug fixes and enhancements in the Azure Sphere OS. It does not include an updated SDK. 



  • Reduced the maximum transmission unit (MTU) from 1500 bytes to 1420 bytes.

  • Improved device update in congested networks.

  • Fixed an issue wherein the Wi-Fi module stops scanning but does not respond with a completion event if a background scan is running and the active Wi-Fi network is deleted.

  • Fixed a bug wherein I2CMaster_Write() returns EBUSY when re-sideloading the app interrupts operation.


 


Azure Sphere SDK version 20.11


On Nov 30, we released version 20.11 of the Azure Sphere SDK. The 20.11 SDK introduces the first Beta release of the azsphere command line interface (CLI) v2. The CLI v2 Beta is installed alongside the existing CLI on both Windows and Linux, and it works with both the 20.10 and 20.12 versions of the OS. For the purpose of retail evaluation, continue to use the CLI v1. For more information on the v2 CLI and a complete list of additional features, see Azure Sphere CLI v2 Beta.


 


For more information on Azure Sphere OS feeds and setting up an evaluation device group, see Azure Sphere OS feeds. 


 


For self-help technical inquiries, please visit Microsoft Q&A or Stack Overflow. If you require technical support and have a support plan, please submit a support ticket in Microsoft Azure Support or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the Azure support plans.

Azure Service Fabric 7.2 Fourth Refresh Release


The Azure Service Fabric 7.2 fourth refresh release includes stability fixes for standalone and Azure environments and has started rolling out to the various Azure regions. The updates for the .NET SDK, Java SDK, and Service Fabric runtime will be available through Web Platform Installer, NuGet packages, and Maven repositories in 7-10 days within all regions.


 


You will be able to update to the 7.2 fourth refresh release through a manual upgrade on the Azure portal or via an Azure Resource Manager deployment. Due to customer feedback about releases around the holiday period, we will not begin automatically updating clusters set to receive automatic upgrades.


 



  • Service Fabric Runtime


    • Windows – 7.2.445.9590

    • Service Fabric for Windows Server Service Fabric Standalone Installer Package – 7.2.445.9590




  • .NET SDK


    • Windows .NET SDK –  4.2.445

    • Microsoft.ServiceFabric –  7.2.445

    • Reliable Services and Reliable Actors –  4.2.445

    • ASP.NET Core Service Fabric integration –  4.2.432


  • Java SDK –  1.0.6


 


Key Announcements



  • .NET 5 apps for Windows on Service Fabric are now supported as a preview. Look out for the GA announcement of .NET 5 apps for Windows on Service Fabric in the coming weeks.

  • .NET 5 apps for Linux on Service Fabric will be added in the Service Fabric 8.0 release (Spring 2021).

  • Windows Server 20H2 is now supported as of the 7.2 CU4 release.


For more details, please read the release notes.  

Deploying a LoRaWAN network server on Azure



 







There is something oddly fascinating about radio waves, radio communications, and the sheer amount of innovations they’ve enabled since the end of the 19th century.


What I find even more fascinating is that it is now very easy for anyone to get hands-on experience with radio technologies such as LPWAN (Low-Power Wide Area Network, a technology that allows connecting pieces of equipment over a low-power, long-range, secure radio network) in the context of building connected products.






 




It’s of no use whatsoever […] this is just an experiment that proves Maestro Maxwell was right—we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there.


— Heinrich Hertz, about the practical importance of his radio wave experiments

Nowadays, not only is there a wide variety of hardware developer kits, gateways, and radio modules to help you with the hardware/radio aspect of LPWAN radio communications, but there is also open-source software that allows you to build and operate your very own network. Read on as I will be giving you some insights into what it takes to set up a full-blown LoRaWAN network server in the cloud!

 


A quick refresher on LoRaWAN


 


LoRaWAN is a low-power wide-area network (LPWAN) technology that uses the LoRa radio protocol to allow long-range transmissions between IoT devices and the Internet. LoRa itself uses a form of chirp spread spectrum modulation which, combined with error correction techniques, allows for very high link budgets—in other terms: the ability to cover very long ranges!


Data sent by LoRaWAN end devices gets picked up by gateways nearby and is then routed to a so-called network server. The network server de-duplicates packets (several gateways may have “seen” and forwarded the same radio packet), performs security checks, and eventually routes the information to its actual destination, i.e. the application the devices are sending data to.
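The de-duplication step can be sketched in a few lines of Python. This is a simplified illustration, not The Things Stack’s actual implementation; the field names and the de-duplication key (device address plus frame counter plus payload) are assumptions:

```python
def deduplicate(uplinks):
    """Collapse copies of one radio packet forwarded by several gateways.

    Gateways that heard the same transmission forward identical frames, so we
    key on (device address, frame counter, payload) and keep the copy with the
    strongest received signal (highest RSSI).
    """
    best = {}
    for pkt in uplinks:
        key = (pkt["dev_addr"], pkt["fcnt"], pkt["payload"])
        if key not in best or pkt["rssi"] > best[key]["rssi"]:
            best[key] = pkt
    return list(best.values())

uplinks = [
    {"dev_addr": "26011F2A", "fcnt": 42, "payload": "a1b2", "gateway": "gw-1", "rssi": -112},
    {"dev_addr": "26011F2A", "fcnt": 42, "payload": "a1b2", "gateway": "gw-2", "rssi": -97},
    {"dev_addr": "26011F2A", "fcnt": 43, "payload": "c3d4", "gateway": "gw-1", "rssi": -110},
]
unique = deduplicate(uplinks)
```

Here the two copies of frame 42 collapse into one, and the copy heard by gw-2 wins on signal strength.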




 




LoRaWAN end nodes are usually pretty “dumb”, battery-powered, devices (ex. soil moisture sensor, parking occupancy, …), that have very limited knowledge of their radio environment. For example, a node may be in close proximity to a gateway, and yet transmit radio packets with much more transmission power than necessary, wasting precious battery energy in the process. Therefore, one of the duties of a LoRaWAN network server is to consolidate various metrics collected from the field gateways to optimize the network. If a gateway is telling the network server it is getting a really strong signal from a sensor, it might make sense to send a downlink packet to that device so that it can try using slightly less power for future transmissions.
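As a toy illustration of that optimization loop (deliberately not the real LoRaWAN ADR algorithm, which also adjusts the data rate and spreading factor), the sketch below steps a device’s transmission power down while a comfortable link margin remains:

```python
def suggest_tx_power(link_margin_db, current_power_dbm, step_db=3, floor_dbm=2):
    """Lower transmission power in steps while the link margin allows it.

    link_margin_db: how far above the required signal level the device's
    uplinks are arriving, as estimated from gateway metrics.
    """
    while link_margin_db >= step_db and current_power_dbm - step_db >= floor_dbm:
        current_power_dbm -= step_db
        link_margin_db -= step_db
    return current_power_dbm
```

With these invented numbers, a device transmitting at 14 dBm with a 10 dB margin would be stepped down to 5 dBm, while a device with only 1 dB of margin would be left alone.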


As LoRa uses unlicensed spectrum, and provided one follows local radio regulations, anyone can freely connect LoRa devices, or even operate their own network.


 


My private LoRaWAN server, why?


 


The LoRaWAN specification puts a really strong focus on security, and by no means do I want to make you think that rolling out your own networking infrastructure is mandatory to make your LoRaWAN solution secure. In fact, LoRaWAN has a pretty elegant way of securing communications, while keeping the protocol lightweight. There is a lot of literature on the topic that I encourage you to read but, in a nutshell, the protocol makes it almost impossible for malicious actors to impersonate your devices (messages are signed and protected against replay attacks) or access your data (your application data is seen by the network server as an opaque, ciphered, payload).


So why should you bother rolling out your own LoRaWAN network server anyway?


Coverage where you need it


 


In most cases, relying on a public network operator means being dependent on their coverage. While some operators might allow a hybrid model where you can attach your own gateways to their network, and hence extend the coverage right where you need it, oftentimes you don’t get to decide how well a particular geographical area will be covered by a given operator.


When rolling out your own network server, you end up managing your own fleet of gateways, bringing you more flexibility in terms of coverage, network redundancy, etc.


 


Data ownership


 


While operating your own server will not necessarily add a lot in terms of pure security (after all, your LoRaWAN packets are hanging in the open air a good chunk of their lifetime anyway!), being your own operator definitely brings you more flexibility to know and control what happens to your data once it’s reached the Internet.


 


What about the downsides?


 


It goes without saying that operating your network is no small feat, and you should obviously do your due diligence with regards to the potential challenges, risks, and costs associated with keeping your network up and running.


Anyway, it is now high time I tell you how you’d go about rolling out your own LoRaWAN network, right?


 


The Things Stack on Azure


 


The Things Stack is an open-source LoRaWAN network server that supports all versions of the LoRaWAN specification and operation modes. It is actively being maintained by The Things Industries and is the underlying core of their commercial offerings.


A typical, minimal deployment of The Things Stack network server relies on three pillars:



  • A Redis in-memory data store supporting the operation of the network;

  • An SQL database (PostgreSQL or CockroachDB are supported) storing information about the gateways, devices, and users of the network;

  • The actual stack, running the different services that power the web console, the network server itself, etc.


The deployment model recommended for someone interested in quickly testing out The Things Stack is to use their Docker Compose configuration. It fires up all the services mentioned above as Docker containers on the same machine. Pretty cool for testing, but not so much for a production environment: who is going to keep those Redis and PostgreSQL services available 24/7, properly backed up, etc.?
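For a flavor of what that Docker Compose setup wires together, here is a heavily simplified, illustrative fragment. Image tags, ports, and credentials are placeholders; refer to The Things Stack documentation for the official docker-compose.yml:

```yaml
# Illustrative sketch of the three pillars; not the official compose file.
version: "3.7"
services:
  redis:
    image: redis:6          # in-memory data store
  postgres:
    image: postgres:14      # SQL database for gateways, devices, and users
    environment:
      - POSTGRES_PASSWORD=<choose-a-password>
      - POSTGRES_DB=ttn_lorawan
  stack:
    image: thethingsnetwork/lorawan-stack:latest
    depends_on: [redis, postgres]
    ports:
      - "1885:1885"         # console / HTTP API
      - "1700:1700/udp"     # Semtech UDP packet forwarder endpoint
```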


I have put together a set of instructions and a deployment template that aim to show what a LoRaWAN server based on The Things Stack and running in Azure could look like.


 



 


The instructions in the GitHub repository linked below should be all you need to get your very own server up and running!


In fact, you only have a handful of parameters to tweak (what fancy nickname to give your server, credentials for the admin user, …) and the deployment template will do the rest!



OK, I deployed my network server in Azure, now what?


 


Just to enumerate a few, here are some of the things that having your own network server, running in your own Azure subscription, will enable. Some will sound oddly specific if you don’t have a lot of experience with LoRaWAN yet, but they are important nevertheless. You can:



  • benefit from managed Redis and PostgreSQL services, and not have to worry about potential security fixes that would need to be rolled out, or about performing regular backups, etc.;

  • control which LoRaWAN gateways can connect to your network server, as you can tweak your Network Security Group to only allow specific IPs to connect to the UDP packet forwarder endpoint of your network server;

  • completely isolate the internals of your network server from the public Internet (including the Application Server if you wish), putting you in a better position to control and secure your business data;

  • scale your infrastructure up or down as the size and complexity of the fleet that you are managing evolves;

  • … and there is probably so much more. I’m actually curious to hear in the comments below about other benefits (or downsides, for that matter) you’d see.


I started to put together an FAQ in the GitHub repository so, hopefully, your most obvious questions are already answered there. However, there is one that I thought was worth calling out in this post: How big of a fleet can I connect?


It turns out that even a reasonably small VM like the one used in the deployment template—2 vCPUs, 4GB of RAM—can already handle thousands of nodes, and hundreds of gateways. You may find this LoRaWAN traffic simulation tool that I wrote helpful in case you’d want to conduct your own stress testing experiments.


 


What’s next?


 


You should definitely expect more from me when it comes to other LoRaWAN related articles in the future. From leveraging DTDL for simplifying end application development and interoperability with other solutions, to integrating with Azure IoT services, there’s definitely a lot more to cover. Stay tuned, and please let me know in the comments of other related topics you’d like to see covered!