by Contributed | May 10, 2023 | Technology
Azure Virtual Machines are an excellent solution for hosting both new and legacy applications. However, as your services and workloads become more complex and demand increases, your costs may also rise. Azure provides a range of pricing models, services, and tools that can help you optimize the allocation of your cloud budget and get the most value for your money.
Let’s explore Azure’s various cost-optimization options to see how they can significantly reduce your Azure compute costs.
The major Azure cost optimization options can be grouped into three categories: VM services, pricing models and programs, and cost analysis tools.
Let's have a quick overview of these three categories:
VM services – Several VM services give you various options to save, depending on the nature of your workloads. These can include things like dynamically autoscaling VMs according to demand or utilizing spare Azure capacity at up to 90% discount versus pay-as-you-go rates.
Pricing models and programs – Azure also offers various pricing models and programs that you can take advantage of, depending on your needs and how you plan to spend on Azure. For example, committing to purchase compute capacity for a certain time period can lower your average costs per VM by up to 72%.
Cost analysis tools – This category covers the tools available for calculating, tracking, and monitoring your Azure spend. This deep insight into your spending allows you to make better decisions about where your compute costs are going and how to allocate them in a way that best suits your needs.
When it comes to VMs, the various VM services are probably the first place to start when looking to save costs. While this blog will focus mostly on VM services, stay tuned for blogs about pricing models & programs and cost analysis tools!
Spot Virtual Machines
Spot Virtual Machines provide compute capacity at drastically reduced costs by leveraging Azure capacity that isn't currently being used. While it's possible for your workloads to be evicted, this capacity is offered at discounts of up to 90% versus pay-as-you-go rates. This makes Spot Virtual Machines ideal for workloads that are interruptible and not time-sensitive, like machine learning model training, financial modeling, or CI/CD.
Incorporating Spot VMs can undoubtedly play a key role in your cost savings strategy. Azure provides significant pricing incentives to utilize any current spare capacity. The opportunity to leverage Spot VMs should be evaluated for every appropriate workload to maximize cost savings. Let’s learn more about how Spot Virtual Machines work and if they are right for you.
Deployment Scenarios
There are a variety of cases for which Spot VMs are ideal. Let's look at some examples:
- CI/CD – CI/CD is one of the easiest places to get started with Spot Virtual Machines. The temporary nature of many development and test environments makes them well suited for Spot VMs. A difference of a couple of minutes to a couple of hours when testing an application is often not business-critical. Thus, deploying CI/CD workloads and build environments with Spot VMs can drastically lower the cost of operating your CI/CD pipeline. Customer story
- Financial modeling – creating financial models is compute-intensive but often transient in nature. Researchers often struggle to test all the hypotheses they want with inflexible infrastructure. With Spot VMs, they can add extra compute resources during periods of high demand without having to commit to purchasing more dedicated VM resources, creating more and better models faster. Customer story
- Media rendering – media rendering jobs like video encoding and 3D modeling can require lots of computing resources but may not demand them consistently throughout the day. These workloads are also often computationally similar, independent of each other, and tolerant of delayed responses. These attributes make rendering another ideal case for Spot VMs. For rendering infrastructure that is often at capacity, Spot VMs are also a great way to add extra compute during periods of high demand without committing to more dedicated VM resources, lowering the overall TCO of running a render farm. Customer story
Generally speaking, if a workload is stateless, scalable, or flexible as to time, location, and hardware, it may be a good fit for Spot VMs. While Spot VMs can offer significant cost savings, they are not suitable for all workloads. Workloads that require high availability, consistent performance, or long-running tasks may not be a good fit.
Features & Considerations
Now that you have learned more about Spot VMs and may be considering using them for your workloads, let’s talk a bit more about how Spot VMs work and the controls available to you to optimize cost savings even further.
Spot VMs are priced according to demand. With this flexible pricing model, Spot VMs also give you the ability to set a price limit for the Spot VMs you use. If demand pushes the price of a Spot VM above what you're willing to pay, your workloads simply won't run at that time and will wait for demand to decrease. If you anticipate that the Spot VMs you want are in a region with high utilization at certain times of the day or month, you may want to choose another region, or plan for higher price limits for workloads that run during those busy periods. If when the workload runs isn't important, you may opt to set the price limit low, so that your workloads only run during periods when Spot capacity is cheapest, minimizing your Spot VM costs.
While using Spot VMs with price limits, we also need to look at the different eviction types and policies, which determine what happens to your Spot VMs when Azure reclaims the capacity for pay-as-you-go customers. To maximize cost savings, it's generally best to choose the Delete eviction policy: VMs can be redeployed faster, meaning less downtime waiting for Spot capacity, and you don't keep paying for disk storage. However, if your workload is region- or size-specific, and requires some level of persistent data in the event of an eviction, then the Deallocate policy is the better option.
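To make this concrete, here is a minimal sketch of creating a single Spot VM with the Azure CLI, setting both a price limit and an eviction policy. The resource group, VM name, and image are hypothetical placeholders, not values from this article:

# Create a Spot VM capped at $0.05 (USD) per hour; when Azure reclaims the
# capacity, the VM and its disks are deleted rather than deallocated.
az vm create \
  --resource-group my-spot-rg \
  --name my-spot-vm \
  --image Ubuntu2204 \
  --priority Spot \
  --max-price 0.05 \
  --eviction-policy Delete

Setting --max-price -1 instead caps the price at the standard pay-as-you-go rate, so the VM is never evicted for price reasons, only when Azure needs the capacity back.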
These are only a small slice of the considerations for getting the most out of Spot VMs. Learn more about best practices for building apps with Spot VMs here.
So how can we actually deploy and manage Spot VMs at scale? Using Virtual Machine Scale Sets is likely your best option. Virtual Machine Scale Sets, in addition to Spot VMs, offer a plethora of cost savings features and options for your VM deployments and easily allow you to deploy your Spot VMs in conjunction with standard VMs. In our next section, we’ll look at some of these features in Virtual Machine Scale Sets and how we can use them to deploy Spot VMs at scale.
Virtual Machine Scale Sets
Virtual Machine Scale Sets enable you to manage and deploy groups of VMs at scale with a variety of load balancing, resource autoscaling, and resiliency features. While a variety of these features can indirectly save costs like making deployments simpler to manage or easier to achieve high availability, some of these features contribute directly to reducing costs, namely autoscaling and Spot Mix. Let’s dive deeper into how these two features can optimize costs.
Autoscaling
Autoscaling is a critical feature included within Virtual Machine Scale Sets that gives you the ability to dynamically increase or decrease the number of virtual machines running within the scale set. This allows you to scale out your infrastructure to meet demand when required, and scale it in when compute demand lowers, reducing the likelihood that you'll be paying for extra VMs running when you don't have to.
VMs can be autoscaled according to rules that you define yourself from a variety of metrics. These rules can be based on host metrics available from your VMs, like CPU usage or memory demand, or on application-level metrics, like session counts and page load performance. This flexibility gives you the option to scale your workload in or out against very specific requirements, and it is with this specificity that you can control your infrastructure scaling to optimally meet your compute demand without extra overhead.
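For example, here is a minimal sketch of CPU-based autoscale rules using the Azure CLI; the resource group and scale set names are hypothetical placeholders:

# Attach an autoscale setting to the scale set with a floor of 2 and a cap of 10 instances.
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-scale-set \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name cpu-autoscale \
  --min-count 2 --max-count 10 --count 2

# Scale out by 3 instances when average CPU exceeds 70% over 10 minutes...
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name cpu-autoscale \
  --condition "Percentage CPU > 70 avg 10m" \
  --scale out 3

# ...and scale back in by 1 instance when average CPU drops below 30%.
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name cpu-autoscale \
  --condition "Percentage CPU < 30 avg 10m" \
  --scale in 1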
You can also scale in or out according to a schedule, for cases in which you can anticipate cyclical changes to VM demand at certain times of the day, month, or year. For example, you can automatically scale out your workload at the beginning of the workday when application usage increases, and then scale in the number of VM instances overnight when application usage lowers, minimizing resource costs. It's also possible to scale out on specific days when events occur, such as a holiday sale or marketing launch. Additionally, for more complex workloads, Virtual Machine Scale Sets also provide the option to leverage machine learning to predictively autoscale workloads according to historical CPU usage patterns.
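As a sketch of schedule-based scaling (again with placeholder names), a recurring profile can be attached to the same autoscale setting created above:

# Run at least 4 instances on weekdays between 09:00 and 18:00; outside this
# window, the default profile applies.
az monitor autoscale profile create \
  --resource-group my-rg \
  --autoscale-name cpu-autoscale \
  --name business-hours \
  --min-count 4 --max-count 10 --count 4 \
  --timezone "W. Europe Standard Time" \
  --start 09:00 --end 18:00 \
  --recurrence week mon tue wed thu fri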
These autoscaling policies make it easy to adapt your infrastructure usage to many variables, and leveraging autoscale rules to best fit your application demand will be critical to reducing cost.
Spot Mix
With Spot Mix in Virtual Machine Scale Sets, you can configure your scale-in or scale-out policy to specify a ratio of standard to Spot VMs to maintain as the number of VMs increases or decreases. Say you specify a ratio of 50%: for every 10 new VMs the scale-out policy adds to the scale set, 5 will be standard VMs and the other 5 will be Spot. To maximize cost savings, you may want a low ratio of standard to Spot VMs, meaning more Spot VMs are deployed instead of standard VMs as the scale set grows. This can work well for workloads that don't need much guaranteed capacity at larger scales. However, for workloads that need greater resiliency at scale, you may want to increase the ratio to ensure an adequate baseline of standard capacity.
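Here is a rough sketch of what this looks like with the Azure CLI. The names are hypothetical, and this assumes a scale set using Flexible orchestration, which the Spot mix settings require:

# Keep 2 standard VMs as a guaranteed base, then split additional instances
# 50/50 between standard and Spot as the scale set grows.
az vmss create \
  --resource-group my-rg \
  --name my-mixed-scale-set \
  --image Ubuntu2204 \
  --orchestration-mode Flexible \
  --instance-count 4 \
  --priority Spot \
  --eviction-policy Delete \
  --regular-priority-count 2 \
  --regular-priority-percentage 50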
You can learn more about choosing which VM families and sizes might be right for you with the VM selector and the Spot Advisor, which we will cover in more depth in a later blog of this VM cost optimization series.
Wrapping up
We've learned how Spot VMs and Virtual Machine Scale Sets, especially when combined, equip you with various features and options to control how your VMs behave, and how you can use those controls to maximize your cost savings.
Next time, we'll go in depth on the various pricing models and programs available in Azure that can optimize your costs even further, allowing you to do more with less with Azure VMs. Stay tuned for more blogs!
by Contributed | May 8, 2023 | Technology
This blog post has been co-authored by Microsoft and Dhiraj Sehgal, Reza Ramezanpur from Tigera.
Container orchestration pushes the boundaries of containerized applications by preparing the necessary foundation to run containers at scale. Today, customers can run Linux and Windows containerized applications in a container orchestration solution, such as Azure Kubernetes Service (AKS).
This blog post will examine how to set up a Windows-based Kubernetes environment to run Windows workloads and secure them using Calico Open Source. By the end of this post, you will see how simple it is to apply your current Kubernetes skills and knowledge to run a hybrid environment.
Container orchestration at scale with AKS
After creating a container image, you will need a container orchestrator to deploy it at scale. Kubernetes is modular container orchestration software that manages the mundane parts of running such workloads, and AKS abstracts the infrastructure on which Kubernetes runs, so you can focus on deploying and running your workloads.
In this blog post, we will share all the commands required to set up a mixed Kubernetes cluster (Windows and Linux nodes) in AKS – you can open up your Azure Cloud Shell window from the Azure Portal and run the commands if you want to follow along.
If you don’t have an Azure account with a paid subscription, don’t worry—you can sign up for a free Azure account to complete the following steps.
Resource group
To run a Kubernetes cluster in Azure, you must create multiple resources that share the same lifespan and assign them to a resource group. A resource group is a way to group related resources in Azure for easier management and accessibility. Keep in mind that each resource group must have a unique name.
The following command creates a resource group named calico-win-container in the australiaeast location. Feel free to adjust the location to a different zone.
az group create --name calico-win-container --location australiaeast
Cluster deployment
Note: Azure free accounts cannot create any resources in busy locations. Feel free to adjust your location if you face this problem.
A Linux control plane is necessary to run the Kubernetes system workloads, and Windows nodes can only join a cluster as participating worker nodes.
az aks create --resource-group calico-win-container --name CalicoAKSCluster --node-count 1 --node-vm-size Standard_B2s --network-plugin azure --network-policy calico --generate-ssh-keys --windows-admin-username <username>
Note: Replace <username> with an administrator username of your choice. Since no --windows-admin-password is supplied, the CLI will prompt you to set one.
Windows node pool
Now that we have a running control plane, it is time to add a Windows node pool to our AKS cluster.
Note: Use `Windows` as the value for the `--os-type` argument.
az aks nodepool add --resource-group calico-win-container --cluster-name CalicoAKSCluster --os-type Windows --name calico --node-vm-size Standard_B2s --node-count 1
Calico for Windows
Calico for Windows is officially integrated into the Azure platform. Every time you add a Windows node in AKS, it will come with a preinstalled version of Calico. To check this, use the following command to ensure EnableAKSWindowsCalico is in a Registered state:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"
Expected output:
Name                                                State
-------------------------------------------------   ----------
Microsoft.ContainerService/EnableAKSWindowsCalico   Registered
If your query returns a Not Registered state or no items, use the following command to enable AKS Calico integration for your account:
az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"
After EnableAKSWindowsCalico becomes registered, you can use the following command to add the Calico integration to your subscription:
az provider register --namespace Microsoft.ContainerService
Exporting the cluster key
Kubernetes implements an API Server that provides a REST interface to maintain and manage cluster resources. Usually, to authenticate with the API server, you must present a certificate, username, and password. The Azure command-line interface (Azure CLI) can export these cluster credentials for an AKS deployment.
Use the following command to export the credentials:
az aks get-credentials --resource-group calico-win-container --name CalicoAKSCluster
After exporting the credential file, we can use the kubectl binary to manage and maintain cluster resources. For example, we can check which operating system is running on our nodes by using the OS labels.
kubectl get nodes -L kubernetes.io/os
You should see a similar result to:
NAME                                STATUS   ROLES   AGE     VERSION   OS
aks-nodepool1-64517604-vmss000000   Ready    agent   6h8m    v1.22.6   linux
akscalico000000                     Ready    agent   5h57m   v1.22.6   windows
Windows workloads
If you recall, the Kubernetes API Server is the interface we can use to manage or maintain our workloads.
We can use the same syntax to create a deployment, pod, service, or any other Kubernetes resource for our new Windows nodes. For example, we can use the same OS selector we saw in the node labels above to ensure Windows and Linux workloads are deployed to their respective nodes:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/00_deployment.yaml
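For reference, the key part of such a manifest is a node selector that pins the pods to Windows nodes. The following is an illustrative sketch, not the exact contents of the linked manifest; the names and image are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-web-demo
  namespace: win-web-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-web-demo
  template:
    metadata:
      labels:
        app: win-web-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule only onto Windows nodes
      containers:
      - name: web
        image: mcr.microsoft.com/dotnet/samples:aspnetapp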
Since our workload is a web server created by Microsoft’s .NET technology, the deployment YAML file also packages a service load balancer to expose the HTTP port to the Internet.
Use the following command to verify that the load balancer successfully acquired an external IP address:
kubectl get svc win-container-service -n win-web-demo
You should see a similar result:
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
win-container-service   LoadBalancer   10.0.203.176   20.200.73.50   80:32442/TCP   141m
Open the “EXTERNAL-IP” value in a browser, and you should see the demo application’s page load.
Perfect! Our pod can communicate with the Internet.
Securing Windows workloads with Calico
The default security behavior of the Kubernetes NetworkPolicy resource permits all traffic. While this is a great way to set up a lab environment, in a real-world scenario it can severely impact your cluster's security.
First, use the following manifest to enable the Calico API server:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/01_apiserver.yaml
Use the following command to get the API Server deployment status:
kubectl get tigerastatus
You should see a similar result to:
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      10h
calico      True        False         False      10h
Calico offers two security policy resources that can cover every corner of your cluster: the namespaced NetworkPolicy and the cluster-wide GlobalNetworkPolicy. We will implement a global policy, since it can restrict Internet addresses without the daunting procedure of explicitly writing every IP/CIDR into a per-namespace policy.
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/02_default-deny.yaml
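For reference, a Calico default-deny global policy generally looks like the sketch below; this is illustrative and not necessarily the exact contents of the linked manifest. It denies all ingress and egress for workloads outside the system namespaces:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  # Leave Kubernetes and Calico system namespaces alone so the cluster keeps working.
  selector: projectcalico.org/namespace not in {"kube-system", "calico-system", "calico-apiserver"}
  types:
  - Ingress
  - Egress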
If you go back to your browser and click the Try again button, you will see that the container is isolated and cannot initiate communication to the Internet.
Note: The source code for the workload is available here.
Clean up
If you have been following this blog post and did the lab section in Azure, please make sure that you delete the resources, as cloud providers will charge you based on usage.
Use the following command to delete the resource group:
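az group delete --name calico-win-container
This deletes the resource group along with the cluster and everything else created inside it; add --yes to skip the confirmation prompt.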
Conclusion
While network policy may matter little in lab scenarios, production workloads have a different level of security requirements to meet. Calico offers a simple and integrated way to apply network policies to Windows workloads on Azure Kubernetes Service. In this blog post, we covered the basics of applying a network policy to a simple web server. You can find more information on how Calico works with Windows on AKS on our documentation page.
by Contributed | May 6, 2023 | Technology
We are pleased to announce the security review for Microsoft Edge, version 113!
We have reviewed the new settings in Microsoft Edge version 113 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 112 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit.
Microsoft Edge version 113 introduced 3 new computer settings and 3 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.
As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.
Please continue to give us feedback through the Security Baselines Discussion site or this post.
by Contributed | May 6, 2023 | Technology
DAX is now your Friend
Learning and understanding DAX in Power BI can come with some challenges, especially for beginners. What if you could write DAX in just natural language? Isn't that awesome?
Yes, DAX is now your friend.
Let's analyze the magic at work:
1. Write what you want to achieve in natural language, and the AI automatically generates the DAX formula to achieve it (a sketch of the kind of measure this produces follows the list below).
2. Notice that I intentionally misspelled "Total" as "Toal", yet it still understood what I was trying to do.
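As an illustration, a prompt like "toal sales" might yield a measure along these lines; the table and column names here are hypothetical, not taken from the demo:

-- Natural-language prompt: "toal sales" (the typo is still understood)
Total Sales = SUM ( Sales[SalesAmount] )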
Now imagine what you will learn going through this live session with us.
This session focuses on helping you improve your DAX knowledge and skills.
We will do this by working on a full Power BI report project, using the new AI capabilities in DAX to get it done.
About the Session
Are you ready to witness the latest and greatest capabilities of Power BI’s DAX language, now infused with Artificial Intelligence? In this session, we will take you through an exhilarating journey of building a complete Power BI report project, utilizing the powerful DAX language and its new AI capabilities.
Our expert presenters will showcase how to leverage the DAX suggestions feature to optimize your data model and make your report building process faster and more efficient. You will learn how to use DAX to create custom calculations and measure your data, while also harnessing the power of AI to enhance the accuracy and intelligence of your reports.
Throughout the session, you will get an inside look at how DAX suggestions can simplify and streamline your data analysis process, allowing you to focus on creating valuable insights and visualizations for your audience.
Whether you are a seasoned Power BI user or just starting out, this live session will provide you with valuable insights and practical tips to help you master the art of building end-to-end Power BI projects with DAX suggestions.
Join us for an exciting and informative Power BI live session that is sure to leave you inspired and equipped with the latest tools and techniques to take your data analysis and reporting to the next level.
Register
Event Date: May 18th, 2023
Time: 2PM (GMT+1)
To register, kindly click the link: https://aka.ms/PowerBIDAXSuggestion
Additional Resources
Start Learning About DAX Suggestion Here
by Contributed | May 5, 2023 | Technology
Azure Logic Apps recently announced the public preview release of the new Data Mapper extension. If you haven’t had a chance to learn about this exciting new tool, check out our announcement. Already had the opportunity to test out Data Mapper? Consider meeting with the team as we are looking for feedback on this new extension.
Call for Feedback
We want to hear from you about your experiences so far with the new Data Mapper extension. Your time and thoughts are appreciated and important to us in ensuring the best future for our product. We're focused on hearing from developers and including your thoughts in our next release.
If you are interested in providing your insight to our team, please fill out this form so we can schedule a time to meet with you.