Cost Optimization Considerations for Azure VMs – Part 1: VM services

This article is contributed. See the original author and article here.

Azure Virtual Machines are an excellent solution for hosting both new and legacy applications. However, as your services and workloads become more complex and demand increases, your costs may also rise. Azure provides a range of pricing models, services, and tools that can help you optimize the allocation of your cloud budget and get the most value for your money.


 


Let’s explore Azure’s various cost-optimization options to see how they can significantly reduce your Azure compute costs.


The major Azure cost optimization options can be grouped into three categories: VM services, pricing models and programs, and cost analysis tools. 


 


Let’s have a quick overview of these 3 categories:


 


VM services – Several VM services give you various options to save, depending on the nature of your workloads.  These can include things like dynamically autoscaling VMs according to demand or utilizing spare Azure capacity at up to 90% discount versus pay-as-you-go rates.


 


Pricing models and programs – Azure also offers various pricing models and programs that you can take advantage of, depending on your needs and how you plan to spend your Azure budget.  For example, committing to purchase compute capacity for a certain time period can lower your average cost per VM by up to 72%.


 


Cost analysis tools – This category covers the various tools available for you to calculate, track, and monitor your Azure spend.  This deep insight into your spending allows you to make better decisions about where your compute budget goes and how to allocate it in a way that best suits your needs.


 


When it comes to VMs, the various VM services are probably the first place to start when looking to save costs.  While this blog will focus mostly on VM services, stay tuned for blogs about pricing models & programs and cost analysis tools!


 


Spot Virtual Machines


 


Spot Virtual Machines provide compute capacity at drastically reduced costs by leveraging Azure capacity that isn’t currently being used.  While it’s possible for your workloads to be evicted, this capacity is discounted by up to 90% compared to pay-as-you-go prices.  This makes Spot Virtual Machines ideal for workloads that are interruptible and not time-sensitive, like machine learning model training, financial modeling, or CI/CD.


 


Incorporating Spot VMs can undoubtedly play a key role in your cost savings strategy. Azure provides significant pricing incentives to utilize any current spare capacity.  The opportunity to leverage Spot VMs should be evaluated for every appropriate workload to maximize cost savings.  Let’s learn more about how Spot Virtual Machines work and if they are right for you.


 


Deployment Scenarios


There are a variety of cases for which Spot VMs are ideal. Let’s look at some examples:


 



  • CI/CD – CI/CD is one of the easiest places to get started with Spot Virtual Machines. The temporary nature of many development and test environments makes them well suited for Spot VMs.  A delay of a couple of minutes to a couple of hours when testing an application is often not business-critical.  Thus, deploying CI/CD workloads and build environments with Spot VMs can drastically lower the cost of operating your CI/CD pipeline. Customer story

  • Financial modeling – Creating financial models is compute-intensive, but often transient in nature.  Researchers often struggle to test all the hypotheses they want on inflexible infrastructure.  With Spot VMs, they can add extra compute resources during periods of high demand without having to commit to purchasing more dedicated VM resources, creating more and better models faster. Customer story

  • Media rendering – Media rendering jobs like video encoding and 3D modeling can require lots of computing resources, but may not demand them consistently throughout the day.  These workloads are also often computationally similar, not dependent on each other, and not in need of immediate responses.  These attributes make media rendering another ideal case for Spot VMs. For rendering infrastructure that is often at capacity, Spot VMs are also a great way to add extra compute resources during periods of high demand without committing to more dedicated VM resources, lowering the overall TCO of running a render farm. Customer story


 


Generally speaking, if a workload is stateless, scalable, or flexible in time, location, and hardware, it may be a good fit for Spot VMs.  While Spot VMs can offer significant cost savings, they are not suitable for all workloads. Workloads that require high availability, consistent performance, or long-running tasks may not be a good fit for Spot VMs. 


 


Features & Considerations


Now that you have learned more about Spot VMs and may be considering using them for your workloads, let’s talk a bit more about how Spot VMs work and the controls available to you to optimize cost savings even further.


 


Spot VMs are priced according to demand.  With this flexible pricing model, Spot VMs also give you the ability to set a price limit for the Spot VMs you use.  If demand is high enough that the price of a Spot VM exceeds what you’re willing to pay, this limit lets you opt not to run your workloads at that time and wait for demand to decrease.  If you anticipate that the Spot VMs you want are in a region with high utilization at certain times of the day or month, you may want to choose another region, or plan for higher price limits for workloads that run during those high-demand windows.  If the time when the workload runs isn’t important, you may opt to set the price limit low, so that your workloads only run during periods when Spot capacity is cheapest, minimizing your Spot VM costs.


 


While using Spot VMs with price limits, we also need to look at the different eviction types and policies, which are options you can set to determine what happens to your Spot VMs when their capacity is reclaimed for pay-as-you-go customers.  To maximize cost savings, it’s generally best to prioritize the Delete eviction policy: VMs can be redeployed faster, meaning less downtime waiting for Spot capacity, and you avoid paying for disk storage in the meantime.  However, if your workload is region- or size-specific and requires some level of persistent data in the event of an eviction, then the Deallocate policy will be a better option. 
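As a minimal sketch of how these options come together with the Azure CLI (the resource group, VM name, and image alias are placeholder values, and the price cap of $0.05 per hour is purely illustrative), a Spot VM with a price limit and an eviction policy could be created like this:

az vm create --resource-group my-resource-group --name my-spot-vm --image Ubuntu2204 --priority Spot --max-price 0.05 --eviction-policy Deallocate

Setting --max-price to -1 instead caps the price at the pay-as-you-go rate, so the VM is only evicted for capacity reasons rather than for price.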


 


These are only a small slice of the considerations for getting the most out of Spot VMs.  Learn more about best practices for building apps with Spot VMs here.


 


So how can we actually deploy and manage Spot VMs at scale? Using Virtual Machine Scale Sets is likely your best option. Virtual Machine Scale Sets, in addition to supporting Spot VMs, offer a plethora of cost-saving features and options for your VM deployments and make it easy to run Spot VMs alongside standard VMs.  In the next section, we’ll look at some of these features in Virtual Machine Scale Sets and how we can use them to deploy Spot VMs at scale.


 


Virtual Machine Scale Sets


 


Virtual Machine Scale Sets enable you to manage and deploy groups of VMs at scale with a variety of load balancing, resource autoscaling, and resiliency features.  While a variety of these features can indirectly save costs like making deployments simpler to manage or easier to achieve high availability, some of these features contribute directly to reducing costs, namely autoscaling and Spot Mix.  Let’s dive deeper into how these two features can optimize costs.


 


Autoscaling


Autoscaling is a critical feature included within Virtual Machine Scale Sets that gives you the ability to dynamically increase or decrease the number of virtual machines running within the scale set. This allows you to scale out your infrastructure to meet demand when it is required, and scale it in when compute demand lowers, reducing the likelihood that you’ll be paying for extra VMs running when you don’t have to.


 


VMs can be autoscaled according to rules that you define yourself from a variety of metrics.  These rules can be based on host metrics available from your VMs, like CPU usage or memory demand, or on application-level metrics, like session counts and page load performance.  This flexibility gives you the option to scale your workload in or out against very specific requirements, and it is with this specificity that you can control your infrastructure scaling to optimally meet your compute demand without extra overhead.


You can also scale in or out according to a schedule, for cases in which you can anticipate cyclical changes in VM demand at certain times of the day, month, or year.  For example, you can automatically scale out your workload at the beginning of the workday when application usage increases, and then scale in the number of VM instances to minimize resource costs overnight when application usage lowers.  It’s also possible to scale out on certain days when events occur, such as a holiday sale or marketing launch.  Additionally, for more complex workloads, Virtual Machine Scale Sets also provide the option to leverage machine learning to predictively autoscale workloads according to historical CPU usage patterns. 
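As an illustrative sketch (the resource group, scale set, and autoscale setting names below are placeholders), metric-based rules like the CPU examples above can be created with the Azure CLI:

az monitor autoscale create --resource-group my-resource-group --resource my-scale-set --resource-type Microsoft.Compute/virtualMachineScaleSets --name cpu-autoscale --min-count 2 --max-count 10 --count 2

az monitor autoscale rule create --resource-group my-resource-group --autoscale-name cpu-autoscale --condition "Percentage CPU > 70 avg 5m" --scale out 3

az monitor autoscale rule create --resource-group my-resource-group --autoscale-name cpu-autoscale --condition "Percentage CPU < 30 avg 5m" --scale in 1

The first command defines the instance floor, ceiling, and default count; the two rule commands scale out by three instances when average CPU exceeds 70% over five minutes and scale back in by one instance when it drops below 30%.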


 


These autoscaling policies make it easy to adapt your infrastructure usage to many variables, and leveraging autoscale rules to best fit your application demand is critical to reducing cost.


 


Spot Mix


With Spot Mix in Virtual Machine Scale Sets, you can configure your scale-in or scale-out policy to specify a ratio of standard to Spot VMs to maintain as the number of VMs increases or decreases.  Say you specify a ratio of 50%: for every 10 new VMs the scale-out policy adds to the scale set, 5 of the machines will be standard VMs, while the other 5 will be Spot.  To maximize cost savings, you may want a low ratio of standard to Spot VMs, meaning more Spot VMs will be deployed instead of standard VMs as the scale set grows.  This can work well for workloads that don’t need much guaranteed capacity at larger scales.  However, for workloads that need greater resiliency at scale, you may want to increase the ratio to ensure adequate baseline standard capacity.
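As a sketch of how this might be configured with the Azure CLI (resource names and the VM size are placeholders, and the --regular-priority-count and --regular-priority-percentage parameters reflect my understanding of how Spot Mix, also called Spot Priority Mix, is exposed in az vmss create, so verify them against the current CLI documentation), the following command creates a Flexible-orchestration scale set that keeps two standard instances as a baseline and then splits additional instances 50/50 between standard and Spot:

az vmss create --resource-group my-resource-group --name my-mixed-scale-set --image Ubuntu2204 --vm-sku Standard_D2s_v3 --instance-count 10 --orchestration-mode Flexible --priority Spot --eviction-policy Delete --regular-priority-count 2 --regular-priority-percentage 50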


 


You can learn more about choosing which VM families and sizes might be right for you with the VM selector and the Spot Advisor, which we will cover more in depth in a later blog of this VM cost optimization blog series. 


 


Wrapping up


 


We’ve learned how Spot VMs and Virtual Machine Scale Sets, especially when combined, equip you with various features and options to control how your VMs behave, and how you can use those controls to maximize your cost savings. 


Next time, we’ll go in depth on the various pricing models and programs available in Azure that can optimize your costs even further, allowing you to do more with less with Azure VMs.  Stay tuned for more blogs!

Lesson Learned #347: String or binary data would be truncated applying batch file in DataSync.

This article is contributed. See the original author and article here.

Today, we got a service request from a customer who uses DataSync to transfer data from on-premises to Azure SQL Database and received the following error message: Sync failed with the exception ‘An unexpected error occurred when applying batch file sync_aaaabbbbcccddddaaaaa-bbbb-dddd-cccc-8825f4397b31.batch.


 



  • See the inner exception for more details. Inner exception: Failed to execute the command ‘BulkUpdateCommand‘ for table ‘dbo.Table1’; the transaction was rolled back.

  • Ensure that the command syntax is correct. Inner exception: SqlException Error Code: -2146232060 – SqlError Number:2629, Message: String or binary data would be truncated in object ID ‘-nnnnn’. Truncated value: ”.

  • SqlError Number:8061, Message: The data for table-valued parameter ‘@changeTable’ doesn’t conform to the table type of the parameter. SQL Server error is: 2629, state: 1 SqlError Number:3621, Message: The statement has been terminated. 


 


We reviewed the object ID exposed in the error and found that the data type of a column belonging to Table1 on the on-premises side had been changed from NCHAR(100) to NVARCHAR(255). Once the sync started again, it was no longer possible to update the data in the DataSync subscribers. 



In this case, our recommendation was: 
     1. Remove the affected table from the sync group.
     2. Trigger a sync.
     3. Re-add the affected table to the sync group.
     4. Trigger a sync.
     5. The sync in step 2 removes the metadata for the affected table, and the sync in step 4 re-adds it correctly.


 


Regards, 

Introducing the Microsoft 365 Copilot Early Access Program and new capabilities in Copilot 


This article is contributed. See the original author and article here.

In March, we introduced Microsoft 365 Copilot—your copilot for work. Today, we’re announcing that we’re bringing Microsoft 365 Copilot to more customers with an expanded preview and new capabilities.


Securing Windows workloads on Azure Kubernetes Service with Calico


This article is contributed. See the original author and article here.

This blog post has been co-authored by Microsoft and Dhiraj Sehgal, Reza Ramezanpur from Tigera.


 


Container orchestration pushes the boundaries of containerized applications by preparing the necessary foundation to run containers at scale. Today, customers can run Linux and Windows containerized applications in a container orchestration solution, such as Azure Kubernetes Service (AKS).


 


This blog post will examine how to set up a Windows-based Kubernetes environment to run Windows workloads and secure them using Calico Open Source. By the end of this post, you will see how simple it is to apply your current Kubernetes skills and knowledge to rule a hybrid environment.


 


Container orchestration at scale with AKS


After creating a container image, you will need a container orchestrator to deploy it at scale. Kubernetes is a modular container orchestration software that will manage the mundane parts of running such workloads, and AKS abstracts the infrastructure on which Kubernetes runs, so you can focus on deploying and running your workloads.


 


In this blog post, we will share all the commands required to set up a mixed Kubernetes cluster (Windows and Linux nodes) in AKS – you can open up your Azure Cloud Shell window from the Azure Portal and run the commands if you want to follow along.


 


If you don’t have an Azure account with a paid subscription, don’t worry—you can sign up for a free Azure account to complete the following steps.


 


Resource group


To run a Kubernetes cluster in Azure, you must create multiple resources that share the same lifespan and assign them to a resource group. A resource group is a way to group related resources in Azure for easier management and accessibility. Keep in mind that each resource group must have a unique name.


 


The following command creates a resource group named calico-win-container in the australiaeast location. Feel free to adjust the location to a different zone.


 

az group create --name calico-win-container --location australiaeast

 


 


Cluster deployment


Note: Azure free accounts cannot create any resources in busy locations. Feel free to adjust your location if you face this problem.


 


A Linux control plane is necessary to run the Kubernetes system workloads, and Windows nodes can only join a cluster as participating worker nodes.


 

az aks create --resource-group calico-win-container --name CalicoAKSCluster --node-count 1 --node-vm-size Standard_B2s --network-plugin azure --network-policy calico --generate-ssh-keys --windows-admin-username 

 


 


Windows node pool


Now that we have a running control plane, it is time to add a Windows node pool to our AKS cluster.


 


Note: Use `windows` as the value for the `--os-type` argument.


 

az aks nodepool add --resource-group calico-win-container --cluster-name CalicoAKSCluster --os-type Windows --name calico --node-vm-size Standard_B2s --node-count 1

 


 


Calico for Windows


Calico for Windows is officially integrated into the Azure platform. Every time you add a Windows node in AKS, it will come with a preinstalled version of Calico. To check this, use the following command to ensure EnableAKSWindowsCalico is in a Registered state:


 

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"

 


 


Expected output:


 

Name                                               State
-------------------------------------------------  ----------
Microsoft.ContainerService/EnableAKSWindowsCalico  Registered

 


 


If your query returns a Not Registered state or no items, use the following command to enable AKS Calico integration for your account:


 

az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"

 


 


After EnableAKSWindowsCalico becomes registered, you can use the following command to add the Calico integration to your subscription:


 

az provider register --namespace Microsoft.ContainerService

 


 


Exporting the cluster key


Kubernetes implements an API Server that provides a REST interface to maintain and manage cluster resources. Usually, to authenticate with the API server, you must present a certificate, username, and password. The Azure command-line interface (Azure CLI) can export these cluster credentials for an AKS deployment.


 


Use the following command to export the credentials:


 

az aks get-credentials --resource-group calico-win-container --name CalicoAKSCluster

 


 


 


After exporting the credential file, we can use the kubectl binary to manage and maintain cluster resources. For example, we can check which operating system is running on our nodes by using the OS labels.


 

kubectl get nodes -L kubernetes.io/os

 


 


You should see a similar result to:


 

NAME                                STATUS   ROLES   AGE     VERSION   OS
aks-nodepool1-64517604-vmss000000   Ready    agent   6h8m    v1.22.6   linux
akscalico000000                     Ready    agent   5h57m   v1.22.6   windows

 


 


Windows workloads


If you recall, Kubernetes API Server is the interface that we can use to manage or maintain our workloads.


 


We can use the same syntax to create a deployment, pod, service, or Kubernetes resource for our new Windows nodes. For example, we can use the same OS selector that we previously used for our deployments to ensure Windows and Linux workloads are deployed to their respective nodes:


 

kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/00_deployment.yaml

 


 


Since our workload is a web server created by Microsoft’s .NET technology, the deployment YAML file also packages a service load balancer to expose the HTTP port to the Internet.


 


Use the following command to verify that the load balancer successfully acquired an external IP address:


 

kubectl get svc win-container-service -n win-web-demo

 


 


You should see a similar result:


 


 

NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
win-container-service   LoadBalancer   10.0.203.176   20.200.73.50   80:32442/TCP   141m

 


 


 


Use the “EXTERNAL-IP” value in a browser, and you should see the demo page served by the Windows container.




 


Perfect! Our pod can communicate with the Internet.


 


Securing Windows workloads with Calico


The default security behavior for the Kubernetes NetworkPolicy resource permits all traffic. While this is a great way to set up a lab environment, in a real-world scenario it can severely impact your cluster’s security.


 


First, use the following manifest to enable the API server:


 

kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/01_apiserver.yaml

 


 


Use the following command to get the API Server deployment status:


 

kubectl get tigerastatus

 


 


You should see a similar result to:


 

NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      10h
calico      True        False         False      10h

 


 


 


Calico offers two security policy resources that can cover every corner of your cluster. We will implement a global policy since it can restrict Internet addresses without the daunting procedure of explicitly writing every IP/CIDR in a policy.


 

kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/02_default-deny.yaml

 


 


If you go back to your browser and click the Try again button, you will see that the container is isolated and cannot initiate communication to the Internet.




 


Note: The source code for the workload is available here.


 


Clean up
If you have been following this blog post and did the lab section in Azure, please make sure that you delete the resources, as cloud providers will charge you based on usage.


Use the following command to delete the resource group:
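
az group delete --name calico-win-container --yes --no-wait

This removes the calico-win-container resource group created at the beginning of this walkthrough, along with the AKS cluster and everything deployed into it. The --yes flag skips the confirmation prompt, and --no-wait returns immediately while the deletion continues in the background.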


 


Conclusion


While network policy may not be critical for lab scenarios, production workloads have a much higher bar of security requirements to meet. Calico offers a simple and integrated way to apply network policies to Windows workloads on Azure Kubernetes Service. In this blog post, we covered the basics of implementing a network policy for a simple web server. You can check out more information on how Calico works with Windows on AKS in our documentation page.


 




Security baseline for Microsoft Edge version 113

This article is contributed. See the original author and article here.

We are pleased to announce the security review for Microsoft Edge, version 113!


 


We have reviewed the new settings in Microsoft Edge version 113 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 112 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit.


 


Microsoft Edge version 113 introduced 3 new computer settings and 3 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.


 


As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.


 


Please continue to give us feedback through the Security Baselines Discussion site or this post.

How to Build an End-to-End Power BI Project with DAX Suggestion


This article is contributed. See the original author and article here.



 


DAX is now your Friend 




Learning and understanding DAX in Power BI can come with some challenges, especially for beginners. What if you could write DAX using just natural language? Isn’t that awesome?


 


Yes, DAX is now your friend.


Let’s analyze the magic happening in the image below.


 


1. Write what you want to achieve in natural language, and the AI automatically generates the DAX function to achieve it.


2. Notice that I intentionally misspelled “Total” as “Toal”, yet it still understood what I was trying to do.


 


Now imagine what you will learn going through this live session with us. 


 


[Animation: DAX suggestions in action]


 


This session focuses on helping you to improve your DAX knowledge and skill.


We will do this by working on a full Power BI report project, using the new AI capabilities in DAX to get it done.


 


About the Session


Are you ready to witness the latest and greatest capabilities of Power BI’s DAX language, now infused with Artificial Intelligence? In this session, we will take you through an exhilarating journey of building a complete Power BI report project, utilizing the powerful DAX language and its new AI capabilities.


 


Our expert presenters will showcase how to leverage the DAX suggestions feature to optimize your data model and make your report building process faster and more efficient. You will learn how to use DAX to create custom calculations and measure your data, while also harnessing the power of AI to enhance the accuracy and intelligence of your reports.


 


Throughout the session, you will get an inside look at how DAX suggestions can simplify and streamline your data analysis process, allowing you to focus on creating valuable insights and visualizations for your audience.


 


Whether you are a seasoned Power BI user or just starting out, this live session will provide you with valuable insights and practical tips to help you master the art of building end-to-end Power BI projects with DAX suggestions.


 


Join us for an exciting and informative Power BI live session that is sure to leave you inspired and equipped with the latest tools and techniques to take your data analysis and reporting to the next level.


 


Register


Event Date: May 18th, 2023


Time: 2PM (GMT+1)


To register, click here: https://aka.ms/PowerBIDAXSuggestion


 


Additional Resources


Start Learning About DAX Suggestion Here


 


 


 

Get quick access to your most-used knowledge articles


This article is contributed. See the original author and article here.

Are you tired of constantly searching for the same knowledge articles in Dynamics 365 Customer Service? Say goodbye to wasting precious time and hello to the new favorite feature! 

We are excited to announce a new knowledge management feature in Dynamics 365 Customer Service that allows you to mark knowledge articles as favorites. 

With this new feature, you can easily save up to 50 knowledge articles that you frequently use as favorites. This allows you to access them quickly while you’re working on a case, without having to search for them every time. 

To use this feature, your administrator will need to enable it and provide privileges to specific roles. Once enabled, you can easily mark an article as a favorite by selecting the heart icon next to it in the search results. 

Knowledge article on the agent home page with list of favorite knowledge articles in the left pane

Using your favorite knowledge articles

When administrators enable favorites, agents can 

  • Select an article from the list of search results. 
  • Click on the favorite (heart) icon to add the article to your favorites. 
  • Remove an article from the favorites list by clearing the favorite (heart) icon. 

All of your saved articles will appear in the My favorites tab. You can access this tab from various places within the app, including the app side pane, standalone search control, form-embedded control, and the reference pane. 

The best part? The article you last marked as a favorite will appear first in your list of favorites. When you delete an article, it will no longer appear on your favorites list. Additionally, your favorite articles are saved in the language in which you viewed them when you marked them as a favorite. If you view a translated version of a favorite article, it won’t appear as a favorite. Also, when a favorite article has multiple versions, the new version appears as a favorite and replaces the earlier version. 

If you’re using the Customer Service workspace or Omnichannel for Customer Service, selecting a favorite article will open it in an app tab. 

We hope this feature makes it easier to access the knowledge articles you use most in Dynamics 365 Customer Service.

Learn more

Watch a quick video introduction.

To learn more about enabling and using favorites, read the documentation:


Seeking Feedback for new Data Mapper

This article is contributed. See the original author and article here.

Azure Logic Apps recently announced the public preview release of the new Data Mapper extension. If you haven’t had a chance to learn about this exciting new tool, check out our announcement. Already had the opportunity to test out Data Mapper? Consider meeting with the team as we are looking for feedback on this new extension.


 


Call for Feedback


We want to hear from you about your experiences thus far with the new Data Mapper extension. Your time and thoughts are appreciated and important to us in ensuring the best future for our product. We’re focused on hearing from developers and including your thoughts in our upcoming release.


If you are interested in providing your insight to our team, please fill out this form so we can schedule a time to meet with you.

Migrating your Human Resources environment! 


This article is contributed. See the original author and article here.

Have you tried migrating your Dynamics 365 Human Resources (D365 HR) environment yet? We announced in early December the general availability (GA) release of the automated tooling for the lift-and-shift migration of your standalone Human Resources environment.  

Albert Einstein said, “The measure of intelligence is the ability to change” and that is how we feel about the merged and improved Human Resources on the Dynamics 365 Finance and Operations infrastructure.  

As part of the infrastructure merge, all capabilities of the Human Resources application have been made available in Dynamics 365 Finance and Operations (Finance and Operations) environments. Customers can migrate their Human Resources environments using the migration tooling that is available in Microsoft Dynamics Lifecycle Services (LCS). They can also optionally merge their data with their existing  Finance and Operations environment. 

Benefits of migrating to the merged infrastructure 

Did you know that we were able to close approximately twenty-five ideas with more than 500 votes as a result of leveraging the existing platform functionality on the Finance infrastructure? Moving to the new infrastructure can help you with the following:  

  • You get one set of human resources capabilities within Dynamics 365 including: 
    • All the previous capabilities and enhancements on the standalone Human Resources application are now in the merged Human Resources.  
    • All the new functionalities that have been added since August 2022, which includes enhancements in Personnel management, Leave and Absence management, Benefits managements, Learning management, Dual-write enhancements, Resource management integration, Process automation, etc.   

Picture describing the list of features the product team has been up to including 500+ idea votes

  • All planned roadmap functionality will only be available in the merged Human Resources environments on the Finance infrastructure.

Image detailing wave 1 investments including organization agility, employee experiences, and optimizing of HR programs

  • The extensibility options and the experience is improved through Dynamics and the Finance and Operations platform ecosystem including:
    • Enhanced Power Platform capabilities 
    • Consistent platform maintenance including: deployment, updates, Application Lifecycle Management, Lifecycle Services, Geographic availability, etc.  

Customer Experience 

We have had many customers complete their sandbox and production migrations successfully on the Finance and Operations platform.  Additionally, we have had a number of customers merge (consolidate) their Human Resources production environment with an existing Finance and Operations environment.  

One of our customers, TecAlliance, migrated their global implementation of their standalone Human Resources environments (used in 22 countries), using the automated migration tooling, in a very short span.  Here’s what they had to say about their experience:

“We were able to migrate to the Finance and Operations infrastructure seamlessly; after intensive testing, we encountered only one bug and were able to get it resolved very quickly. From the initial implementation of Dynamics 365 Human Resources to the migration of Human Resources on to the new infrastructure, it took less than 12 months thanks to the amazing support from Microsoft.” 

-TecAlliance

Call to Action 

Migrate your standalone Human Resources environment to the Finance and Operations infrastructure using the automated tooling available in LCS.   

If you have complexities around integrations, or are working to merge or consolidate with another Finance and Operations application, we recommend working with your Dynamics partner to determine the right approach.  We also recommend working with your designated Solution Architect if you are part of the FastTrack or the ACE program.  

Learn more:  

  • Learn more about the infrastructure merge here  
  • For detailed information on new capabilities in Human Resources, see Release plans: 2022 Release Wave 2, 2023 Release Wave 1 
  • Join the Human Resource Customers and Partners Yammer group for collaboration, feedback and to stay informed on product related announcements.  


Improve Sales process efficiency using sequence insights


This article is contributed. See the original author and article here.

Sequences in Dynamics 365 Sales are a valuable sales engagement tool for businesses looking to drive efficiency via standardized sales processes. When sellers engage with sequences, it’s important to understand whether those sequences are supporting the process effectively. This is where sequence insights come into play.

Sequence insights provide data on the performance of sequences and their steps. Businesses can use sequence insights to review and analyze the effectiveness of their current sequences by identifying bottlenecks, and take corrective measures. Also, they can update the wait times and steps to ensure that the overall process is efficient and working as intended.

Additionally, sequence insights can help businesses identify best practices that are working well across the sales team. By analyzing data on successful sales sequences, businesses can identify common patterns and strategies that are driving success.

Reporting and insights are available at two levels:

  • Sequence level analytics that provide aggregate data for the entire sequence.
  • Step level analytics that provide data for the selected step.

Let’s take a quick tour to see how you can use them to optimize and improve sequences.

Gain insights with sequence reporting

First up is the “Sales Acceleration Report” (available to sales managers/operations role only). Here, a dashboard is available to review how sequences are performing all-up:

Sequence reporting dashboard

You can review all sequences together for a team-wide view or set filters to select specific sequences for performance evaluation. Not only do you get common success measures such as conversion rate and opportunity to win ratio, but you also get success rates of each sequence, along with distribution of “status” for different customer segments.

This report and view are optimized for business-level analysis and, when needed, let you drill down to a few selected sequences.  

You can also get an all-up summary report at the sequences page that provides a list view of all sequences. There you will find the “Sequence stats” tab that shows progress of each active sequence in terms of % completion for connected records, success rates, and average duration. This is a great view for operational excellence to keep an eye on sequences whose execution or success rate is out of the ordinary.

Sequence stats

Improve efficiency with sequence insights

You can get even more insights at the sequence and step level. For this, open the desired active sequence. You will see sequence insights and analytics right away: the sequence itself is shown as a Sankey graph, where each line shows how many records (leads, opportunities, etc.) were processed through that part of the sequence.

This is a quick and easy way to check overall performance of the sequence and spot points in the sequences where a larger than expected number of records is getting stuck or leaving the sequence.

Sequence analytics

Select the starting step of the sequence to get additional insights for it. Things to look for in this view:

  1. Is there a particular stage in the sequence where the amount of incoming volume significantly surpasses the number of qualified records that move on to the next stage? This is a common occurrence if a step is designed to filter out unqualified prospects. However, it’s not necessarily expected in other steps. By examining the report, you can identify these drop-off points and conduct a deeper analysis. This will help you determine why most customers are not progressing past this stage, and whether the drop-off rate is in line with expectations.
  2. Is the average number of days for completing this sequence within the expected range? If not, you can use step level insights to diagnose where most of the time is spent, and to identify what improvements are needed.
  3. Review disconnection reasons for any abnormality (e.g., are sellers disconnecting sequences at a high rate?).

Deep dive for more analytics

Analytics is not limited to the overall sequence: you can select any step in the sequence and get detailed insights to understand how sellers are executing that step. Here is an example for the sending email step:

Step analytics

Email insights (and insights for other communication steps) are focused on measuring engagement: did the prospect receive the message, open it, and interact with its content?

All step level insights include details of their execution status: for example, how many records have completed this step, how many are in progress, or how many were skipped, along with numbers and reasons for disconnection (e.g., manually disconnected, record inactivated, etc.). Finally, the “time taken” information is presented as a histogram; this view is great because it quickly lets you figure out averages as well as outlier buckets.

Things to look for in a step analytics:

  1. Are the average time taken for completing the sequence and the completion rate within the expected ranges?
  2. Are there any steps that are outliers for disconnections or “in-progress”?
  3. If steps are about communicating with customers (e.g., email, phone call), are prospects engaging as expected?

Record level information

If you need to see progress for any specific record, navigate to the “Connected” tab for the record type (e.g., “Connected leads” if the sequence is for leads). You will see a list of all connected records and their progress in terms of steps completed, current step, days elapsed, and which salesperson owns the record. This view enables you to quickly spot any outliers (e.g., records that are taking longer than others, or a salesperson who seems blocked on a step and requires assistance).

Record level information

Final thoughts

Sequences can be an efficient and straightforward sales engagement tool for businesses that seek to provide guidance to their sales representatives. By incorporating automation (e.g., automated emails), sequences can reduce manual effort, making the sales process more streamlined and effective. Furthermore, with our new sequence insights, sales managers and operations teams can better understand how sequences are working for their sellers, including the identification of bottlenecks within the process. This data can be used to benchmark overall sequence execution velocity and success rates, allowing for more informed decision-making and ultimately improving the sales process.

Next steps

Don’t have Dynamics 365 Sales yet? Try it out now: Sales Overview Dynamics Sales Solutions | Microsoft Dynamics 365

Learn more about sequences and how to create them:
Sequences in sales accelerator | Microsoft Learn

Explore our getting started templates to quickly create sequences and try them for yourself:
Sequence templates | Microsoft Learn

Learn more about sequence analytics:
View and understand sales acceleration reporting | Microsoft Learn
Understand the sequence stats page | Microsoft Learn
