This article is contributed. See the original author and article here.
In today’s data-driven world, businesses rely on customer data to fuel their marketing strategies. They need to access, analyze, and act on this data to power personalized experiences that drive return on marketing investments. However, this comes with the challenges of (1) configuring systems like a Customer Data Platform correctly and (2) ensuring high data quality within these systems.
A Gartner research study reported that high-quality data provides “better leads, better understanding of customers, and better customer relationships” and that “every year, poor quality data costs organizations an average of $12.9 million.” This is why it is crucial to understand the current configuration state of your Customer Insights – Data environment and the quality of your data; addressing these challenges is the key to unlocking the most relevant and impactful insights about your customers.
We recently shipped generative AI-powered features in Dynamics 365 Customer Insights – Data to help organizations improve data quality and configuration with Copilot, so they can empower business users with the best insights to deliver highly personalized customer experiences.
This blog post will share more information on how you can improve data quality and configuration with Copilot. With these features you can:
Review the current status of your Customer Insights – Data environment,
Understand the overall health of your data,
Consult which insights can be generated successfully from your data,
Act on recommendations to unlock more insights.
To illustrate how these features work, let’s see how they can be used to improve the speed and quality of an email marketing campaign to target high lifetime value customers with a ‘thank you’ discount on their next purchase.
Quickly know if your jobs have run successfully, and where to go if not, with Copilot
Contoso Coffee recently implemented Customer Insights – Data, which involved integrating source data from various systems and creating unified customer profiles. To ensure that everything was running smoothly, they checked the system settings. Environment Status Summary, a Copilot feature, not only highlighted a recent issue but also used AI to identify where the issue occurred and provided a direct link to investigate. Thanks to this feature, Contoso’s IT team was able to quickly fix a skipped customer profile job that would have otherwise blocked them from generating insights for an upcoming email marketing campaign. With the problem resolved in minutes, they could focus on re-engaging high lifetime value customers in a timely manner.
Understand your overall data quality with Copilot
Now that Contoso’s environment is running smoothly, they want to quickly understand the general health of their data estate.
On the Home Page, they review a summary of their data quality provided by the Data Prep Report, a Copilot feature. This summary includes a data quality grade, which insights are available, the most critical data quality issues, and a link to a detailed data prep report to learn more. Using this summary, Contoso can see that their data quality is medium, with a 75% score. They are able to generate some insights, but not the customer lifetime value prediction they want for their email marketing campaign.
If not for this summary, Contoso would have attempted to configure, train, score, and run a customer lifetime value prediction that would have failed completely or produced low-grade results. The summary shows where their data stands, so they don’t have to go through the frustration of trying to generate insights from unusable data.
See which insights can be generated successfully from your data
Next, Contoso wants to dig deeper into the report to understand the next steps to build their email campaign. They click into the full Data Prep Report, which informs them that they can generate churn predictions, segments, or measures based on their current data. However, they want to pursue a customer lifetime value prediction to support their campaign. They filter the report to review the detailed issues and recommendations specific to customer lifetime value and see the issues listed in priority order from highest to lowest severity. The report gives them the targeted, easy-to-digest information they need to know how to proceed.
Act on recommendations to unlock more insights
Finally, Contoso engages their IT team to act on the detailed issues and recommendations. The IT team follows the recommendations by taking the suggested actions, such as adding more data that incorporates products with a sufficient number of purchases. With minimal time, effort, and ambiguity, they are able to improve their data and light up the customer lifetime value prediction they want for their marketing campaign.
Create and use high-impact insights in marketing campaigns
With the help of Environment Status Summary and Data Prep Report, Contoso Coffee is able to get their Customer Data Platform environment set up correctly and resolve their top data quality issues effectively. By improving data quality and configuration with Copilot, they are able to instantly get rich insights, such as customer lifetime value predictions, which are conveniently available out of the box in Customer Insights – Data. This lets their marketing team focus on launching an effective email campaign that provides relevant and in-the-moment offers to their highest value customers to drive business results. Consult our product documentation and start using these AI-powered features today to achieve similar results!
What are some ways to engage further with Customer Insights – Data?
If you’re a new user, or want to test with demo data: Start a trial of Customer Insights
This article is contributed. See the original author and article here.
We are constantly evolving the Microsoft 365 platform by introducing new experiences like Microsoft Clipchamp and Microsoft Loop—available now for Microsoft 365 Business Standard or Microsoft 365 Business Premium subscribers.
This article is contributed. See the original author and article here.
The Viva Engage Festival, hosted by Swoop Analytics, is an interactive virtual event that brings together Viva Engage thought leaders, communication innovators, and community enthusiasts from around the globe. This is not just another webinar; it’s an opportunity to dive deep into the future of employee engagement, learn about new tech, explore the latest Viva Engage experiences, and connect with a community passionate about driving change in their businesses.
Hear from leading customers and directly from Microsoft
Viva Engage Festival includes customer speakers and industry experts from Comcast, NSW Government, Johnson and Johnson, Vestas, and more, who will share knowledge and expertise on a wide range of topics around Viva Engage. Join us for an exclusive look into Microsoft’s journey with Viva Engage and communities as we share our own experiences.
We hope you join us to connect with like-minded individuals who share a passion for driving meaningful engagement. Whether you’re a business leader, a professional, or an enthusiast, you’ll leave the festival with the inspiration and knowledge needed to take your Viva Engage investments to the next level.
Nominate a Viva Engage Community Champion!
As part of our 2023 Viva Engage Festival, Microsoft and SWOOP Analytics will announce this year’s regional winners of the Community Champion Award. The Viva Engage Community Champion Award is an opportunity to recognize passionate community managers around the world who are committed to employee engagement, knowledge sharing, and collaboration in their Viva Engage networks. Can you think of anyone who deserves this title? Let us know who it might be! The 2023 Viva Engage Community Champion will be announced for each region during the festival. Nominations close November 30, 2023.
This article is contributed. See the original author and article here.
Ignite has come to an end, but that doesn’t mean you can’t still get in on the action!
Display Your Skills and Earn a New Credential with Microsoft Applied Skills
Advancements in AI, cloud computing, and emerging technologies have increased the importance of showcasing proficiency in sought-after technical skills. Organizations are now adopting a skills-based approach to quickly find the right people with the appropriate skills for specific tasks. With this in mind, we are thrilled to announce Microsoft Applied Skills, a new platform that enables you to demonstrate your technical abilities for real-world situations.
Microsoft Applied Skills gives you a new opportunity to put your skills center stage, empowering you to showcase what you can do and what you can bring to key projects in your organization. This new verifiable credential validates that you have the targeted skills needed to implement critical projects aligned to business goals and objectives.
Two Security Applied Skills credentials have been introduced:
Learners should have expertise in Azure infrastructure as a service (IaaS) and platform as a service (PaaS) and must demonstrate the ability to implement regulatory compliance controls as recommended by the Microsoft cloud security benchmark by performing the following tasks:
Learners should be familiar with Microsoft Security, compliance, and identity products, the Azure portal, and administration, including role-based access control (RBAC), and must display their ability to set up and configure Microsoft Sentinel by demonstrating the following:
Create and configure a Microsoft Sentinel workspace
Deploy a Microsoft Sentinel content hub solution
Configure analytics rules in Microsoft Sentinel
Configure automation in Microsoft Sentinel
Earn these two credentials for free for a limited time only.
View the Learn Live Sessions at Microsoft Ignite On-demand
Learn Live episodes guide learners through a module on Microsoft Learn, working through it in real time. Microsoft experts lead each episode, providing helpful commentary and insights and answering questions live.
The Microsoft Ignite Edition of Microsoft Learn Cloud Skills Challenge is underway. There are several challenges to choose from, including the security-focused challenge Microsoft Ignite: Optimize Azure with Defender for Cloud. If you complete the challenge, you can earn an entry into a drawing for VIP tickets to Ignite next year. You have until January 15th to complete the challenge. Get started today!
Keep up-to-date on Microsoft Security with our Collections
This article is contributed. See the original author and article here.
Have you ever wondered why some SQL queries take forever to execute, even when the CPU usage is relatively low? In our latest support case, we encountered a fascinating scenario: A client was puzzled by a persistently slow query. Initially, the suspicion fell on CPU performance, but the real culprit lay elsewhere. Through a deep dive into the query’s behavior, we uncovered that the delay was not due to CPU processing time. Instead, it was the sheer volume of data being processed, a fact that became crystal clear when we looked at the elapsed time. The eye-opener was our use of SET STATISTICS IO, revealing a telling tale: SQL Server Execution Times: CPU time = 187 ms, elapsed time = 10768 ms. Join us in our latest blog post as we unravel the intricacies of SQL query performance, emphasizing the critical distinction between CPU time and elapsed time, and how understanding this can transform your database optimization strategies.
Introduction
In the realm of database management, performance tuning is a critical aspect that can significantly impact the efficiency of operations. Two key metrics often discussed in this context are CPU time and elapsed time. This article aims to shed light on these concepts, providing practical SQL scripts to aid database administrators and developers in monitoring and optimizing query performance.
What is CPU Time?
CPU time refers to the amount of time for which a CPU is utilized to process instructions of a SQL query. In simpler terms, it’s the actual processing time spent by the CPU in executing the query. This metric is essential in understanding the computational intensity of a query.
What is Elapsed Time?
Elapsed time, on the other hand, is the total time taken to complete the execution of a query. It includes CPU time and any additional time spent waiting for resources (like IO, network latency, or lock waits). Elapsed time gives a more comprehensive overview of how long a query takes to run from start to finish.
Why Are These Metrics Important?
Understanding the distinction between CPU time and elapsed time is crucial for performance tuning. A query with high CPU time could indicate computational inefficiency, whereas a query with high elapsed time but low CPU time might be suffering from resource waits or other external delays. Optimizing queries based on these metrics can lead to more efficient use of server resources and faster query responses.
Practical SQL Scripts
Let’s delve into some practical SQL scripts to observe these metrics in action.
Script 1: Table Creation and Data Insertion
-- Sample table used to compare CPU time vs. elapsed time
CREATE TABLE EjemploCPUvsElapsed (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Nombre VARCHAR(5000),
    Valor INT,
    Fecha DATETIME
);

-- Populate the table with 200,000 wide rows (~460-character strings)
DECLARE @i INT = 0;
WHILE @i < 200000
BEGIN
    INSERT INTO EjemploCPUvsElapsed (Nombre, Valor, Fecha)
    VALUES (CONCAT(REPLICATE('N', 460), @i), RAND()*(100-1)+1, GETDATE());
    SET @i = @i + 1;
END;
This script creates a table and populates it with sample data, setting the stage for our performance tests.
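As a side note, inserting 200,000 rows one statement at a time commits (and flushes the transaction log) once per INSERT, so loading the demo data can itself take a while. A minimal, optional variant, assuming you are free to batch the demo load, wraps the loop in an explicit transaction so the work is committed once:

-- Optional: batch the demo inserts in a single transaction to speed up loading
BEGIN TRANSACTION;

DECLARE @i INT = 0;
WHILE @i < 200000
BEGIN
    INSERT INTO EjemploCPUvsElapsed (Nombre, Valor, Fecha)
    VALUES (CONCAT(REPLICATE('N', 460), @i), RAND()*(100-1)+1, GETDATE());
    SET @i = @i + 1;
END;

COMMIT TRANSACTION;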
Script 2: Enabling Statistics
Before executing our queries, we enable statistics for detailed performance insights.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
Script 3: Query Execution
We execute a sample query to analyze CPU and elapsed time.
-- Scan the entire table and sort it by a random GUID, so the query has to
-- read and move every wide row
SELECT *
FROM EjemploCPUvsElapsed
ORDER BY NEWID() DESC;
Script 4: Fetching Performance Metrics
Finally, we use the following script to fetch the CPU and elapsed time for our executed queries.
SELECT
    sql_text.text,
    stats.execution_count,
    stats.total_elapsed_time / stats.execution_count AS avg_elapsed_time, -- microseconds
    stats.total_worker_time / stats.execution_count AS avg_cpu_time       -- microseconds
FROM
    sys.dm_exec_query_stats AS stats
CROSS APPLY
    sys.dm_exec_sql_text(stats.sql_handle) AS sql_text
ORDER BY
    avg_elapsed_time DESC;
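If, as in the support case described above, elapsed time far exceeds CPU time, it is also worth checking read volumes from the same DMV. The following optional sketch (not part of the original scripts) adds the total_logical_reads and total_physical_reads columns of sys.dm_exec_query_stats to surface queries whose duration is driven by data volume:

-- Optional: include read counters to see whether I/O volume explains the gap
SELECT
    sql_text.text,
    stats.execution_count,
    stats.total_elapsed_time / stats.execution_count AS avg_elapsed_time,
    stats.total_worker_time / stats.execution_count AS avg_cpu_time,
    stats.total_logical_reads / stats.execution_count AS avg_logical_reads,
    stats.total_physical_reads / stats.execution_count AS avg_physical_reads
FROM
    sys.dm_exec_query_stats AS stats
CROSS APPLY
    sys.dm_exec_sql_text(stats.sql_handle) AS sql_text
ORDER BY
    avg_logical_reads DESC;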
Conclusion
Understanding and differentiating between CPU time and elapsed time in SQL query execution is vital for database performance optimization. By utilizing the provided scripts, database professionals can start analyzing and improving the efficiency of their queries, leading to better overall performance of the database systems.
This article is contributed. See the original author and article here.
We are thrilled to announce an addition to our Database Migration Service capability supporting the Oracle to SQL scenario, along with the general availability of the Oracle assessment and database schema conversion toolkit. In tune with the changing landscape of user needs, we’ve crafted a powerful capability that seamlessly blends efficiency, precision, and simplicity, promising to make your migration journey smoother than ever.
Why Migrate?
Shifting from Oracle to SQL opens a world of advantages, from heightened performance and reduced costs to enhanced scalability.
Introducing the Database Migration Service Pack for Oracle
At the core of our enhanced Database Migration Service is the seamless integration with Azure Data Studio Extensions. This dynamic fusion marries the best of Microsoft’s Azure platform with the user-friendly interface of Azure Data Studio, ensuring a migration experience that’s both intuitive and efficient.
What’s Inside the Service Pack:
Holistic Assessment:
Gain deep insights into your Oracle database with comprehensive assessment tools.
Identify potential issues, optimize performance, right-size your target, and enjoy automated translation of Oracle PL/SQL to T-SQL.
Effortlessly convert Oracle schema to SQL Server format.
The conversion wizard guides you through the process, providing a detailed list of successfully converted objects and highlighting areas that may need manual intervention.
The database conversion employs SQL Project, delivering a familiar development experience.
Reuse and deploy your previous development work, minimizing the learning curve and maximizing efficiency.
Elevate Your Database Experience
Our Database Migration capability is not just a tool; it’s a solution for transitioning from Oracle to SQL with ease. Ready to embark on a migration journey that exceeds expectations? Keep an eye out for updates, tutorials, and success stories as we unveil this transformative capability.
Modernize your database, upgrade your possibilities.
This article is contributed. See the original author and article here.
Digital Marketing Content (DMC) OnDemand works as a personal digital marketing assistant, delivering fresh, relevant, and customized content to share on social, email, website, or blog. It runs 3-to-12-week digital campaigns that include to-customer content and to-partner resources. This includes an interactive dashboard that allows partners to track both campaign performance and leads generated in real time and to schedule campaigns in advance.
TIPS AND TRICKS
DMC campaigns were created to assist you with your marketing strategies in an automated way. However, we understand that you want to make sure the focus remains on your business as customers and prospects discover your posts. There are several ways you can customize campaigns to put the focus on your business and offerings:
Customize the pre-written copy | Although we provide you with copy for your social posts, emails, and blog posts, pivoting this copy to highlight your unique value can help ensure customers and prospects understand more about your business and how you can help solve their current pain points.
Upload your own content throughout the campaigns | If you have access to the Partner GTM Toolbox co-branded assets, you can create your own content quickly and easily through customizable templates. Choose your colors, photography, and copy to help customers and prospects understand more about your business. Alternatively, you can learn more about how to create your own content by reading the following blog posts: one-pagers and case studies. Once complete, click on “Add new content” within any campaign under “Content to share”.
Engage with your audience | Are people replying to your LinkedIn, Facebook, and X (formerly Twitter) posts? Take some time to respond to build a rapport.
Access customizable content | Many campaigns in PMC contain content that was designed for you to customize. Microsoft copy is included, but designated sections are left blank for your copy and placeholders are added to ensure you are following co-branding guidelines. You can find examples here.
Upload your logos | Cobranded content is being added on a regular basis, so make sure you’re taking advantage of this recently added functionality to extend your reach.
NEW CAMPAIGNS
NOTE: To access localized versions, click the product area link, then select the language from the drop-down menu.
This article is contributed. See the original author and article here.
Navigating the complex world of business data in the age of AI presents unique challenges to enterprises. Organizations grapple with harnessing disparate data sources and addressing security and compliance risks while maintaining cost effectiveness. These overwhelming challenges often lead to inefficiencies and missed opportunities. Dynamics 365 and Dataverse offer a unified platform for managing data effectively and securely at hyperscale, while empowering low-code makers and business users of Microsoft Dynamics 365 and Power Platform.
Customers in different industries, from finance to retail, have trusted their data and processes to Microsoft Dynamics 365 and Power Platform for years. We are excited to announce multiple features that help IT administrators navigate rising AI, data, and security challenges effectively. With Microsoft Purview integration, data governance and compliance risks are significantly reduced. Microsoft Sentinel provides vigilant monitoring against threats, while enhanced logging to the Microsoft 365 unified audit log tackles insider threats head-on. With Dataverse Link to Fabric, administrators can enable simpler data integration for low-code makers, without the need to build and govern complex data pipelines. Moreover, the introduction of Dataverse elastic tables and long-term data retention strategies promises a substantial improvement in both hyperscale management and ROI, reinforcing a robust, secure, and cost-efficient data ecosystem.
Protect your data and assets in the age of AI
Growing cyber risks and increased corporate liability exposure for breaches have driven an increased focus on security in many organizations. To address this, Dataverse provides a comprehensive platform to secure your data and assets. At Ignite 2023 we are announcing several new security capabilities:
Govern Dynamics 365 and Power Platform data in Dataverse through Microsoft Purview integration
Dataverse integration with Microsoft Purview’s Data Map, available shortly in public preview, enables automated data discovery and sensitive data classification. The integration will help your organization understand and govern its business applications data estate, safeguard that data, and improve its risk and compliance posture. Learn more here: http://aka.ms/DataversePurviewIntegration
Monitor and react to threats with Sentinel
Microsoft Sentinel solution for Microsoft Power Platform will be in public preview across regions over the next few weeks. Microsoft Sentinel is also integrated with Dynamics 365, with recently added OOB analytics rules. With Sentinel integration, customers can detect various suspicious activities such as Microsoft Power Apps execution from unauthorized geographies, suspicious data destruction by Power Apps, mass deletion of Power Apps, phishing attacks (via Power Apps), Power Automate flows activity by departing employees, Microsoft Power Platform connectors added to an environment, and the update or removal of Microsoft Power Platform data loss prevention policies. Learn more here: http://aka.ms/DataverseSentinelIntegration
Manage the risk of insider threats via enhanced logging to the Microsoft 365 Unified Audit Log
To manage the risk of insider threats, all administrator actions in Power Platform are logged to the Microsoft 365 Unified Audit Log, giving security teams that manage compliance, and insider risk management teams that act on events, the ability to mitigate risks in the organization. Learn more here: http://aka.ms/PowerPlatformAdminAuditLogging
Seamlessly integrate your Dynamics 365 data with Fabric and Microsoft 365
Low-code makers can link to Microsoft Fabric from the Analyze menu in the Power Apps maker portal command bar. The system validates the configuration and lets you choose a Fabric workspace without leaving the maker portal. When you confirm, the system securely links all your tables into a system-generated Synapse Lakehouse in the Fabric workspace you selected. Your data stays in Dataverse while you work with all Fabric workloads like SQL, Python, and Power BI without making copies or building pipelines. As data gets updated in Dataverse, changes are reflected in Fabric in near real time.
We are also excited to announce joint partner and ISV solutions with Dynamics 365, Power Platform, and Microsoft Fabric. Partners and system integrators are leveraging the Dataverse Link to Fabric to provide value-added solutions that combine business functionality with insights and built-in actions.
MECOMS, a top recommended solution for energy and utility companies across the globe, enables next-generation cities with smart energy and utility solutions that manage consumption from “meter to cash”. MECOMS 365, built on Dynamics 365 and Microsoft Fabric, gathers smart meter data from homes and businesses, processes billing and reconciliation in Dynamics 365, and integrates with Dynamics 365 for customer engagement. Smart cities can not only provide billing and excellent service, but also give customers insights into how they can lower consumption and save money.
Ian Bruyninckx, Lead Product Architect, MECOMS, A Ferranti Company
ERP customers can extend their insights and reduce TCO by upgrading to Synapse Link
If you are a Dynamics 365 for Finance and Operations (F&O) customer, we have exciting news to share. The Synapse Link for Dataverse service built into Power Apps, the successor to the Export to Data Lake feature in finance and operations apps, is now generally available. By upgrading from the Export to Data Lake feature in F&O to Synapse Link, you can benefit from improved configuration and enhanced performance, which translates to a reduction in the total cost of ownership (TCO).
Reference Dataverse records with Microsoft 365 Context IQ to efficiently access enterprise data
One of the most time-consuming tasks for anyone who uses email is sharing information from line-of-business applications with colleagues. You must jump out of your Outlook web experience, open your line-of-business app, navigate to a record, and then copy and paste the link into your email. This is an incredibly time-consuming set of steps and actions.
We are excited to announce the general availability of Dataverse integration with Microsoft 365 Context IQ, a new feature that makes it possible for users to access their most recently used data directly from the Outlook Web Client using a simple gesture.
Efficiently hyper(scale) your business applications
In the age of AI, customers are challenged with managing data at hyperscale from a plethora of sources. As a polyglot hyperscale data platform for structured relational and unstructured non-relational data, Dataverse can support all your business scenarios with security, governance, and life cycle management, with no storage limitations.
For very high scale data scenarios, such as utilizing third-party data for time-sensitive marketing campaigns or driving efficiency with predictive maintenance for your IoT business, Dataverse elastic tables are a powerful addition to standard tables, supporting both relational and non-relational data. You can even optimize large-volume data storage with a time-defined auto-delete capability.
Additionally, while Dataverse can support your business growth with no limit on active data, to meet your company’s compliance, regulatory, or other organizational policies, you can retain inactive data with Dataverse long-term data retention and save 50% or more on storage capacity.
This article is contributed. See the original author and article here.
Organizations want to get the best out of their agents by maximizing their utilization, distributing work evenly, and providing enough breaks between calls. Least active routing, formerly known as most-idle routing, is an assignment strategy that can help achieve this. It assigns work to agents based on when they ended their last conversation. It gives agents who are working on longer or more complex conversations a chance to take a break, while distributing new conversations to other agents. Doing so helps improve workforce utilization and engagement.
How does least active routing help?
Contoso Health is a multinational healthcare and insurance provider. It has a large customer support organization covering more than 20 product lines, handled by more than 5,000 agents worldwide. Customers call the contact center to get their queries resolved.
In the contact center, an agent can talk to only one customer at a time. Eugenia, the director of customer support at Contoso, observes that some of her agents are utilized up to 95% of their schedule, while others are occupied for only 70-75%, and she wants to solve this problem. While doing that, she also wants to make sure not to impact key metrics like customer satisfaction and SLAs. She comes across the least active routing assignment method and tries it for a queue.
Kayla and Finn are two agents working in a voice queue. Kayla has a call that comes in at 1:00 PM. Finn takes a call at 1:05 PM. Kayla’s issue is complex and takes her 15 minutes to close. Finn solves his customer’s problem in five minutes. The next call comes in at 1:20 PM. The round robin method would assign the new call to Kayla since it is her turn, and she is available. But with least active routing, the system considers the idle time of agents and assigns the call to Finn, as his last call ended earlier than Kayla’s. This kind of assignment, which takes idle time into account, improves agent utilization.
Least active routing assignment diagram
The Least active option is available in the Assignment method section of the queue.
Configuration screen for least active routing
The least active routing assignment method is currently available only for voice channel queues and is the default selection for new voice queues. Least active routing can also be used as an Order by condition in the custom assignment methods.
Build custom reports to monitor an agent’s last capacity release time
The least active assignment method works based on when the agent ended his or her last call. This data, the agent’s last call end time or last capacity release time, is available in the Dataverse entity ‘msdyn_agentchannelstateentity’. Organizations can use the Model customization feature in Dynamics 365 Customer Service to build a custom report that provides a view of this data.
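For an ad-hoc look at this data before building a report, the Dataverse TDS (read-only SQL) endpoint can also be queried, provided the table is exposed in your environment. The sketch below is illustrative only: the column names are hypothetical placeholders rather than confirmed attributes of the entity, so verify the actual schema of msdyn_agentchannelstateentity before relying on it.

-- Hedged sketch against the Dataverse TDS (read-only SQL) endpoint.
-- Column names are hypothetical placeholders; check the real attribute
-- names of msdyn_agentchannelstateentity in your environment.
SELECT TOP (50)
    msdyn_agentid,                  -- hypothetical agent reference
    msdyn_lastcapacityreleasetime   -- hypothetical last capacity release time
FROM msdyn_agentchannelstateentity
ORDER BY msdyn_lastcapacityreleasetime DESC;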
This article is contributed. See the original author and article here.
While there are multiple methods for obtaining explicit outbound connectivity to the internet from your virtual machines on Azure, there is also one method for implicit outbound connectivity – default outbound access. When virtual machines (VMs) are created in a virtual network without any explicit outbound connectivity, they are assigned a default outbound public IP address. These IP addresses may seem convenient, but they have a number of issues and therefore are only used as a “last resort”:
These implicit IPs are subject to change which can result in issues later on if any dependency is taken on them.
Their dynamic nature also makes them challenging to use for logs or filtering with network security groups (NSGs).
Because these IPs are not associated with your subscription, they are very difficult to troubleshoot.
This type of open egress does not adhere to Microsoft’s “secure-by-default” model which ensures customers have a strong security policy without having to take additional steps.
Because of these factors, we recently announced the retirement of this type of implicit connectivity, which will be removed for all VMs in subnets created after September 2025.
Private subnet
At Ignite, a new feature—private subnet—was released in preview. This feature will let you prevent this type of insecure implicit connectivity for all subnets with the “default outbound access” parameter set to false. Any virtual machines created on this subnet will be prevented from connecting to the Internet without an explicit outbound method specified. To enable this feature, simply ensure the option is selected when a new subnet is created as shown below.
View of Azure portal with Private subnet selection
Note that currently:
A subnet must be created as private; this parameter cannot be changed following creation.
Certain services will not function on a virtual machine in a private subnet without an explicit method of egress (examples are Windows Activation and Windows Updates).
Both CLI and ARM templates can also be used; PowerShell is in development.
The good news is that there are much better, more scalable, and secure methods for having your VMs access the Internet. The three recommended options—in order of preference—are a NAT gateway, using outbound rules with a public load balancer, or placing a public IP directly on the VM network interface card (NIC).
Diagram of multiple explicit outbound methods
Azure NAT Gateway
NAT Gateway is the best option for connecting outbound to the internet, as it is specifically designed to provide highly secure and scalable outbound connectivity. NAT Gateway enables all instances within private subnets of Azure virtual networks to remain fully private and source network address translate (SNAT) to a static public IP address. No connections sourced directly from the internet are permitted through a NAT gateway. NAT Gateway also provides on-demand SNAT port allocation to all virtual machines in its associated subnets. Since ports are not allocated in fixed amounts to each VM, you don’t need to know the exact traffic patterns of each VM. To learn more, take a look at our documentation on SNATing with NAT Gateway or our blog that explores a specific outbound connectivity failure scenario where NAT Gateway came to the rescue.
Azure Load Balancer with outbound rules
Another method for explicit outbound connectivity is using public Azure load balancers with outbound rules. To provide outbound connectivity with Azure Load Balancer, you assign a dedicated public IP address or addresses as the frontend IP of the outbound rule. Private instances in the backend pool of the Load balancer then use the frontend IP of the outbound rule to connect outbound in a secure manner, similar to NAT Gateway. However, unlike NAT Gateway, SNAT ports are not allocated dynamically with outbound rules. Rather, using a load balancer requires manual allocation of SNAT ports in fixed amounts to each instance of your backend pool prior to deployment. This manual SNAT port allocation gives you full declarative control over outbound connectivity since you decide the exact amount of ports each VM is allowed. However, this manual allocation creates more overhead management in ensuring that you have assigned the correct amount of SNAT ports needed by your backend instances for connecting outbound. While you can scale your Load balancer by adding more frontend IPs to your outbound rule, this scaling requires you to then re-allocate assigned SNAT ports per backend instance to ensure that you are utilizing the full inventory of SNAT ports available.
Public IP address assignment to virtual machine NICs
Another option for providing explicit outbound connectivity from an Azure virtual network is the assignment of a public IP address directly to the NIC of a virtual machine. Customers that want to have control over which public IP address their VMs use for connecting to specific destination endpoints may benefit from assigning public IPs directly to the VM NIC. However, customers with more complex and dynamic workloads that need a many to one SNATing relationship that scales with their traffic volume should look to NAT Gateway or Load Balancer.
As mentioned earlier, default outbound access is enabled only when no other explicit outbound connectivity method exists. Note that there is an order of priority among the explicit methods, shown in the flowchart below. In other words, if your VM has multiple forms of outbound connectivity defined, then the higher-priority one will be used (e.g., NAT Gateway takes precedence over a public IP address attached to the VM NIC).
To check the form of outbound your deployments are using, you can refer to the flowchart below which lists them in the order of precedence.
Flowchart showing priority order for different outbound methods