This article is contributed. See the original author and article here.
Welcome to the Viva Glint newsletter. These recurring communications coincide with platform releases and enhancements to help you get the most out of the Viva Glint product. You can access the current newsletter and past editions on the Viva Glint blog.
The Glint Customer Experience Survey is live!
We’re excited to announce that Glint’s Customer Experience Survey is now available. Your input is essential to our ability to provide a world-class experience for our customers and helps us to improve our product, customer support, and our Viva Glint resources.
If you participated in this survey previously, you may notice this cycle has been streamlined and feels a bit different. We appreciate you taking a few minutes to share your thoughts. The survey will take five minutes to complete and closes on Friday, November 8.
Viva Glint Admins can modify predefined Glint product roles. This new capability within the User Roles feature reduces the time required to assign roles and reduces the necessity to create new roles. Learn more in Viva Glint User Roles.
Hide the Comments report export feature for any program cycle. Disabling this feature improves confidentiality measures by decreasing the risk of matching survey data to a specific survey respondent. Learn more in Reporting Setup.
More enhancements for PDF exports. With this release, the enhanced technology for exporting PDF feedback reports, released for recurring and ad hoc survey programs last month, is now in place for 360 feedback reports and Focus Area reports. Read more.
View and manage users’ custom data access. Glint administrators can use a new export feature on the User Roles page to export and view users’ customized data access for survey results and Focus Areas. Use the exported file as a guide to upload new custom access in bulk in Advanced Configuration. Learn more.
Upcoming events
Ask the Experts | November 12
Our next session in this popular series focuses on choosing the right benchmark comparison for your survey results. Good comparison choices for feedback reporting are crucial for understanding strengths and opportunities on your team. Bring your questions!
Building Psychological Safety | November 18 Join us for a conversation with Dr. Julie Morris to learn how to identify signs of psychological safety and what actions you can take to improve it on your team. Please invite your managers to this session!
Viva Community Call: Microsoft HR is Using Viva and M365 Copilot to Empower Employees
This webinar explored how Microsoft HR leverages the power of Microsoft Viva to communicate, provide opportunities for skilling and development, and measure success around M365 Copilot adoption and impact at Microsoft. Watch the video here.
Exciting new resource for all stakeholders
Are you looking to build a holistic employee listening ecosystem? Review this guide from the Viva People Science team to foster employee engagement and better performance. Check out the eBook here.
Enhancing User Experience with Timely Responses
Building a conversational bot using Azure Bot Composer offers a myriad of possibilities to create a seamless and engaging user experience. One such feature that can significantly enhance user interaction is introducing a custom delay between two messages. This small yet impactful addition can mimic human-like pauses, making conversations feel more natural and thoughtful.
This blog will guide you through the steps to introduce custom delays between messages in Azure Bot Composer.
Why Introduce a Delay?
Introducing delays between messages can serve several purposes:
Natural Flow: Mimics human conversation, making interactions feel less robotic.
Attention Management: Gives users time to read and process information before moving on to the next message.
Contextual Relevance: Helps in maintaining the context, especially in scenarios where the bot provides detailed explanations or instructions.
Expected wait time: Sometimes the bot needs to make an outbound call to an external service and wait for the response, for example when fetching a token. In such scenarios, an intentional delay gives the call time to complete before the conversation continues.
Setting Up Azure Bot Composer
Before we dive into introducing custom delays, ensure you have Azure Bot Composer installed and set up. You can download it from the official GitHub repository and follow the installation instructions provided.
Step 1: Open Your Dialog
Launch Azure Bot Composer and open your existing bot project or create a new one. Navigate to the dialog where you want to introduce the delay.
Step 2: Add a New Action
Within your dialog, click on the ‘+ Add’ button to insert a new action. From the list of available actions, select ‘Send a response’. This is the message into which you want to introduce the delay. Click on the view source code option for the response and add a delay activity, so that it looks like this:
[Activity
Type = delay
Value = "5000"
]
The value is specified in milliseconds, so "5000" pauses the conversation for five seconds before the next message is sent.
Enter the message text you want to send after the delay. This could be any text, such as a follow-up question or additional information.
By default, the typing activity lasts for a short duration. To customize the delay, adjust the duration of the typing activity: click on the typing activity and set the desired duration (in milliseconds) in the properties pane. For example, setting it to 3000 milliseconds will introduce a 3-second delay. Make sure to keep this value below 15 seconds.
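As a sketch of what such a send/delay/send sequence looks like in the dialog's underlying source, a Composer .dialog fragment might be represented roughly as follows (the $kind value follows the Bot Framework adaptive dialog schema; the message texts are illustrative, not from the original article):

```json
{
  "actions": [
    {
      "$kind": "Microsoft.SendActivity",
      "activity": "Here is the first part of the answer."
    },
    {
      "$kind": "Microsoft.SendActivity",
      "activity": "[Activity\n  Type = delay\n  Value = 5000\n]"
    },
    {
      "$kind": "Microsoft.SendActivity",
      "activity": "And here is the follow-up, sent after a 5-second pause."
    }
  ]
}
```

The middle action sends a delay activity rather than a visible message, so the user simply experiences a pause between the two texts.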
Step 3: Test Your Bot
Once you have configured the delay and follow-up message, it’s time to test your bot. Click on the ‘Test in Emulator’ button to launch the Bot Framework Emulator. Interact with your bot to ensure that the delay is working as expected, and the messages are being sent in the correct sequence.
Conclusion
Introducing custom delays between messages in Azure Bot Composer is a simple yet powerful way to enhance user experience. By following the steps outlined in this guide, you can create more natural and engaging conversations that keep users interested and informed.
We’re expanding our ambition to bring AI-first business process to organizations. First, we’re announcing that the ability to create autonomous agents with Microsoft Copilot Studio will be available in public preview in November 2024. Learn more on the Copilot Studio blog.
Second, we’re introducing 10 new autonomous agents in Microsoft Dynamics 365 to build capacity for sales, service, finance, and supply chain teams. These agents are designed to help you accelerate your time to value and are configured to scale operational efficiency and elevate customer experiences across roles and functions.
Scale your team with new autonomous agents
Discover more ways to drive impact with autonomous agents and Copilot Studio.
Microsoft Copilot is your AI assistant—it works for you—and Copilot Studio enables you to easily create, manage, and connect agents to Copilot. Think of agents as the new apps for an AI-powered world. We envision organizations will have a constellation of agents—ranging from simple prompt-and-response to fully autonomous. They will work on behalf of an individual, team, or function to execute and orchestrate business processes ranging from lead generation, to sales order processing, to confirming order deliveries. Copilot is how you’ll interact with these agents.
Introducing autonomous agents for Dynamics 365
New autonomous agents enable customers to move from legacy line of business applications to AI-first business process. AI is today’s return on investment (ROI) and tomorrow’s competitive edge. These new agents are designed to help sales, service, finance, and supply chain teams drive business value—and are just the start. We will create many more agents in the coming year that give customers the competitive advantage they need to help future-proof their organization. Today, we’re introducing ten of these autonomous agents which will start to become available in public preview later in 2024 and continue into early 2025.
Sales: Help sellers focus time on building customer relationships to close deals faster
Agents will help sellers focus time on engaging customers to move through the sales cycle faster. The Sales Qualification Agent for Microsoft Dynamics 365 Sales can free up time for the seller to spend on higher value activities by researching and prioritizing inbound leads in the pipe and developing personalized sales emails to initiate a sales conversation.
For small to medium-sized businesses, the Sales Order Agent for Microsoft Dynamics 365 Business Central will automate the order intake process from entry to confirmation by interacting with customers, capturing their preferences. See Sales Order Agent in action.
Operations: Empower teams to grow the business, optimize process, and meet customer demand
To maintain smooth business operations, it’s crucial that processes in key areas such as finance, procurement, and supply chain are optimized to minimize cost, mitigate risks, and accelerate decisions. Autonomous agents operate around the clock to execute a range of processes, helping professionals spend less time on manual work and more time on strategic tasks like planning and decision making.
The Supplier Communications Agent for Microsoft Dynamics 365 Supply Chain Management autonomously manages collaboration with suppliers to confirm order delivery, while helping to preempt potential delays. With agents performing all the tasks related to confirming purchase orders, procurement specialists can focus on managing supplier relationships and improving overall supply chain resiliency.
Additional agents:
Financial Reconciliation Agent for Microsoft 365 Copilot for Finance helps teams prepare and cleanse data sets to simplify and reduce time spent on the most labor-intensive part of the financial period close process that leads to financial reporting. Learn more in this brief video.
Account Reconciliation Agent for Microsoft Dynamics 365 Finance, designed for accountants and controllers, automates the matching and clearing of transactions between subledgers and the general ledger, helping them speed up the financial close process. This enhances cash flow visibility and can result in faster decisions to drive business performance. Watch this video to learn more.
Time and Expense Agent for Microsoft Dynamics 365 Project Operations autonomously manages time entry, expense tracking, and approval workflows. It helps get invoices to customers promptly, preventing revenue leakage and helps ensure projects stay on track and within budget. See Time and Expense Agent in action.
Service: Transform customer experiences across self- and human-assisted service
Contact centers face interconnected, compounding challenges to successfully and efficiently serve customers. For example, keeping vital knowledge base articles current relies on manual processes. Valuable insights from seasoned customer service representatives are often locked away in chat logs, call recordings, case notes, and other data silos. And self-service tools rely on inflexible, hard-coded dialog with embedded knowledge that must be predefined for potential customer issues.
The Customer Intent and Customer Knowledge Management Agents, available for Microsoft Dynamics 365 Customer Service and Microsoft Dynamics 365 Contact Center, help contact centers transform customer experiences across self-service and human-assisted service. The Customer Intent Agent enables evergreen self-service by continuously discovering new intents from past and current customer conversations across all channels, mapping issues and corresponding resolutions maintained by the agent in a library. The Customer Knowledge Management Agent helps ensure knowledge articles are kept perpetually up to date by analyzing case notes, transcripts, summaries, and other artifacts from human-assisted cases to uncover insights.
Additional agents:
Case Management Agent for Customer Service automates key tasks throughout the case lifecycle—creation, resolution, follow up, closure—to reduce handle time and alleviate the burden on service representatives. See Case Management Agent in action.
Scheduling Operations Agent for Microsoft Dynamics 365 Field Service enables dispatchers to provide optimized schedules for technicians, even as conditions change throughout the workday—for example, accounting for issues such as traffic delays, double bookings, or last-minute cancellations that often result in conflicts or gaps.
Collectively, these agents are trained to autonomously learn to address new and emerging issues via self-service, improve the quality of issue resolution across channels and help drive time and cost savings.
As agents become more prevalent in the enterprise, customers want to be confident that they have robust data governance and security. The agents coming to Dynamics 365 follow our core security, privacy, and responsible AI commitments. Agents built in Copilot Studio include guardrails and controls established by maker-defined instructions, knowledge, and actions. The data sources linked to the agent adhere to stringent security measures and controls—all managed in Copilot Studio. This includes data loss prevention, robust authentication protocols, and more. Once these agents are created, IT administrators can apply a comprehensive set of features to govern their use.
Virtual machines deployed in Azure used to have Default Outbound Internet Access. Until now, this has allowed virtual machines to connect to resources on the internet (including public endpoints of Azure PaaS services) even if cloud administrators have not explicitly configured any outbound connectivity method for their virtual machines. Implicitly, Azure’s network stack performed source network address translation (SNAT) with a public IP address provided by the platform.
As part of its commitment to increasing security for customer workloads, Microsoft will deprecate Default Outbound Internet Access on 30 September 2025 (see the official announcement here). From that date, customers will need to explicitly configure an outbound connectivity method if their virtual machines require internet connectivity. Customers will have the following options:
Assign a dedicated Public IP address to the virtual machine’s NIC.
Attach a NAT Gateway to the virtual machine’s subnet.
Use an Azure Load Balancer with outbound rules.
Deploy a Network Virtual Appliance (NVA) to perform SNAT, such as Azure Firewall, and route internet-bound traffic to the NVA before egressing to the internet.
Today, customers can start preparing their workloads for the updated platform behavior. By setting the property defaultOutboundAccess to false during subnet creation, VMs deployed to this subnet will not benefit from the conventional default outbound access method, but adhere to the new conventions. Subnets with this configuration are also referred to as ‘private subnets’.
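As a sketch, the subnet definition then carries the flag explicitly. The ARM-style representation of such a subnet might look roughly like this (the subnet name and address range here are illustrative):

```json
{
  "name": "snet-vm",
  "properties": {
    "addressPrefix": "10.0.1.0/24",
    "defaultOutboundAccess": false
  }
}
```

With defaultOutboundAccess set to false, VMs in this subnet receive no platform-provided SNAT and must rely on an explicitly configured outbound connectivity method.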
In this article, we demonstrate (a) the limited connectivity of virtual machines deployed to private subnets. We also explore different options to (b) route traffic from these virtual machines to the public internet and to (c) optimize the communication path for management and data plane operations targeting public endpoints of Azure services.
We will be focusing on connectivity with Azure services’ public endpoints. If you use Private Endpoints to expose services to your virtual network instead, routing in a private subnet remains unchanged.
Overview
The following architecture diagram presents the sample setup that we’ll use to explore the network traffic with different components.
The setup comprises following components:
A virtual network with a private subnet (i.e., a subnet that does not offer default outbound connectivity to the internet).
A virtual machine (running Ubuntu Linux) connected to this subnet.
A Key Vault including a stored secret, as a sample Azure PaaS service to explore Azure-bound connectivity.
A Log Analytics Workspace, storing audit information (i.e., metadata of all control and data plane operations) from that Key Vault.
A Bastion Host to securely connect to the virtual machine via SSH.
In the following sections, we will integrate following components to control the network traffic and explore the effects on communication flow:
An Azure Firewall as central Network Virtual Appliance to route outbound internet traffic.
An Azure Load Balancer with Outbound Rules to route Azure-bound traffic through the Azure Backbone (we’ll use the Azure Resource Manager in this example).
A Service Endpoint to route data plane operations directly to the service.
We’ll use following examples to illustrate the communication paths:
A simple HTTP call to ifconfig.io which (if successful) will return the public IP address that will be used to make calls to public internet resources.
An invocation of the Azure CLI to get Key Vault metadata (az keyvault show), which (if successful) will return information about the Key Vault resources. This call to the Azure Resource Manager represents a management plane operation.
An invocation of the Azure CLI to get a secret stored in the Key Vault (az keyvault secret show), which (if successful) will return a secret stored in the Key Vault. This represents a data plane operation.
A query to the Key Vault’s audit log (stored in the Log Analytics Workspace), to reveal the IP address of the caller for management and data plane operations.
Prerequisites
The repository Azure-Samples/azure-networking_private-subnet-routing on GitHub contains all required Infrastructure as Code assets, allowing you to easily reproduce the setup and exploration in your own Azure subscription.
jq to parse and process JSON input (find installation instructions here)
Git repository
Clone the Git repository and cd into its repository root.
$ git clone https://github.com/Azure-Samples/azure-networking_private-subnet-routing
$ cd azure-networking_private-subnet-routing
Azure subscription
Login to your Azure subscription via Azure CLI and ensure you have access to your subscription.
$ az login
$ az account show
Getting ready: Deploy infrastructure.
We kick off our journey by deploying the infrastructure depicted in the architecture diagram above; we’ll do that using the IaC (Infrastructure as Code) assets from the repository.
Open the file terraform.tfvars in your favorite code editor, and adjust the values of the variables location (the region to which all resources will be deployed) and prefix (the shared name prefix for all resources). Also don’t forget to provide login credentials for your VM by setting values for admin_username and admin_password.
Set the environment variable ARM_SUBSCRIPTION_ID to point terraform to the subscription you are currently logged on to.
$ export ARM_SUBSCRIPTION_ID=$(az account show --query "id" -o tsv)
Using your CLI and terraform, deploy the demo setup:
$ terraform init
Initializing the backend…
[…]
Terraform has been successfully initialized!
$ terraform apply
[…]
Do you want to perform these actions?
Terraform will perform the actions described above.
Only ‘yes’ will be accepted to approve.
Enter a value: yes
[…]
Apply complete!
[…]
☝️ In case you are not familiar with Terraform, this tutorial might be insightful for you.
Explore the deployed resources in the Azure Portal. Note that although the network infrastructure components shown in the architecture drawing above are already deployed, they are not yet configured for use from the Virtual Machine:
The Azure Firewall is deployed, but the route table attached to the VM subnet does not (yet) have any route directing traffic to the firewall (we will add this in Scenario 2).
The Azure Load Balancer is already deployed, but the virtual machine is not yet member of its backend pool (we will change this in Scenario 3).
Log in to the Virtual Machine using the Bastion Host.
At this point, our virtual machine is deployed to a private subnet. As we do not have any outbound connectivity method set up, all calls to public internet resources as well as to the public endpoints of Azure resources will time out.
Test 1: Call to public internet
$ curl ifconfig.io --connect-timeout 10
curl: (28) Connection timed out after 10004 milliseconds
Test 2: Call to Azure Resource Manager
$ curl https://management.azure.com/ --connect-timeout 10
curl: (28) Connection timed out after 10001 milliseconds
Test 3: Call to Azure Key Vault (data plane)
$ curl https://no-doa-demo-kv.vault.azure.net/ --connect-timeout 10
curl: (28) Connection timed out after 10002 milliseconds
Scenario 2: Route all traffic through Azure Firewall.
Typically, customers deploy a central Firewall in their network to ensure all outbound traffic is consistently SNATed through the same public IPs and all outbound traffic is centrally controlled and governed. In this scenario, we therefore modify our existing route table and add a default route (i.e., for CIDR range 0.0.0.0/0), directing all outbound traffic to the private IP of our Azure Firewall.
Add Firewall and routes.
Browse to network.tf, uncomment the definition of azurerm_route.default-to-firewall.
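Conceptually, the uncommented Terraform resource resolves to a route-table entry like the following, shown here in its ARM-style representation (the route name mirrors the Terraform resource; the firewall's private IP matches the one that appears in the effective-routes output later in this article):

```json
{
  "name": "default-to-firewall",
  "properties": {
    "addressPrefix": "0.0.0.0/0",
    "nextHopType": "VirtualAppliance",
    "nextHopIpAddress": "10.254.1.4"
  }
}
```

Because 0.0.0.0/0 is the least-specific prefix, this route catches all traffic that no more-specific route claims and hands it to the firewall.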
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azurerm_route.default-to-firewall will be created
[…]
Test 1: Call to public internet, revealing that outbound calls are routed through the firewall’s public IP.
$ curl ifconfig.io
4.184.163.38
Now that you have access to the internet, install the Azure CLI.
Test 2: Call to Azure Resource Manager (you might need to change the Key Vault name if you changed the prefix in your terraform.tfvars)
$ az keyvault show --name "no-doa-demo-kv" -o table
Location Name ResourceGroup
—————— ————– ————–
germanywestcentral no-doa-demo-kv no-doa-demo-rg
Test 3: Call to Azure Key Vault (data plane)
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType Name Value
————- ——- ————-
message Hello, World!
Query Key Vault Audit Log.
☝️ The ingestion of audit logs into the Log Analytics Workspace might take some time. Please make sure to wait for up to ten minutes before starting to troubleshoot.
Get Application ID of VM’s system-assigned managed identity:
$ ./scripts/vm_get-app-id.sh
AppId for Principal ID f889ca69-d4b0-45a7-8300-0a88f957613e is: 8aa9503c-ee91-43ee-96c7-49dc005ebecc
Go to Log Analytics Workspace, run the following query.
AzureDiagnostics
| where identity_claim_appid_g == "[Replace with App ID!]"
| project TimeGenerated, Resource, OperationName, CallerIPAddress
| order by TimeGenerated desc
Alternatively, run the prepared script kv_query-audit.sh:
🗣 Note that both calls to the Key Vault succeed as they are routed through the central Firewall; both requests (to Azure Management plane and Key Vault data plane) hit their endpoints with the Firewall’s public IP.
Scenario 3: Bypass Firewall for traffic to Azure management plane.
At this point, all internet and Azure-bound traffic to public endpoints is routed through the Azure Firewall. Although this allows you to centrally control all traffic, you might have good reasons to offload some communication from this component by routing traffic targeting a specific IP address range through a different component for SNAT, for example to optimize latency or reduce load on the firewall for communication with well-known hosts.
☝️ As mentioned before, dedicated Public IP addresses, NAT Gateways, and Azure Load Balancers are alternative options to configure SNAT for outbound access. You can find a detailed discussion about all options here.
In this scenario, we assume that we want network traffic to the Azure management plane to bypass the central Firewall (we pick this service for demonstration purposes here). Instead, we want to use the SNAT capabilities of an Azure Load Balancer with outbound rules to route traffic to the public endpoints of the Azure Resource Manager. We can achieve this by adding a more-specific route to the route table, directing traffic targeting the corresponding service tag (which is like a symbolic name comprising a set of IP ranges) to a different destination.
The integration of outbound load balancing rules into the communication path works differently than integrating a Network Virtual Appliance: While we defined the latter by setting the NVA’s private IP address as next hop in our user-defined route in Scenario 2, we only integrate the Load Balancer implicitly into our network flow, by specifying Internet as next hop in our route table. (Essentially, next hop ‘Internet’ instructs Azure to use either (a) the Public IP attached to the VM’s NIC, (b) the Load Balancer associated with the VM’s NIC with the help of an outbound rule, or (c) a NAT Gateway attached to the subnet the VM’s NIC is connected to.) Therefore, we need to take two steps to send traffic through our Load Balancer:
Deploy a more-specific user-defined route for the respective service tag.
Add our VM’s NIC to a load balancer’s backend pool with an outbound load balancing rule.
In our scenario, we’ll do this for the service tag AzureResourceManager, which (amongst others) also comprises the IP addresses for management.azure.com, the endpoint for the Azure control plane. This will affect the az keyvault show operation to retrieve the Key Vault’s metadata.
Browse to network.tf, uncomment the definition of azurerm_route.azurerm_2_internet.
☝️ Note that this route specifies Internet (!) as next hop type for any communication targeting IPs of service tag AzureResourceManager.
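In its ARM-style representation, this route uses the service tag in place of a CIDR prefix, roughly as follows (route name mirrors the Terraform resource):

```json
{
  "name": "azurerm_2_internet",
  "properties": {
    "addressPrefix": "AzureResourceManager",
    "nextHopType": "Internet"
  }
}
```

Azure expands the service tag into the platform-maintained set of IP ranges, so this single route stays current as Microsoft updates the ranges behind AzureResourceManager.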
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azurerm_route.azurerm_2_internet will be created
[…]
(optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault’s data plane) to confirm behavior remains unchanged.
$ curl ifconfig.io
4.184.163.38
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType Name Value
————- ——- ————-
message Hello, World!
Test 2: Call to Azure Resource Manager
$ az keyvault show --name "no-doa-demo-kv" -o table
: Failed to establish a new connection: [Errno 101] Network is unreachable
🗣 While the call to the Key Vault data plane succeeds, the call to the resource manager fails: Route azurerm_2_internet directs traffic to next hop type Internet. However, as the VM’s subnet is private, defining the outbound route is not sufficient and we still need to attach the VM’s NIC to the Load Balancer’s outbound rule.
Instruct Azure to send internet-bound traffic through Outbound Load Balancer
Add virtual machine’s NIC to a backend pool linked with an outbound load balancing rule.
Browse to vm.tf, uncomment the definition of azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb.
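In ARM terms, this association adds a backend-pool reference to the NIC's IP configuration, roughly like the following sketch (the load balancer and pool names are hypothetical; the subscription ID is abbreviated):

```json
{
  "name": "ipconfig1",
  "properties": {
    "loadBalancerBackendAddressPools": [
      {
        "id": "/subscriptions/<subscription-id>/resourceGroups/no-doa-demo-rg/providers/Microsoft.Network/loadBalancers/no-doa-demo-lb/backendAddressPools/outbound-pool"
      }
    ]
  }
}
```

Once the NIC is a member of a pool that an outbound rule targets, next hop type Internet resolves to the load balancer's public IP for SNAT.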
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb will be created
[…]
(optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault’s data plane) to confirm behavior remains unchanged.
Repeat Test 2: Call to Azure Resource Manager
$ az keyvault show --name "no-doa-demo-kv" -o table
Location Name ResourceGroup
—————— ————– —————
germanywestcentral no-doa-demo-kv no-doa-demo-rg
🗣 After adding the NIC to the backend of the outbound load balancer, routes with next hop type Internet will use the load balancer for outbound traffic. As we specified Internet as next hop type for AzureResourceManager, the VaultGet operation is now hitting the management plane from the load balancer’s public IP. (Communication with the Key Vault data plane remains unchanged; the SecretGet operation still hits the Key Vault from the Firewall’s public IP.)
☝️ We explored this path for the platform-defined service tag AzureResourceManager. However, it’s equally possible to define this communication path for your self-defined IP addresses or ranges.
Scenario 4: Add ‘shortcut’ for traffic to Key Vault data plane.
For communication with many platform services, Azure offers customers Virtual Network Service Endpoints to enable an optimized connectivity method that keeps traffic on its backbone network. Customers can use this, for example, to offload traffic to platform services from their network resources and increase security by enabling access restrictions on their resources.
☝️ Note that service endpoints are not specific to individual resource instances; they will enable optimized connectivity for all deployments of this resource type (across different subscriptions, tenants, and customers). You may want to make sure to deploy complementing firewall rules to your resource as an additional layer of security.
In this scenario, we’ll deploy a service endpoint for Azure Key Vault. We’ll see that the platform will no longer SNAT traffic to our Key Vault’s data plane but will use the VM’s private IP for communication.
Deploy Service Endpoint for Key Vault
Browse to network.tf, uncomment the definition of serviceEndpoints in azapi_resource.subnet-vm.
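The uncommented block adds a service-endpoint entry to the subnet's properties, which in ARM form looks roughly like this (address range illustrative):

```json
{
  "properties": {
    "addressPrefix": "10.0.1.0/24",
    "serviceEndpoints": [
      {
        "service": "Microsoft.KeyVault"
      }
    ]
  }
}
```

The Microsoft.KeyVault entry instructs the platform to inject the corresponding VirtualNetworkServiceEndpoint system routes into the subnet, which is what we will observe in the effective-routes output below.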
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azapi_resource.subnet-vm will be updated in-place
[…]
(optional) Repeat test 1 (call to public internet) and test 2 (call to Azure management plane) to confirm behavior remains unchanged.
Test 3: Call to Azure Key Vault (data plane)
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType Name Value
————- ——- ————-
message Hello, World!
🗣 After deploying a service endpoint, we see that traffic is hitting the Azure Key Vault data plane from the virtual machine’s private IP address, i.e., not passing through Firewall or outbound load balancer.
Inspect NIC’s effective routes.
Finally, let’s explore how the different connectivity methods show up in the virtual machine’s NIC’s effective routes. Use one of the following options to show them:
In Azure portal, browse to the VM’s NIC and explore the ‘Effective Routes’ section in the ‘Help’ section.
Alternatively, run the provided script (please note that the script will only show the first IP address prefix in the output for brevity).
$ ./scripts/vm-nic_show-routes.sh
Source FirstIpAddressPrefix NextHopType NextHopIpAddress
——– ———————- —————————– ——————
Default 10.0.0.0/16 VnetLocal
User 191.234.158.0/23 Internet
Default 0.0.0.0/0 Internet
Default 191.238.72.152/29 VirtualNetworkServiceEndpoint
User 0.0.0.0/0 VirtualAppliance 10.254.1.4
🗣 See that…
…the system-defined route 191.238.72.152/29 to VirtualNetworkServiceEndpoint is sending traffic to the Azure Key Vault data plane via the service endpoint.
…the user-defined route 191.234.158.0/23 to Internet is implicitly sending traffic to AzureResourceManager via the Outbound Load Balancer (by defining Internet as next hop type for a VM attached to an outbound load balancer rule).
…the user-defined route 0.0.0.0/0 to VirtualAppliance (10.254.1.4) is sending all remaining internet-bound traffic to the Firewall.
Why XML?
XML is widely used across various industries due to its versatility and ability to structure complex data. Some key industries that use XML:
Finance: XML is used for financial data interchange, such as in SWIFT messages for international banking transactions and in various financial reporting standards.
Healthcare: XML is used in healthcare for data exchange standards like HL7, which facilitates the sharing of clinical and administrative data between healthcare providers.
Supply Chain: XML is used in supply chain management for data interchange, such as in Electronic Data Interchange (EDI) standards.
Government: Multiple government entities use XML for various data management and reporting tasks.
Legal: XML is used in the legal industry to organize and manage documents, making it easier to find and manage information.
To provide continuous support to our customers in these industries, Microsoft has always provided strong capabilities for integration with XML workloads. For instance, XML was a first-class citizen in BizTalk Server. Now, despite the pervasiveness of the JSON format, we continue working to make Azure Logic Apps the best alternative for our BizTalk Server customers and customers using XML based workloads.
The XML Operations connector
We have recently added two actions to the XML Operations connector: Parse with schema and Compose with schema. With this addition, Logic Apps customers can now interact with the token picker at design time. The tokens are generated from the XML schema provided by the customer. As a result, the XML document and its contained properties can be easily accessed, created, and manipulated in the workflow.
XML parse with schema
The XML Parse with schema action allows customers to parse XML data using an XSD file (an XML schema file). XSD files need to be uploaded to the Logic App’s schemas artifacts or an Integration Account. Once they have been uploaded, you need to provide your XML content, the source of the schema, and the name of the schema file. The XML content may either be provided in-line or selected from previous operations in the workflow using the token picker.
Based on the provided XML schema, tokens such as the following will be available to subsequent operations upon saving the workflow:
In the output, the Body field contains a wrapper ‘json’ property, so that additional properties may be provided besides the translated XML content, such as any parsing warning messages. To ignore the additional properties, you may pick the ‘json’ property instead.
You may also select the token for each individual property of the XML document, as these tokens are generated from the provided XML schema.
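To illustrate the output shape described above, suppose the schema defines a Customer element with Name and Id children (hypothetical names, not from the original article). The action's output would then look roughly like this, with the translated XML content nested under the wrapper 'json' property:

```json
{
  "body": {
    "json": {
      "Customer": {
        "Name": "Contoso",
        "Id": "42"
      }
    }
  }
}
```

Picking the 'json' token in a subsequent action therefore yields only the translated content, without any sibling properties such as parsing warnings.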
XML compose with schema
The XML Compose with schema action allows customers to generate XML data using an XSD file. XSD files need to be uploaded to the Logic App’s schemas artifacts or an Integration Account. Once they have been uploaded, you should select the XSD file and enter the JSON root element or elements of your input XML schema. The JSON input elements will be dynamically generated based on the selected XML schema.
You can also switch to Array and pass an entire array for Customers and another for Orders:
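For example, assuming the schema defines Customers and Orders elements with the child properties shown here (all names and values hypothetical), the JSON input passed to Compose with schema in array mode might look like:

```json
{
  "Customers": [
    { "Name": "Contoso", "Id": "1" },
    { "Name": "Fabrikam", "Id": "2" }
  ],
  "Orders": [
    { "OrderId": "1001", "CustomerId": "1" }
  ]
}
```

Each array element is then serialized into a repeating XML element according to the uploaded XSD.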
Please watch the following video for a complete demonstration of this new feature.