Part 2 – Observability for your azd-compatible app

This article is contributed. See the original author and article here.

In Part 1, I walked you through how to azdev-ify a simple Python app. In this post, we will:



  • add the Azure resources to enable the observability features in azd

  • add manual instrumentation code in the app 

  • create a launch.json file to run the app locally and make sure we can send data to Application Insights

  • deploy the app to Azure


 


Previously…


We azdev-ified a simple Python app, TheCatSaidNo, and deployed it to Azure. Don’t worry if you have already deleted everything. I have updated the code for part 1 because of the Bicep modules improvements we shipped in the azure-dev-cli_0.4.0-beta.1 release. You don’t need to update your code; just start from my GitHub repository (branch: part1):



  1. Make sure you have the prerequisites installed:


  2. In a new empty directory, run 

    azd up -t https://github.com/puicchan/theCatSaidNo -b part1

    If you run `azd monitor --overview` at this point, you will get an error: “Error: application does not contain an Application Insights dashboard.” That’s because we didn’t create any Azure Monitor resources in part 1.




 


Step 1 – add Application Insights


The Azure Developer CLI (azd) provides a monitor command to help you get insight into how your applications are performing so that you can proactively identify issues. We need to first add the Azure resources to the resource group created in part 1.



  1. Refer to a sample, e.g., ToDo Python Mongo. Copy the directory /infra/core/monitor to your /infra folder.

  2. In main.bicep: add the following parameters. If you want to override the default azd naming convention, provide your own values here. This is new since version 0.4.0-beta.1. 

    param applicationInsightsDashboardName string = ''
    param applicationInsightsName string = ''
    param logAnalyticsName string = ''


  3. Add the call to monitoring.bicep in /core/monitor

    // Monitor application with Azure Monitor
    module monitoring './core/monitor/monitoring.bicep' = {
      name: 'monitoring'
      scope: rg
      params: {
        location: location
        tags: tags
        logAnalyticsName: !empty(logAnalyticsName) ? logAnalyticsName : '${abbrs.operationalInsightsWorkspaces}${resourceToken}'
        applicationInsightsName: !empty(applicationInsightsName) ? applicationInsightsName : '${abbrs.insightsComponents}${resourceToken}'
        applicationInsightsDashboardName: !empty(applicationInsightsDashboardName) ? applicationInsightsDashboardName : '${abbrs.portalDashboards}${resourceToken}'
      }
    }


  4. Pass the Application Insights name as a param to appservice.bicep in the web module: 

    applicationInsightsName: monitoring.outputs.applicationInsightsName


  5. Add output for the Application Insights connection string to make sure it’s stored in the .env file:

    output APPLICATIONINSIGHTS_CONNECTION_STRING string = monitoring.outputs.applicationInsightsConnectionString


  6. Here’s the complete main.bicep

    targetScope = 'subscription'
    
    @minLength(1)
    @maxLength(64)
    @description('Name of the environment which is used to generate a short unique hash used in all resources.')
    param environmentName string
    
    @minLength(1)
    @description('Primary location for all resources')
    param location string
    
    // Optional parameters to override the default azd resource naming conventions. Update the main.parameters.json file to provide values. e.g.,:
    // "resourceGroupName": {
    //      "value": "myGroupName"
    // }
    param appServicePlanName string = ''
    param resourceGroupName string = ''
    param webServiceName string = ''
    param applicationInsightsDashboardName string = ''
    param applicationInsightsName string = ''
    param logAnalyticsName string = ''
    // serviceName is used as value for the tag (azd-service-name) azd uses to identify the service
    param serviceName string = 'web'
    
    @description('Id of the user or app to assign application roles')
    param principalId string = ''
    
    var abbrs = loadJsonContent('./abbreviations.json')
    var resourceToken = toLower(uniqueString(subscription().id, environmentName, location))
    var tags = { 'azd-env-name': environmentName }
    
    // Organize resources in a resource group
    resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
      name: !empty(resourceGroupName) ? resourceGroupName : '${abbrs.resourcesResourceGroups}${environmentName}'
      location: location
      tags: tags
    }
    
    // The application frontend
    module web './core/host/appservice.bicep' = {
      name: serviceName
      scope: rg
      params: {
        name: !empty(webServiceName) ? webServiceName : '${abbrs.webSitesAppService}web-${resourceToken}'
        location: location
        tags: union(tags, { 'azd-service-name': serviceName })
        applicationInsightsName: monitoring.outputs.applicationInsightsName
        appServicePlanId: appServicePlan.outputs.id
        runtimeName: 'python'
        runtimeVersion: '3.8'
        scmDoBuildDuringDeployment: true
      }
    }
    
    // Create an App Service Plan to group applications under the same payment plan and SKU
    module appServicePlan './core/host/appserviceplan.bicep' = {
      name: 'appserviceplan'
      scope: rg
      params: {
        name: !empty(appServicePlanName) ? appServicePlanName : '${abbrs.webServerFarms}${resourceToken}'
        location: location
        tags: tags
        sku: {
          name: 'B1'
        }
      }
    }
    
    // Monitor application with Azure Monitor
    module monitoring './core/monitor/monitoring.bicep' = {
      name: 'monitoring'
      scope: rg
      params: {
        location: location
        tags: tags
        logAnalyticsName: !empty(logAnalyticsName) ? logAnalyticsName : '${abbrs.operationalInsightsWorkspaces}${resourceToken}'
        applicationInsightsName: !empty(applicationInsightsName) ? applicationInsightsName : '${abbrs.insightsComponents}${resourceToken}'
        applicationInsightsDashboardName: !empty(applicationInsightsDashboardName) ? applicationInsightsDashboardName : '${abbrs.portalDashboards}${resourceToken}'
      }
    }
    
    // App outputs
    output AZURE_LOCATION string = location
    output AZURE_TENANT_ID string = tenant().tenantId
    output REACT_APP_WEB_BASE_URL string = web.outputs.uri
    output APPLICATIONINSIGHTS_CONNECTION_STRING string = monitoring.outputs.applicationInsightsConnectionString


  7. Run `azd provision` to provision the additional Azure resources.

  8. Once provisioning is complete, run `azd monitor --overview` to open the Application Insights dashboard in the browser.

    The dashboard is not that exciting yet. Auto-instrumentation application monitoring is not yet available for Python apps. However, if you examine your code, you will see that:



    • APPLICATIONINSIGHTS_CONNECTION_STRING is added to the .env file for your current azd environment.

    • The same connection string is added to the application settings in the configuration of your web app in the Azure portal.




 


Step 2 – manually instrumenting your app


Let’s track incoming requests with OpenCensus Python and instrument the application with the Flask middleware so that incoming requests sent to your app are tracked. (To learn more about what Azure Monitor supports, refer to setting up Azure Monitor for your Python app.)


 


For this step, I recommend using Visual Studio Code and the following extensions:



Get Started Tutorial for Python in Visual Studio Code is a good reference if you are not familiar with Visual Studio Code.


 



  1. Add to requirements.txt

    python-dotenv
    opencensus-ext-azure >= 1.0.2
    opencensus-ext-flask >= 0.7.3
    opencensus-ext-requests >= 0.7.3


  2. Modify app.py to: 

    import os
    
    from dotenv import load_dotenv
    from flask import Flask, render_template, send_from_directory
    from opencensus.ext.azure.trace_exporter import AzureExporter
    from opencensus.ext.flask.flask_middleware import FlaskMiddleware
    from opencensus.trace.samplers import ProbabilitySampler
    
    # Note: this is the full connection string, not just an instrumentation key.
    CONNECTION_STRING = os.environ.get("APPLICATIONINSIGHTS_CONNECTION_STRING")
    
    app = Flask(__name__)
    middleware = FlaskMiddleware(
        app,
        exporter=AzureExporter(connection_string=CONNECTION_STRING),
        sampler=ProbabilitySampler(rate=1.0),  # sample 100% of requests
    )
    
    
    @app.route("/favicon.ico")
    def favicon():
        return send_from_directory(
            os.path.join(app.root_path, "static"),
            "favicon.ico",
            mimetype="image/vnd.microsoft.icon",
        )
    
    
    @app.route("/")
    def home():
        return render_template("home.html")
    
    
    if __name__ == "__main__":
        app.run(debug=True)


  3. To run locally, we need to read from the .env file to get the current azd environment context. The easiest way is to customize Run and Debug in Visual Studio Code by creating a launch.json file:

    • Ctrl-Shift+D or click “Run and Debug” in the sidebar

    • Click “create a launch.json file” to customize a launch.json file

    • Select “Flask: Launch and debug a Flask web application”

    • Modify the generated file to: 

      {
          // Use IntelliSense to learn about possible attributes.
          // Hover to view descriptions of existing attributes.
          // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
          "version": "0.2.0",
          "configurations": [
              {
                  "name": "Python: Flask",
                  "type": "python",
                  "request": "launch",
                  "module": "flask",
                  "env": {
                      "FLASK_APP": "app.py",
                      "FLASK_DEBUG": "1"
                  },
                  "args": [
                      "run",
                      "--no-debugger",
                      "--no-reload"
                  ],
                  "jinja": true,
                  "justMyCode": true,
                  "envFile": "${input:dotEnvFilePath}"
              }
          ],
          "inputs": [
              {
                  "id": "dotEnvFilePath",
                  "type": "command",
                  "command": "azure-dev.commands.getDotEnvFilePath"
              }
          ]
      }




  4. Create and activate a new virtual environment. I am using Windows, so: 

    py -m venv .venv
    .venv\Scripts\activate
    pip3 install -r ./requirements.txt


  5. Click the Run view in the sidebar and hit the play button for Python: Flask

    • Browse to http://localhost:5000 to launch the app.

    • Click the button a few times and/or reload the page to generate some traffic.


    Take a break; perhaps play with your cat or dog for real. The data will take a short while to show up in Application Insights.



  6. Run `azd monitor --overview` to open the dashboard and notice the change.

  7. Run `azd deploy` to deploy your app to Azure and start monitoring your app!
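If you prefer a plain terminal over VS Code’s launch.json, you can read the azd environment file yourself before starting Flask. The sketch below assumes azd’s usual layout of .azure/&lt;environment-name&gt;/.env; `load_azd_env` is a hypothetical helper, not part of azd (python-dotenv’s `load_dotenv` does the same job if you point it at that path):

```python
import os

def load_azd_env(env_file):
    """Minimal .env reader: KEY=VALUE or KEY="VALUE" lines; '#' comments skipped."""
    values = {}
    with open(env_file) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"')
    return values

# Example (path follows the azd convention; adjust the environment name):
# os.environ.update(load_azd_env(os.path.join(".azure", "my-env", ".env")))
```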


 


Get the code for this blog post here. Next, we will explore how you can use `azd pipeline config` to set up a GitHub Actions workflow that deploys updates whenever you check in code.


 


Feel free to run `azd down` to clean up all the Azure resources. As you saw, it’s easy to get things up and running again. Just `azd up`!


 


We love your feedback! If you have any comments or ideas, feel free to add a comment or submit an issue to the Azure Developer CLI Repo.

!! Announcement !! Public Preview of SWIFT message processing using Azure Logic Apps



SWIFT message processing using Azure Logic Apps


 


We are very excited to announce the public preview of the SWIFT MT encoder and decoder for Azure Logic Apps. This release enables customers to process SWIFT-based payment transactions with Logic Apps Standard and build cloud-native applications with full security, isolation, and VNET integration.


 


What is SWIFT?


The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is a global member-owned cooperative that provides a secure network enabling financial institutions worldwide to send and receive financial transactions in a safe, standardized, and reliable environment. The SWIFT group develops several message standards to support business transactions in the financial market. One of the longest-established and most widely used formats supported by the financial community is SWIFT MT, which is used by SWIFT’s proprietary FIN messaging service.


 


The SWIFT network is used globally by more than 11,000 financial institutions in 200 regions and countries. These institutions pay SWIFT annual fees as well as charges based on the volume of financial transactions they process. Failures in processing on the SWIFT network create delays and result in penalties. This is where Logic Apps enables customers to send and receive these transactions per the standard, as well as proactively address these issues.


 


Azure Logic Apps enables you to easily create SWIFT workloads and automate their processing, thereby reducing errors and costs. With Logic Apps Standard, these workloads can run in the cloud or in isolated environments within a VNET. With built-in and Azure connectors, we offer 600+ connectors to a variety of applications, on-premises or in the cloud. Logic Apps is a gateway to Azure: with its rich AI and ML capabilities, customers can further create business insights to help their business.


 


SWIFT capabilities in Azure Logic Apps


The SWIFT connector has two actions for MT messages: Encode and Decode. The connector has two key capabilities. The first is transformation of the message from flat file to XML and vice versa. The second is message validation based on the SWIFT guidelines described in the SRG (SWIFT Release Guide). The SWIFT MT actions support the processing of all categories of MT messages.


 


How to use SWIFT in Logic Apps


In this example, we list the steps to receive an MT flat-file message, decode it to MT XML format, and then send it to a downstream application.


 



  1. SWIFT support is only available in the Standard SKU of Azure Logic Apps. Create a Standard logic app.

  2. Add a new workflow. You can choose stateful or stateless workflow.

  3. Create the first step of your workflow which is also the trigger, depending on the source of your MT message. We are using a Request based trigger.

  4. Choose the SWIFT connector under the Built-in tab. Add the action ‘SWIFT Encode’ as the next step. This step will transform the MT XML message (sample is attached) to MT flat-file format.




 




 


By default, the action performs message validation based on the SWIFT Release Guide specification. Validation can be disabled via the Message validation drop-down.
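To make the flat-file-to-XML idea concrete, here is a toy sketch of the first step of such a transformation: splitting the `:tag:value` lines of an MT text block into a field map. This is illustration only, not the connector’s implementation, and it performs none of the SRG validation described above:

```python
def parse_mt_text_block(text_block):
    """Toy illustration: split ':tag:value' lines of a SWIFT MT text block
    into a dict, folding continuation lines into the previous field.
    Real transformation and validation follow the SWIFT Release Guide."""
    fields = {}
    current = None
    for line in text_block.strip().splitlines():
        if line.startswith(":"):
            _, tag, value = line.split(":", 2)
            fields[tag] = value
            current = tag
        elif current is not None:
            fields[current] += "\n" + line  # continuation line
    return fields
```

From a map like this, each field would then be emitted as an XML element and checked against the per-message-type rules; the connector does both in one action.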



  5. For scenarios where you are receiving a SWIFT MT message as a flat file (sample is attached) from the SWIFT network, you can use the SWIFT Decode action to validate and transform the message to MT XML format.


 




 


Advanced Scenarios


For now, you need to contact us if you have any of the scenarios described below. We plan to document them soon, so this is short-term friction.



  • SWIFT processing within VNET

    • To perform message validation, Logic Apps runtime leverages artifacts that are hosted on a public endpoint. If you want to limit calls to the internet, and want to do all the processing within VNET, you need to override the location of those artifacts with an endpoint within your VNET. Please reach out to us and we can share instructions.




 



  • BIC (Bank Identifier Code) validation

    • By default, BIC validation is disabled. If you would like to enable BIC validation, please reach out to us and we can share instructions.



Assess supply chain risk more easily in new workspace



Understanding risk enables businesses to take proactive actions to balance cost and resilience as they optimize their supply chains. The new supply risk assessment workspace in Microsoft Dynamics 365 Supply Chain Management helps supply managers understand the risk of encountering sourcing shortages and delays.

Discover supply chain risk based on performance metrics

The supply risk assessment workspace helps you to discover risks to future planned purchases. The risk assessment considers the past performance of your suppliers or products, using metrics like purchase order delivery dates confirmed as requested, on-time in-full deliveries (OTIF), on-time delivery (OT), and in-full delivery (IF).

The workspace also identifies single-sourced products that didn’t perform as expected so that you can change your order strategy for the future. You can build a supplier and product ranking, analyze it, and filter OTIF metrics over time or against other dimensions, such as delivery method or site.

Explore the supply risk assessment workspace

Begin your exploration in the supply risk assessment workspace, which provides views of products and vendors that fall outside your performance goals. Customize separate goals for OT, IF, OTIF, and other metrics in a dedicated configuration page.

Supply risk assessment workspace

Navigate directly from the workspace to Power BI reports to view product and vendor performance and ranking.

Supplier performance report

With the reports, you can:

  • Use filters to focus on specific legal entities, vendors, items, product groups, and vendor regions
  • Study performance history
  • Zoom in on specific time periods of concern

Supply risk assessment report

  • Identify risks for future purchases by mapping past OTIF observations to planned orders and suppliers
  • Select the most impacted products by potential risks translated into quantity and amount at risk and validate the assigned vendors
  • Drill in on specific products or vendors with their planned amounts and volumes at risk

Enable the workspace in feature management

To take advantage of the new capability, enable Assess supply risks to prevent supply chain disruptions in feature management. You can change default thresholds for your metrics in Supply risk assessment parameters to specify what you consider a risk for your business. By default, the threshold is set to 96%. Then navigate to the Supply risk assessment workspace to start your discovery.
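The arithmetic behind these flags can be sketched in a few lines. The data shapes below are hypothetical, not the product’s internals; the only detail taken from the article is the 96% default threshold:

```python
def otif_rate(deliveries):
    """Share of deliveries that were both on time and in full."""
    hits = sum(1 for d in deliveries if d["on_time"] and d["in_full"])
    return hits / len(deliveries)

def at_risk_vendors(history, threshold=0.96):
    """Vendors whose OTIF rate falls below the threshold
    (default 96%, matching the workspace default)."""
    return sorted(vendor for vendor, deliveries in history.items()
                  if otif_rate(deliveries) < threshold)
```

Lowering the threshold in Supply risk assessment parameters widens or narrows this at-risk set in exactly this way.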

Tip: Supply risk assessment workspace doesn’t show updated data?

The performance metrics have been added to the Purchase cube. If you are not using the Purchase cube for analysis yet, go to the Entity Store page, refresh the Purchase cube, and enable it also for automatic refresh.

You might need to select the Refresh data link to view the updated data in the workspace. If the link is not available, go to the Data set cache configuration and enable the cache consumer VendSupplyRiskCacheDataSet to turn on manual refresh.

Learn more

To get started using the new workspace, read the product documentation: Supply risk assessment overview | Microsoft Learn

The post Assess supply chain risk more easily in new workspace appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Azure Sphere – Image signing certificate update coming soon


Summary


Azure Sphere is updating the keys used in image signing, following best practices for security. The only impact on production devices is that they will experience two reboots instead of one during the 22.11 release cycle (or when they next connect to the Internet if they are offline). For certain manufacturing, development, or field-servicing scenarios where the Azure Sphere OS is not up to date, you may need to take extra steps to ensure that newly signed images are trusted by the device; read on to learn more.


 


What is an image signing key used for, and why update it?


Azure Sphere devices only trust signed images, and the signature is verified every time software is loaded. Every production software image on the device – including the bootloader, the Linux kernel, the OS, and customer applications, as well as any capability file used to unlock development on or field servicing of devices – is signed by the Azure Sphere Security Service (AS3), based on image signing keys held by Microsoft.


 


As with any modern public/private key system, the keys are rotated periodically. The image signing keys have a two-year validity. Note that once an image is signed, it generally remains trusted by the device. There is a separate mechanism, based on one-time programmable fuses, to revoke older OS software with known vulnerabilities such as DirtyPipe and to prevent rollback attacks; we used this most recently in the 22.09 OS release.
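The rotation mechanics can be sketched with a toy model. Azure Sphere actually uses asymmetric image signatures; the HMAC below is just a stand-in to show why the trusted key store must gain the new key before new-key-signed images arrive, and why previously signed images stay trusted:

```python
import hashlib
import hmac

def sign_image(image: bytes, key: bytes) -> str:
    # Stand-in for the AS3 signing step (real signing is asymmetric).
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def device_trusts(image: bytes, signature: str, trusted_key_store: list) -> bool:
    # A device accepts an image if the signature verifies against ANY key
    # in its trusted key store: old keys stay in the store, so images
    # signed before rotation remain trusted afterwards.
    return any(hmac.compare_digest(sign_image(image, k), signature)
               for k in trusted_key_store)
```

A device whose key store still lacks the new key rejects new-key-signed images, which is the manufacturing and field-servicing pitfall this post goes on to describe.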


 


When is this happening?


The next update to the image signing certificate will occur at the same time as the 22.11 OS is broadly released in early December. When that happens, all uses of AS3 to generate new production-signed application images or capabilities will result in images signed using the new key.


 


Ahead of that, we will update the trusted key-store (TKS) of Azure Sphere devices, so that the TKS incorporates all existing keys and the new keys. This update will be automatically applied to every connected device over-the-air.  Note that device TKS updates happen ahead of any pending updates to OS or application images. In other words, if a device comes online that is due to receive a new-key-signed application or OS, it will first update the TKS so that it trusts that application or OS.


 


We will update the TKS at the same time as our 22.11 retail-evaluation release, which is targeted at 10 November. The next time that each Azure Sphere device checks for updates (or up to 24 hours later if using the update deferral feature), the device will apply the TKS update and reboot. The TKS update is independent of an OS update, and it will apply to devices using both the retail and retail-eval feeds.


 


Do I need to take any action?


No action is required for production-deployed devices. There are three non-production scenarios where you may need to take extra steps to ensure that newly signed images are trusted by the device.


 


The first is for manufacturing. If you update and re-sign the application image you use in manufacturing, but you are using an old OS image with an old TKS, then that OS will not trust the application. Follow these instructions to sideload the new TKS as part of manufacturing.


 


The second is during development. If you have a dev board that you are sideloading either a production-signed image or a capability to, and it has an old TKS, then it will not trust that capability or image. This may make the “enable-development” command fail with an error such as “The device did not accept the device capability configuration.” This can be remedied by connecting the device to a network and checking that the device is up-to-date. Another method is to recover the device – the recovery images always include the latest TKS.


 


The third is for field servicing. During field servicing you need to apply a capability to the device as it has been locked down after manufacturing using the DeviceComplete state. However, if that capability is signed using the new image signing key and the device has been offline – so it has not updated its TKS – then the OS will not trust the capability. Follow these instructions to sideload the new TKS before applying the field servicing capability.


 


Thanks for reading this blog post. I hope it has been informative about how Azure Sphere uses signed images and best practices such as key rotation to keep devices secured throughout their lifetime.

2022 release wave 2 in action: Bringing innovation into focus across Dynamics 365 and Power Platform



In October, we launched the Microsoft Dynamics 365 and Microsoft Power Platform 2022 release wave 2. This is our second release wave of the year and it includes hundreds of new capabilities and features.

This release wave is a big one, and it comes at a critical time for many organizations. We are committed to continually innovating and helping your business grow, no matter what challenges or headwinds you face. Dynamics 365 and Microsoft Power Platform help strengthen your technology ecosystem by seamlessly providing visibility into every area of your business, empowering employees to focus on what they do best, and enabling your teams to create world-class customer experiences.

To help you quickly get up to speed on highlights from this release wave, as well as provide context into what’s possible, we’ve created a set of demo videos dedicated to key areas of business. As introduced in the special Business Applications release launch session at Microsoft Ignite, each video showcases how real-world organizations are taking full advantage of the new capabilities to achieve new levels of efficiency, cross-functional engagement, and breakthrough customer experiences.

To get started, watch the overview below of some of the highlights from the 2022 release wave 2.

Find out what’s new for Dynamics 365 and Microsoft Power Platform in this introduction from Charles Lamanna.

Do more with less to empower growth and agility

You’ll hear a common theme across these videos: do more with less by becoming more agile and efficient with Dynamics 365 and Microsoft Power Platform. The 2022 release wave 2 unlocks durable growth by unifying business data, relationships, and workflows with a single, cohesive business cloud.

Watch the video below to learn how, even in times of uncertainty and disruption, Dynamics 365 and Microsoft Power Platform help reduce costs and complexity while empowering teams to focus on superior customer experiences and operational excellence.

Learn how Dynamics 365 and Microsoft Power Platform help reduce costs and complexity.

Sales | How Teleperformance boosts its sellers’ effectiveness with Viva Sales

We recently announced the general availability of Microsoft Viva Sales, a seller experience that enriches Microsoft 365 applications and Microsoft Teams with seller workflows. Your sales team can now automatically capture, access, and register customer data into any customer relationship management (CRM) system, including Dynamics 365 and Salesforce. Learn how Teleperformance, a global business process outsourcing and customer experience service provider, eliminated the administrative burden of manual data entry to give sellers more time to focus on selling.

Learn about new capabilities in the 2022 release wave 2 across Microsoft Dynamics 365 Marketing, Dynamics 365 Sales, Dynamics 365 Customer Insights, and Viva Sales.

Sales and marketing | Financial services provider Eika orchestrates personalized campaigns to fund sustainable businesses across Norway

Eika, one of the largest financial services providers in Norway, is an alliance of 53 independent banks supporting two Norwegian dialects. It is also focused on being a driving force for sustainability and has launched an initiative to provide loans to businesses that are installing sustainable solutions. Learn how new AI and automation capabilities in Dynamics 365 are helping Eika’s sales and marketing teams seamlessly collaborate on campaigns to provide a personalized customer experience.

Learn how Eika is creating new customer experiences with AI and automation capabilities in Dynamics 365.

Customer service | Baylor Scott & White brings a new level of patient experiences to healthcare

Healthcare organizations today are being evaluated on their ability to deliver preventative services and improve overall health outcomes for the communities they serve. One way they’re meeting this challenge is through personalized omnichannel services. Baylor Scott & White, the largest not-for-profit healthcare system in Texas and one of the largest in the United States, is a leader in overall patient experience.

Using our Microsoft Digital Contact Center Platform, Baylor Scott & White is streamlining patient communications through a combination of personalized self-service and an AI-driven contact center. Explore how the new features in the release wave 2 for Microsoft Dynamics 365 Customer Service can support patient relations teams through enhancements and omnichannel engagement.

Learn how Baylor Scott & White is streamlining patient communications with Dynamics 365 Customer Service.

Innovation across Dynamics 365 Field Service, Mixed Reality, and Connected Spaces

Field service operations are undergoing rapid changes due to a scarcity of skilled workers and the shift from a cost center to a revenue driver. In addition, technology spurred by the industrial metaverse is enabling new scenarios for mixed and augmented reality, as well as enabling organizations to monitor and optimize spaces, from retail stores to factory floors.

In the video below, explore how 2022 release wave 2 updates to Microsoft Dynamics 365 Field Service, Dynamics 365 Remote Assist, and Dynamics 365 Connected Spaces are transforming field service operations.

Explore how 2022 release wave 2 updates to Dynamics 365 Field Service, Remote Assist, and Connected Spaces are transforming field service operations.

Operations | Global IT services provider Columbus Global elevates consulting experiences with AI, streamlined processes, and analytics

Columbus Global, a leading IT services and consulting company, acts as a digital trusted advisor for organizations across the globe as they reimagine their businesses. One of its many offerings is subscription consultancy services. Learn how new automation, process support, and analytics capabilities empower teams across finance, project operations, and HR to seamlessly build quotes, onboard customers, and track progress on time and on budget.

Learn how IT services and consulting leader, Columbus Global, has transformed its operations with Dynamics 365.

Supply chain | Improve inventory visibility and the planning and agility of your warehouses

Supply chain disruptions over the last few years have exposed supplier vulnerabilities and fragility across industries and countries. Enhancements to Microsoft Dynamics 365 Supply Chain Management can help organizations exceed customer expectations, mitigate financial risks, and deliver on time.

In the next video, learn how Dynamics 365 can help digitally transform your supply chain without replacing existing systems and turn supply chains into a competitive advantage.

Learn about new capabilities that will be released for Dynamics 365 Supply Chain Management in the 2022 release wave 2.

Scale low-code across the organization to do more with less

With new enhancements to Microsoft Power Platform in 2022 release wave 2, we’re continuing to empower users to rapidly build solutions and transform their businesses with a comprehensive set of low-code development tools. Two big announcements are that Microsoft Power Pages and Managed Environments are now generally available! Additionally, with the new AI copilot in Microsoft Power Automate, you can create a flow in seconds simply by describing what you want to automate in a sentence.  

Watch the video below to learn how organizations like Degrees of Change and Rabobank are using capabilities across the entire Microsoft Power Platform to streamline and automate their business processes.

Learn about new capabilities that will be released for Microsoft Power Platform in the 2022 release wave 2.

Learn more about the 2022 release wave 2

The updates featured in these videos are just a handful of the new and updated capabilities in the 2022 release wave 2. To learn more, check out our roadmap for detailed release plans for Dynamics 365 and Microsoft Power Platform.

The post 2022 release wave 2 in action: Bringing innovation into focus across Dynamics 365 and Power Platform appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.


Use percentage-based routing to load-balance customer service requests

This article is contributed. See the original author and article here.

Unified routing in Microsoft Dynamics 365 Customer Service provides capabilities to connect customers to the best agents. To provide a world-class customer engagement experience, around the clock and across the globe, large organizations rely on multiple vendors and expert pools. They need to balance the incoming workload across service departments, vendor queues, and their expert pools. Percentage-based routing, a new capability of unified routing, helps organizations to easily allocate work to different queues representing departments, vendors, or groups of agents in specific percentages.

How does percentage-based routing help?

Let us look at a customer scenario to understand how percentage-based routing can help your customer service organization.

Contoso Solutions is a Fortune 500 software product and services company. It has a large customer support organization covering more than 20 product lines, served by three vendors with more than 5,000 agents worldwide. Most of its customer queries are in the Billing and Subscriptions area, and a single vendor team cannot handle the load. Rajeev, the director of customer support at Contoso, wants to distribute the workload across all three vendors based on each vendor’s pricing plan, quality of service, and the volume it can handle. He has come up with the following allocation:

  • 60% to Woodgrove Solutions, which has consistently delivered good customer support and has offered volume-discounted pricing to Contoso
  • 30% to Adatum Corporation, which has a smaller workforce but can quickly ramp up agents when Contoso releases new features
  • 10% to First Up Consultants, a new vendor that Contoso wants to try out

Rajeev is looking for a solution that can help him implement the percentage-based routing easily to control customer wait times during the busy holiday season. He learns about the percentage-based routing capability in Dynamics 365 Customer Service. In the Customer Service admin center, he opens the workstream and configures a Route to Queue rule with the work allocations he devised.


From that moment, the algorithm dynamically routes every customer query from the Billing and Subscriptions area to one of the three vendor queues according to the configured percentages.
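Conceptually, this kind of allocation can be modeled as weighted random selection over the configured queues. The following Python sketch is purely illustrative (the queue names and the `route` helper are hypothetical; this is not a Dynamics 365 API) and shows how Rajeev's 60/30/10 split would distribute incoming queries over time:

```python
import random

# Hypothetical vendor queues with Rajeev's allocation percentages.
ALLOCATIONS = {
    "Woodgrove Solutions": 60,
    "Adatum Corporation": 30,
    "First Up Consultants": 10,
}

def route(query_id: int, allocations: dict = ALLOCATIONS) -> str:
    """Pick a queue for one incoming query, weighted by the configured percentages."""
    queues = list(allocations)
    weights = list(allocations.values())
    return random.choices(queues, weights=weights, k=1)[0]

# Over many queries, the observed split converges toward 60/30/10.
counts = {queue: 0 for queue in ALLOCATIONS}
for query_id in range(10_000):
    counts[route(query_id)] += 1
```

Each individual query is routed independently, so short-term counts fluctuate, but across a large volume the distribution approaches the configured percentages.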

Monitor vendor queues in reports

Rajeev can check the number of customer queries that go to each vendor using the Omnichannel historical analytics insights dashboard.


Conclusion

In a world of high-volume, 24/7 customer engagement, percentage-based routing can be extremely helpful for organizations that want to efficiently manage their workload across multiple vendors and deliver delightful customer experiences to their global customer base.

Learn more 

To get more information about unified routing and automated routing rules in Dynamics 365 Customer Service, read the documentation: 

Overview of unified routing | Microsoft Learn

Configure route-to-queue rules | Microsoft Learn

Haven’t tried Customer Service yet? Visit the Dynamics 365 Customer Service overview, where you can take a tour and sign up for a free trial. 

This blog post is part of a series of deep dives that will help you deploy and use unified routing at your organization. See other posts in the series to learn more. 

The post Use percentage-based routing to load-balance customer service requests appeared first on Microsoft Dynamics 365 Blog.


Microsoft announces partnership with SANS Institute


Microsoft Defender for Office 365 is pleased to announce a partnership with SANS Institute to deliver a new series of computer-based training (CBT) modules in the Attack Simulation Training service. The modules will focus on IT system and network administrators. Microsoft is excited to collaborate with a recognized market leader in cybersecurity training to bring our customers training that helps them address a critical challenge in the modern threat landscape: educating and upskilling security professionals.

“We salute Microsoft for recognizing the requirement to direct security awareness training towards IT System and Network Administrators since our experience tells us that it is precisely these users who are more frequently targeted because of their privileged access.”

Carl Marrelli, Director of Business Development at the SANS Institute

We chose SANS Institute for its long track record of success in technical education and for its focus on an audience that Defender for Office 365 wants to support. Technical education is hard to deliver effectively, and cybersecurity education is harder still. SANS Institute’s approach was best-in-class, and we think our customers are going to find this content very valuable for their organizational upskilling.

Today our Attack Simulation Training provides a robust catalog of end-user training experiences and will soon expand beyond social engineering topics. This partnership with SANS will help us cover an important and challenging topic area: IT system administrators and network administrators must acquire and apply a broad, deep set of complex cybersecurity knowledge to successfully protect their organizations. Good training can be difficult to find, and Microsoft believes this new set of training modules will help organizations large and small upskill their administrative staff. These new courses will be self-paced, short-form, and easily digestible.

These new courses will be made available in the coming months through the Attack Simulation Training platform. They will ship alongside the rest of our catalog and can then be assigned through our training campaign workflows. Attack Simulation Training is available to organizations through Microsoft 365 E5 Security or Microsoft Defender for Office 365 Plan 2. The courses will meet all of Microsoft’s standards for accessibility, diversity, and inclusivity.

The SANS Institute was established in 1989 as a cooperative research and education organization. Today, SANS is the most trusted and, by far, the largest provider of cybersecurity training and certification to professionals in government and commercial institutions worldwide. Renowned SANS instructors teach more than 60 courses at in-person and virtual cybersecurity events and on demand. GIAC, an affiliate of the SANS Institute, validates practitioner skills through more than 35 hands-on technical certifications in cybersecurity. SANS Security Awareness, a division of SANS, provides organizations with a complete and comprehensive security awareness solution, enabling them to manage their “human” cybersecurity risk easily and effectively. At the heart of SANS are the many security practitioners, representing varied global organizations from corporations to universities, working together to support and educate the global information security community.

Want to learn more about Attack Simulation Training?

Get started with the available documentation today, and check out the blog posts Setting up a New Phish Simulation Program, Part One and Part Two. In addition, you can read more details about new features in Attack Simulation Training.

Cisco Releases Security Updates for Multiple Products


Cisco has released security updates for vulnerabilities affecting multiple products. A remote attacker could exploit some of these vulnerabilities to take control of an affected system. For updates addressing lower-severity vulnerabilities, see the Cisco Security Advisories page.

CISA encourages users and administrators to review the advisories and apply the necessary updates.

Apple Releases Security Update for Xcode


Apple has released a security update to address vulnerabilities in Xcode. A remote attacker could exploit one of these vulnerabilities to take control of an affected system.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the Apple security page for Xcode 14.1 and apply the necessary update.