Four Strategies for Cost-Effective Azure Monitoring and Log Analytics


This article is contributed. See the original author and article here.

Effective cost management in Azure Monitor and Azure Log Analytics is essential for controlling cloud expenditures. It involves strategic measures to reduce costs while maximizing the value derived from ingested, processed, and retained data. In Azure, achieving this balance entails adopting efficient data ingestion methods, smart retention policies, and judicious use of table transformations with Kusto Query Language (KQL).


Understanding the impact of data management practices on costs is crucial, since every byte of data ingested and stored in Azure Log Analytics incurs expenses. Table transformations (filtering, projecting, aggregating, sorting, joining, and dropping data) are a powerful way to reduce storage and ingestion costs. They allow you to filter or modify data before it’s sent to a Log Analytics workspace, reducing the ingestion charge and the long-term storage footprint at the same time.


This document will explore four key areas to uncover strategies for optimizing the Azure Monitor and Azure Log Analytics environment, ensuring cost-effectiveness while maintaining high performance and data integrity. Our guide will provide comprehensive insights for managing cloud expenses within Azure services.


 


Key Areas of Focus:


 



  1.  Ingestion Cost Considerations: The volume of data ingested primarily influences costs. Implementing filters at the source is crucial to capture only the most relevant data. 

  2. Data Retention Strategies: Effective retention policies are vital for cost control. Azure Log Analytics allows automatic purging of data past certain thresholds, preventing unnecessary storage expenses.

  3. Optimization through Transformations: Refining the dataset through table transformations can focus efforts on valuable data and reduce long-term storage needs. Note that these transformations won’t reduce costs within the minimum data retention period.

  4. Cost Management Practices: Leveraging Azure Cost Management and Billing tools is crucial for gaining insight into usage patterns. These insights inform strategic adjustments, aligning costs with budgetary limits.


 




1) Ingestion Cost Considerations:


Efficient data ingestion within Azure Monitor and Log Analytics is a balancing act between capturing comprehensive insights and managing costs. This section delves into effective data ingestion strategies for Azure’s IaaS environments, highlighting the prudent use of Data Collection Rules (DCRs) to maintain data insight quality while addressing cost implications.


 


Data ingestion costs in Azure Log Analytics are incurred at the point of collection, with volume directly affecting expenses. It’s imperative to establish a first line of defense against high costs at this stage. Sampling at the source is critical, ensuring that applications and resources only transmit necessary data. This preliminary filtering sets the stage for cost-effective data management. Within Azure’s environment, DCRs become a pivotal mechanism where this essential data sampling commences. They streamline the collection process by specifying what data is collected and how. However, it’s important to recognize that while DCRs are comprehensive, they may not encompass all types of data or sources. For more nuanced or complex requirements, additional configuration or tools may be necessary beyond the standard scope of DCRs.
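To see where ingestion charges are coming from before you tune anything, you can query the built-in Usage table. A minimal KQL sketch that ranks billable volume by table over the last 30 days (the Usage table reports Quantity in MB):

Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| sort by IngestedGB desc

Tables at the top of this list are the natural first targets for the filtering, plan, and transformation techniques described below.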


 


In addition:


 


Navigating Azure Monitor Ingestion in IaaS:


Azure Virtual Machines (VMs) provide a spectrum of logging options, which bear on both the depth of operational insights and the consequent costs. The strategic use of DCRs, in concert with tools like Log Diagnostic settings and Insights, is essential for proficient monitoring and management of VMs.


 


A) Log Diagnostic Settings:


When enabling Log Diagnostic Settings in Azure, you are presented with the option to select a Data Collection Rule. You are not given an option to modify the collection rule here, but you can access the DCR settings by navigating to the Azure Monitor service section. DCRs help tailor what logs and metrics are collected. They support routing diagnostics to Azure Monitor Logs, Storage, or Event Hubs and are valuable for detailed data needs like VM boot logs or performance counters.


 


To minimize costs with DCRs:


Filter at Source: DCRs can enforce filters so that only pertinent data is sent to the workspace. To modify the filters, navigate to the Azure portal, select Azure Monitor, and under Settings select Data Collection Rules. Select the collection rule you want to modify and click on Data Sources; here you can modify what is collected. Some items, such as Microsoft-Perf, allow you to add a transformation at this level.
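As a hedged illustration of such a transformation, the KQL below keeps only a couple of performance counters from a Microsoft-Perf stream; the counter names are examples, not a recommendation:

source
| where CounterName == "% Processor Time" or CounterName == "Available MBytes"

Everything filtered out here is never ingested, so it is never billed.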


                     




 


Efficient Collection: DCRs can reduce collection frequency or focus on key metrics, though complex data patterns may require additional analysis. In the Azure portal, under the collection rule, select the data source, such as Performance Counters. Here you can adjust the sample rate (frequency) of data collection, for example a CPU sample rate of 60 seconds, and adjust the counters based on your needs.


           




 


Regular Reviews: While DCRs automate some collection practices, manual oversight is still needed to identify and address high-volume sources.
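Such a review can start with a query. A minimal KQL sketch that ranks computers by billable volume over the last day, using the standard _IsBillable and _BilledSize columns:

union withsource = SourceTable *
| where TimeGenerated > ago(1d)
| where _IsBillable == true
| summarize BilledGB = sum(_BilledSize) / pow(1024, 3) by Computer
| sort by BilledGB desc
| take 10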


 


B) Insights (Azure Monitor for VMs):


Purpose: Azure VM Insights is an extension of Azure Monitor designed to deliver a thorough monitoring solution, furnishing detailed performance metrics, visual dependency maps, and vital health statistics for your virtual machines.


 


Details: Leveraging the Log Analytics agent, Azure VM Insights captures and synthesizes data from your VMs, offering a cohesive dashboard that showcases CPU, memory, disk, and network performance, alongside process details and inter-service dependencies.


 


Use Cases: Azure VM Insights is pivotal for advanced performance monitoring and diagnostics. It enables the early detection of performance issues, aids in discerning system alterations, and proactively alerts you to potential disruptions before they manifest significantly.


 


To enable VM Insights, select the Data Collection Rule that defines the Log Analytics workspace to be used.


 




 


      Cost-saving measures include:


Selective Collection: DCRs ensure only essential metrics are collected, yet understanding which metrics are essential can require nuanced analysis.


 


Metric Collection Frequency: Adjusting the frequency via DCRs can mitigate overload, but determining optimal intervals may require manual analysis.


 


Use Automation and Azure Policy for Configuration: The cornerstone of scalable and cost-effective monitoring is the implementation of standardized configurations across all your virtual machine (VM) assets. Automation plays a pivotal role in this process, ensuring that monitoring configurations are consistent, error-free, and aligned with organizational policies and compliance requirements.


 


Azure Policy for Monitoring Consistency: Azure Policy is a service in Azure that you can use to create, assign, and manage policies. These policies enforce different rules over your resources, so those resources stay compliant with your corporate standards and service level agreements. Azure Policy can ensure that all VMs in your subscription have the required monitoring agents installed and configured correctly.


 


You can define policies that audit or even deploy particular settings like log retention periods and specific diagnostic settings, ensuring compliance and aiding in cost control. For example, a policy could be set to automatically deploy Log Analytics agents to any new VM that is created within a subscription. Another policy might require that certain performance metrics are collected and could audit VMs to ensure that collection is happening as expected. If a VM is found not to be in compliance, Azure Policy can trigger a remediation task that brings the VM into compliance by automatically configuring the correct settings.
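Azure Resource Graph, which is also queried with KQL, can support this kind of audit. A sketch, assuming the Azure Monitor agent extension name below matches what your policy deploys, that lists VMs with no AzureMonitorWindowsAgent extension:

// Azure Resource Graph query: VMs missing the assumed agent extension
resources
| where type =~ "microsoft.compute/virtualmachines"
| extend vmId = tolower(id)
| join kind=leftouter (
    resources
    | where type =~ "microsoft.compute/virtualmachines/extensions"
    | where name =~ "AzureMonitorWindowsAgent"
    | extend vmId = tolower(substring(id, 0, indexof(id, "/extensions/")))
    | project vmId, extensionName = name
) on vmId
| where isempty(extensionName)
| project vmName = name, resourceGroup, subscriptionId

A list like this is a reasonable input for the kind of remediation task described above.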


 


C) Logs (Azure Monitor Logs):


Purpose: Azure Monitor Logs are pivotal for storing and analyzing log data in the Log Analytics workspace, leveraging Kusto Query Language (KQL) for complex queries.


 


Cost Control in Detail: While Azure Monitor Logs are adept at aggregating data from diverse sources, including VMs and application logs, effective cost management is essential. DCRs control the collection of logs for storage and analysis in Log Analytics, and the same collection rules described above apply here as well.


 


Azure Monitor Basic Logs: Azure Monitor Logs offers two table plans that let you reduce log ingestion and retention costs while still taking advantage of Azure Monitor’s advanced features and analytic capabilities where you need them. The default plan for tables in an Azure Log Analytics workspace is Analytics; this plan provides full analysis capabilities, makes log data available for queries, and supports features such as alerts and use by other services. The Basic plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts. Its retention period is fixed at eight days.


 


– From the Log Analytics workspace menu select Tables


– Select the context menu for the table you want to configure and select “manage table”




 


– From the table plan dropdown on the table configuration screen, select “Basic” or “Analytics”.


– Not all tables support the Basic plan; for a list of supported tables, please visit the documentation listed at the end of this document.


 




 


– Select Save.
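Tables on the Basic plan remain queryable on a pay-per-scan basis with a reduced set of KQL operators. A hedged sketch, assuming the ContainerLogV2 table in your workspace has been moved to the Basic plan:

ContainerLogV2
| where TimeGenerated > ago(1d)
| where tostring(LogMessage) has "error"

This keeps verbose debugging data cheap to retain and still reachable when you need to troubleshoot.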


 


  




2) Data Retention Strategies:


Effective retention policies play a vital role in cost control. Azure Log Analytics enables the automatic purging of data past certain retention thresholds, avoiding unnecessary storage expenses for data that is no longer needed. Azure Monitor Logs retains data in two states: interactive retention, which lets you keep Analytics logs available for interactive queries for up to 2 years, and archive, which lets you keep older, less-used data in your workspace at a reduced cost. You can access data in the archived state by using search jobs and restore, and you can keep data in the archived state for up to 12 years.


 



  • Purpose: Implementing well-defined data retention policies is essential to balance the accessibility of historical data with cost management in Azure Log Analytics. The purpose is to retain only the data that adds value to your organization while minimizing storage and associated costs.

  • Automated Purging: Azure Log Analytics facilitates cost control through automated data purging. Set retention policies to automatically delete data that exceeds your specified retention threshold, ensuring you’re not paying for storage you don’t need.

  • Retention Policy Design:

    • Assessment of Data Value: Regularly evaluate the importance of different data types and their relevance over time to determine the appropriate retention periods.

    • Compliance Considerations: Ensure that retention periods comply with regulatory requirements and organizational data governance policies.



  • Cost Reduction Techniques:

    • Reduction in Retention Period: By retaining only necessary data, you reduce the volume of data stored, leading to direct cost savings on storage resources. Some techniques include data purging, data deduplication, data archiving and life-cycle management policies.

      • Setting the Global Retention Period: Navigate to the Azure portal and select the Log Analytics workspace. Under Settings, locate Usage and estimated costs, select Data Retention, and specify the retention period. This sets the retention period globally for all tables in the Log Analytics workspace.






                     




 


 


Setting the Per-Table Retention Period:


 


You can also specify retention periods for each individual table in the Log Analytics workspace. In the Azure portal, navigate to and select the Log Analytics workspace. Under Settings, select Tables; at the end of each table’s row, select the three dots and choose Manage table, where you can change the retention settings for that table. If needed, you can reduce the interactive retention period to as little as four days using the API or CLI.


 




 


Interactive and Archive Retention Period:


Interactive retention lets you retain Analytics logs for interactive queries for up to 2 years. From the Log Analytics workspaces menu in the Azure portal, select your workspace, then select Tables. Select the context menu for the table you want to configure and select Manage Table. Configure the interactive retention period (for example, 30 days), then configure the total retention period; the difference between the interactive period and the total period is the archive period. For example, a total retention of 365 days with a 30-day interactive period leaves 335 days in archive. This difference shows up under the configuration menu: blue for the interactive period and orange for the archive period.


 




 


Automatically purging data: If you set the data retention period to 30 days, you can purge older data immediately by using the immediatePurgeDataOn30Days parameter in Azure Resource Manager. Workspaces with a 30-day retention might keep data for 31 days if this parameter is not set.


 


Data Deduplication: Azure Log Analytics workspaces do not offer built-in data deduplication features; however, you can implement deduplication as part of the ingestion process, before the data is sent to Azure Log Analytics, using an Azure Function or a Logic App.
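Even without ingestion-time deduplication, duplicates can at least be collapsed at query time. A sketch, assuming a hypothetical custom table MyEvents_CL whose EventId_g field uniquely identifies a record:

MyEvents_CL
| summarize arg_max(TimeGenerated, *) by EventId_g   // keep only the latest copy of each event (assumed schema)

Note that this trims query results, not the ingested volume, so it complements rather than replaces ingestion-side deduplication.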


 


Move older data to Azure Blob Storage using Data Export: Data export in a Log Analytics workspace lets you continuously export data from selected tables in your workspace. The data can be exported to a storage account or to Azure Event Hubs. Once the data is in a storage account, it can be managed with life-cycle policies. Another benefit of exporting data is that smaller data sets in the workspace result in quicker query execution times and potentially lower compute costs.


 




 


 




 3) Optimization Through Transformations:


The primary purpose of data transformations within Azure Log Analytics is to enhance the efficiency of data handling, by honing in on the essential information, thus refining the datasets for better utility. During this process, which occurs within Azure Monitor’s ingestion pipeline, data undergoes transformations after the source delivers it but before it reaches its final destination (LAW). This key step not only serves to reduce data ingestion costs by eliminating extraneous rows and columns but also ensures adherence to privacy standards through the anonymization of sensitive information. By adding layers of context and optimizing for relevance, the transformations offer enriched data quality while simultaneously allowing for granular access control and streamlined cost management.
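As a hedged sketch of the anonymization and column-trimming use cases, a transformation can mask or drop fields before they are stored; the column names here are hypothetical:

source
| extend CallerIpAddress = "0.0.0.0"   // mask a sensitive value (assumed column)
| project-away RawDebugPayload         // drop a column that is never queried (assumed column)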


 


There are two ways to do transformations. One is at the Data Collection Rule level, where you select only the items you need, such as the Windows performance counters from a VM running the Windows OS in Azure. The other is a transformation at the table level in the Azure Log Analytics Workspace (LAW).


 



  • Transformation Process:

    • Data Selection: Transformations are defined in a data collection rule (DCR) and use a Kusto Query Language (KQL) statement that’s applied individually to each entry in the incoming data and create output in the structure expected by the destination. 

    • Table Transformations: Utilize Kusto Query Language (KQL) to perform transformations on specific tables within the Azure Log Analytics Workspace. Not all tables support transformations; please check the documentation for a complete list.

      • As an example, to add a table transformation for the ‘events’ table in Azure Log Analytics for cost optimization, you could perform the following steps:

      • Navigate to the Azure portal

      • Go to your Log Analytics Workspaces

      • Select the workspace

      • Under Settings select Tables.

      • Under the tables panel select the three dots to the right of the table row and click on “create transformation”






 




 


– Select a Data Collection Rule


 




– Under the Schema and transformation select “Transformation editor”


 




 


Source will show all data in the table, and a KQL query will allow you to select and project only the data needed.


source
| where severity == "Critical"
| extend Properties = parse_json(properties)
| project
    TimeGenerated = todatetime(["time"]),
    Category = category,
    StatusDescription = StatusDescription,
    EventName = name,
    EventId = tostring(Properties.EventId)


 



  • Cost Reduction Techniques:

    • Reduced Storage: Set up Data Collection Rules to capture only the desired data, and set up table transformations so that only the data you require reaches the Log Analytics workspace.

    • Regular Revision: Continuously evaluate and update transformation logic to ensure it reflects the current data landscape and business objectives.




 




4) Cost Management Practices:


The primary objective of cost management is finding out where the charges are coming from and then optimizing, either at the ingestion source or by adopting some or all of the strategies outlined in this document. The primary tool for this in Azure is Azure Cost Management and Billing, which provides a clear and actionable view of your Azure expenditure. It offers critical insights into how resources are consumed, enabling informed decision-making for cost optimization. In addition to the strategies outlined already, the following are other cost management techniques:


 



  • Cost Control Mechanisms:

    • Budgets and Alerts: Set up budgets for different projects or services and configure alerts to notify you when spending approaches or exceeds these budgets.

    • Commitment Tiers: These provide a discount on your workspace ingestion costs when you commit to a specific amount of daily data. Commitment starts at 100 GB per day at a 15% discount from pay-as-you-go pricing, and as the committed amount increases, the percentage discount grows as well. To take advantage of these, navigate to the Azure portal, select Log Analytics workspaces, select your workspace, and under Settings select Usage and estimated costs; scroll down to see the available commitment tiers (see the query sketch below for checking your average daily volume).
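Before committing, check whether your average billable ingestion actually clears a tier. A minimal KQL sketch against the Usage table:

Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1024.0 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = avg(DailyGB), MaxDailyGB = max(DailyGB)

As rough arithmetic, at an illustrative pay-as-you-go rate of $2.30 per GB, 100 GB/day costs about $230/day, while the 100 GB tier at a 15% discount comes to roughly $195/day, so a workspace averaging at or above 100 GB/day generally benefits.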




 


            




 



  • Log Analytics workspace placement: Thoughtful placement of Log Analytics workspaces is important and can significantly impact expenses. Start with a single workspace to simplify management and querying. As your requirements evolve, consider creating multiple workspaces based on specific needs such as compliance. Regional placement should also be considered to avoid egress charges. Creating separate workspaces in each region might reduce egress costs, but consolidating into a single workspace could allow you to benefit from commitment tiers and further cost savings.


                                                     





  • Implementation Strategies:

    • Tagging and Grouping: Implement resource tagging to improve visibility and control over cloud costs by logically grouping expenditures.

    • Cost Allocation: Allocate costs back to departments or projects, encouraging accountability and cost-conscious behavior. To find data volume by Azure resource, resource group, or subscription, you can run KQL queries such as the following from the Logs section of the Log Analytics workspace:




find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _ResourceId, _IsBillable, _BilledSize
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| sort by BillableDataBytes desc


 




 


 


 


 


In conclusion, this document has provided a structured approach to cost optimization in Azure, specifically for services related to Azure Monitor and Log Analytics. Through careful planning of ingestion strategies, data retention policies, transformative data practices, and prudent cost management practices, organizations can significantly reduce their cloud expenditures without sacrificing the depth and integrity of their analytics. Each section outlined actionable insights, from filtering and sampling data at ingestion to employing intelligent retention and transformation strategies, all aimed at achieving a cost-effective yet robust Azure logging environment. By consistently applying these strategies and regularly reviewing usage and cost patterns with Azure Cost Management tools, businesses can ensure their cloud operations remain within budgetary constraints while maintaining high performance and compliance standards.


 


 


 


Resources:

Navigating Azure WAF Exclusions


This article is contributed. See the original author and article here.

Introduction


Exclusions in Azure WAF (Web Application Firewall) are a critical feature that allows administrators to fine-tune security rules by specifying elements that should not be evaluated by WAF rules. This capability reduces false positives and ensures that legitimate traffic flows unimpeded while robust security measures remain in place. Exclusions are particularly useful in scenarios where certain request attributes, such as specific cookie values or query strings, are known to be safe but might trigger WAF rules due to their content or structure.


 


Azure WAF Exclusions: A Closer Look


Azure WAF exclusions can be applied to a rule, a set of rules, a rule group, or globally for the entire ruleset. This flexibility is crucial for meeting application-specific requirements and reducing false positives. For instance, exclusions introduced with CRS 3.2 on the regional WAF with Application Gateway now allow exclusions to be defined by the name or value of headers, cookies, and arguments.


 


Attributes for WAF exclusions



  • Attributes that can be excluded include:


    • Request headers

    • Request cookies

    • Query strings

    • Post args

    • JSON entity (only for AFD WAF)


  • Operators for exclusions include:


    • Equals: For exact matches.

    • Starts with: Matches fields starting with a specific selector value.

    • Ends with: Matches fields ending with a specified selector value.

    • Contains: Matches fields containing a specific selector value.

    • Equals any: Matches all request fields (useful when exact values are unknown).



 


Note: The “Equals Any” condition automatically converts any value you enter in the selector field to an asterisk (*) by the backend when creating an exclusion. This feature is especially valuable when handling unknown or random values.


 



  • Exclusions can be applied on:


    • Rule

    • Rule set

    • Rule group

    • Global



 


Azure Front Door WAF: Exclusion Example


Azure Front Door WAF allows for exclusions to be set at a detailed level, targeting the values of match variables like request headers, cookies, and query strings. This granularity ensures that only the necessary parts of a request are excluded from rule evaluation, reducing the likelihood of false positives without significantly impacting the overall security posture.


 




 


When configuring Azure Web Application Firewall (WAF) to inspect JSON request bodies, it’s crucial to understand how to handle legitimate requests that might otherwise be flagged as potential threats. For instance, consider the following JSON request body:


 


JSON Example


{
  "posts": [
    {
      "eid": 1,
      "comment": ""
    },
    {
      "eid": 2,
      "comment": "\"1=1\""
    }
  ]
}


 


In this example, the “1=1” in the comment field could be mistaken by the WAF as a SQL injection attempt. However, if this pattern is a normal part of your application’s operation, you can create an exclusion to prevent false positives. By setting an exclusion with a match variable of Request body JSON args name, an operator of Equals, and a selector of posts.comment, you instruct the WAF to overlook the “comment” property when scanning for threats. To refine the exclusion and make it more specific, we have applied it solely to the ‘SQLI’ rule group and the ‘942150 SQL Injection Attack’ rule. This ensures that only this particular selector is exempt from inspection by this rule, while all other rules will continue to inspect it for any threats.


 


Note: JSON request body inspection for Azure Front Door WAF is available from DRS 2.0 or newer.


 


Application Gateway WAF: Exclusion Example




 


For WAF on Application Gateway, we will use a different JSON example:


 


JSON Example


{
  "properties": {
    "credentials": {
      "emai.l=l": "admin",
      "password": "test"
    }
  }
}


 


In the previous example, we examined the value of the selector. In this case, however, our focus is on excluding the selector key itself rather than the value within the key. In the JSON example above you can see “properties.credentials.emai.l=l”. This specific key contains “l=l”, which could potentially trigger an SQL injection attack rule. To exclude this specific selector, we’ve created an exclusion rule matching the variable “Request Arg Keys” with the value “properties.credentials.emai.l=l”. This exclusion prevents further false positives.


 


This feature is available in Application Gateway WAF only with CRS 3.2 or newer and Bot Manager 1.0 or newer. By excluding the key/selector itself, we significantly improve the tuning of WAF false positives. For more WAF exclusion examples, see Web application firewall exclusion lists in Azure Application Gateway – Azure portal | Microsoft Learn.


 


Furthermore, within Application Gateway WAF, you have the flexibility to selectively apply this exclusion to specific rules. In our scenario, we’ve opted to apply it to Rule 942130: SQL Injection Attack: SQL Tautology. This means that we’re excluding this particular key only for this specific rule, while still ensuring that the rest of the JSON is thoroughly inspected by the remaining ruleset.


 


Note: Request attributes by names function similarly to request attributes by values and are included for backward compatibility with CRS 3.1 and earlier versions. However, it’s recommended to use request attributes by values instead of attributes by names. For instance, opt for RequestHeaderValues rather than RequestHeaderNames.


 


This approach ensures that legitimate traffic is not inadvertently blocked, maintaining the flow of your application while still protecting against actual security risks. It’s a delicate balance between security and functionality, and exclusions are a powerful tool in the WAF’s arsenal to achieve this balance. Always verify the legitimacy of such patterns before creating exclusions to maintain the security posture of your application.


 


Match variable mapping


When setting up exclusions, it can be challenging to map the Match variables you observe in your logs to the corresponding configuration on your Web Application Firewall (WAF). By referring to this table, you can streamline the exclusion process and ensure that your WAF is accurately tuned to your traffic.
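One way to surface those match variables is to query your WAF diagnostics with KQL. A minimal sketch for Application Gateway WAF logs in the classic AzureDiagnostics schema (verify the column names against your own workspace, since they vary by resource and schema version):

AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| where action_s == "Blocked"
| project TimeGenerated, clientIp_s, requestUri_s, ruleId_s, Message, details_message_s, details_data_s
| sort by TimeGenerated desc

The match variable reported in each entry can then be translated with the tables below.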


 


Azure Application Gateway

MatchVars in Logs | MatchVars for Excl.
REQUEST_HEADERS_NAMES | Request Header Keys
REQUEST_HEADERS | Request Header Values/Names
REQUEST_COOKIES_NAMES | Request Cookie Keys
REQUEST_COOKIES | Request Cookie Values/Names
ARGS_NAMES | Request Arg Keys
ARGS | Request Arg Values/Names
ARGS_GET | Request Arg Values/Names
REQUEST_URI | Custom Rule
REQUEST_BODY | Request Arg Values/Names
REQUEST_BASENAME | Custom Rule (Part of URI)
MULTIPART_STRICT_ERROR | N/A
REQUEST_METHOD | N/A
REQUEST_PROTOCOL | N/A
Request_Filename | Custom
REQUEST_URI_RAW | Custom
XML | Custom



 


 


Azure Front Door WAF

MatchVars in Logs | MatchVars for Excl.
HeaderValue | Request header name
CookieValue | Request cookie name
QueryParamValue | Query string args name
MultipartParamValue | Request body post args name
JsonValue | Request body JSON args name
URI | Custom Rule
InitialBodyContents* | Custom Rule
DecodedInitialBodyContents* | Custom Rule



 


 


Best Practices for Implementing Exclusions


When implementing exclusions, it’s crucial to follow a structured approach:



  1. Document Current Settings: Before making any changes, document all existing WAF settings, including rules and exclusions.

  2. Test in Staging: Apply and test exclusions in a non-production environment to ensure they work as intended without introducing new risks.

  3. Apply to Production: Once verified, apply the tested exclusions to the production environment, monitoring closely for any unexpected behavior.


 


Conclusion


Exclusions in Azure WAF, whether for Azure Front Door or Application Gateway, offer a nuanced approach to web application security. By understanding and utilizing these features, administrators can ensure that security measures are effective without disrupting legitimate user activity. As Azure WAF continues to evolve, the ability to fine-tune security through exclusions will remain a cornerstone of its effectiveness.


 




Maximize efficiency with forecasting in Dynamics 365 Customer Service


This article is contributed. See the original author and article here.

In today’s dynamic business environment, efficient resource allocation and planning are paramount for the success of any organization. Microsoft Dynamics 365 Customer Service brings a powerful forecasting feature that empowers businesses to predict and manage their service volumes and agent demands effectively. Leveraging AI-based algorithms, this intelligent forecasting model runs behind the scenes, analyzing historical data and trends to provide accurate predictions. With the ability to forecast both volume and agent demand, organizations can streamline their operations, optimize resource allocation, and ultimately enhance customer satisfaction. This blog post focuses on volume forecasting for cases and conversations as well as agent forecasting for conversations. 

Front-office and back-office forecasting 

Dynamics 365 Customer Service recognizes the diverse nature of service channels, distinguishing between front office and back-office operations. Front office channels encompass voice, chat, email, messaging, and social channels, representing direct interactions with customers referred to as conversations in Dynamics 365. Back-office operations, on the other hand, refer to cases that require internal processing and resolution. Customer Service offers flexibility in forecasting front-office, back-office, or blended agents. This capability enables organizations to tailor their strategies to the specific needs of each operational area, ensuring optimal resource utilization and service delivery. 

Volume forecasting 

Forecasting provides daily forecasts for case and conversation volumes for up to six months into the future. Daily forecasts enable organizations to anticipate and prepare for fluctuations in service demand. Additionally, the system offers intraday forecasts at 15-minute intervals. Intraday forecasts allow for granular planning up to six weeks ahead. This level of foresight empowers businesses to allocate resources efficiently, ensuring optimal service levels. 


Agent Forecasting 

In addition to volume forecasting, organizations can forecast agent demand for conversations on a daily interval for up to six months into the future. Like volume forecasting, the system provides intraday forecasts at 15-minute intervals, allowing for precise resource allocation and scheduling. 


Incorporating service-level metrics 

The feature considers operational metrics such as service level, shrinkage, and concurrency when forecasting agent demand. By considering these factors, organizations can ensure that the agent capacity forecast aligns with service level agreements and operational constraints, maximizing efficiency and customer satisfaction. 

Auto-detection of seasonality 

By analyzing historical traffic patterns, our AI model automatically detects seasonality, enabling more accurate forecasts. This feature helps organizations adapt their operations to seasonal variations in service demand. Addressing these variations helps organizations maintain high service levels regardless of fluctuations in customer activity. 

Auto-detection of holidays 

Our forecasting model utilizes historical traffic patterns to automatically identify holidays, which leads to more precise predictions. This functionality assists organizations in adjusting their operations according to holiday-related changes in service demand across various regions, guaranteeing that they can maintain optimal service levels despite fluctuations in customer activity during holidays. 

Forecast vs. actual charts 

User-friendly charts are available to visually represent service volume and agent demand forecasts alongside actual performance across daily, weekly, and monthly intervals for up to six months. This comparison enables organizations to assess the accuracy of their forecasts and identify areas for improvement.


Customizable slicing and exporting 

The flexibility of forecasting extends to its ability to slice forecast data by channels and queues, providing insights tailored to specific operational needs. Moreover, users can export forecast data into a spreadsheet for further analysis or integration with other tools, enhancing the usability and accessibility of the forecasting feature. 

Key considerations for accuracy 

We recommend the following criteria for using historical data to generate accurate forecasts. 

Non-sparse data: The dataset contains information for every day, ensuring that there isn’t missing or incomplete data. Each day has a recorded volume, providing a comprehensive set of observations. 

Clear weekly pattern: The data shows a weekly pattern, wherein the volume consistently follows a specific trend. For instance, weekends consistently have low volumes, while workdays show higher volumes, and vice versa. This pattern helps establish a reliable basis for forecasting. 

Volume-based accuracy: If the criteria are met, the forecast quality improves with larger volume inputs. Higher volumes of data contribute to a more accurate and robust forecast. 

Absence of level shift: Recent days and future periods don’t experience any sudden or significant shifts in volume levels. This absence of sudden changes ensures that the historical patterns stay relevant and dependable for forecasting purposes. 

Longer historical data set: If all the above criteria are met, a longer history of data further improves the forecast accuracy. A greater historical data set provides a more comprehensive understanding of the patterns and trends over time. With an extended history, the forecast model can capture and incorporate more variations, leading to more accurate predictions. 

Weighting recent forecast accuracy: When considering future periods, understand the forecast’s accuracy tends to be higher for more immediate timeframes. As time progresses, the certainty and precision of the forecast may decrease. Therefore, the most recent forecast should be given more weight and considered to have better accuracy compared to future forecasts. 

Stay ahead of the curve

In conclusion, forecasting offers a comprehensive solution for predicting support volumes and agent demand, empowering organizations to optimize their operations and enhance customer satisfaction. With daily and intraday forecasts, and auto-detection of seasonality, businesses can achieve greater efficiency and agility in their service delivery. With Dynamics 365 Customer Service, organizations are driving success and growth through effective resource planning and management. 

Learn more about forecasting

To learn more about forecasting, read the documentation: Forecast agent, case, and conversation volumes in Customer Service | Microsoft Learn

The post Maximize efficiency with forecasting in Dynamics 365 Customer Service appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Enterprise Connect showcases new UC features in Microsoft Teams and Microsoft Teams Phone


This article is contributed. See the original author and article here.

By Brenna Robinson, General Manager, Microsoft SMB


 


At this year’s Enterprise Connect conference, Microsoft showcased new products and solutions improving business communications and collaboration. The Enterprise Connect conference features the latest unified communications (UC) trends, technologies and best practices to help you expand your business. We’d like to share some of the announcements we’ve made, including new ways to manage your phone calls in Microsoft Teams Phone with the new Queues app. We’ve also enhanced Microsoft Teams channels with the new discover feed, and we’ve added meet now to group chats.


 


Teams Phone and unified communications


Microsoft Teams Phone is a key addition for a small business already using Microsoft Teams because it’s the only cloud telephone solution natively built for Teams. A business can get the benefits of unified communications without many technical hassles or much extra expense.


 


UC puts all your communication channels under one roof, including voice, chat, collaboration, scheduling, presence management, and more. Unifying these capabilities lets you save on costs and increase productivity because you can see all your communications inside a single interface rather than having to switch across multiple providers. It also greatly helps with external communications, like customer service, since it combines conversations from different channels into a single, productive exchange. Teams Phone is your shortest path to all those benefits because it’s purpose-built to easily integrate with Teams.


 


Queues app


Case in point is the new Queues application now in private preview. This new app improves call management for Teams Phone users by allowing individuals to better manage customer calls and for supervisors to manage incoming call queues. It also provides new call analytics. The app will be available in the Teams Store and can be pinned to the left rail of the Teams client.  


 


The Queues app will be available as part of Teams Premium in Summer 2024 and requires a Teams Phone subscription. We are offering a limited number of pre-launch previews for interested customers. Share your interest by nominating your organization to participate.


 


Queues app allows for better management of customer calls.


 


Private Line


We also want you to know that the Private line feature we announced last fall is now generally available. Private line allows you to identify and respond to priority callers, the people whose calls you want to receive immediately without wait times.  Private line is like having a second, private line that is only accessible to hand-picked contacts. Your priority callers can call you directly and bypass delegates, admins, or assistants. Supporting incoming calls only, Private line calls will be distinguished by a unique notification and ringtone.


 


Private line is available in Teams Premium and requires a Teams Phone subscription.


 


Discover feed in channels


Beyond Teams Phone, we’ve enhanced our core Teams collaboration and communication experience with discover feed, which builds off Teams channels. A channel enables workgroups to collaborate in a dedicated virtual workspace that you can organize by topic.


 


But with so much communication happening every day, it can be hard to keep track of the information that’s most important to you. The new discover feed is a personalized feed. It surfaces all the content most relevant for you, based on the people you work with and topics you might be interested in. Discover posts you may not be aware of because you are not directly mentioned, replied to, or tagged. You can then add comments or share a post just like any other channel.  Find out more about Teams discover feed.


 


Discover feed in channels


 


Meet now in Teams group chat (May 2024)


We are also enhancing Teams meet now feature by adding it to your group chats. This will make it even easier to start a conversation with others in your business without having to schedule a meeting. Meet now in group chat lets you communicate with your colleagues no matter where they’re located. You can start a huddle as easily as if you met by the water cooler. It’s a ringless experience that notifies you when a huddle has started and who initiated it. Then you can choose to join it or decline if you are busy. Since the huddle started from an existing group chat, any chats you send while the meeting is in progress will stay a part of your ongoing group chat thread, which maintains the right context around the content, and helps you find the information when you need it. Meet now will be available in the spring.


 


Meet now in Teams group chat


 


 


Copilot in Teams compose enhancements


Copilot in Teams can help transform your ideas into clear chat messages and channel posts. Just type a few words or sentences and Copilot will draft a message for you. You can also ask Copilot to customize a message by using your own prompt, like “make it shorter” or “add a call to action.” This capability will be available this spring and will require a Copilot for Microsoft 365 license.


 


Microsoft 365 Business Standard and Microsoft 365 Business Premium customers can enhance their Teams experience with all these new features and be ready for the new Copilot capabilities as they’re released. Just add Teams Premium, Copilot for Microsoft 365¹, and Teams Phone to your subscription. If you do not already have these core productivity offerings, you can purchase them now and then add Copilot for Microsoft 365 to your subscription.


 


Copilot in Teams compose enhancements


 


 




 



 



  1. Copilot for Microsoft 365 may not be available for all markets and languages. To purchase, enterprise customers must have a license for Microsoft 365 E3 or E5 or Office 365 E3 or E5, and business customers must have a license for Microsoft 365 Business Standard or Business Premium, or a version of these suites that no longer includes Microsoft Teams.


 


 

OneDrive security and mobile features now available for Microsoft 365 Basic subscribers


This article is contributed. See the original author and article here.



 


We are excited to announce the addition of ransomware detection and recovery, an expanded Personal Vault, password protected and expiring sharing links, and offline files and folders to Microsoft 365 Basic. These features are available today, at no additional charge, for all our Microsoft 365 Basic customers, and they complement the 100GB of cloud storage, ad-free Outlook email, and advanced email security features already included.



With additional security features from OneDrive, Microsoft 365 Basic subscribers will get additional peace of mind for their files and photos, at the same low price.



Let’s take a look at the newly added features and some helpful tips to get you started.



Ransomware detection and recovery



How it helps: By storing your important files and photos in OneDrive, you’re not just backing them up in the cloud; OneDrive vigilantly monitors them for signs of ransomware.



How it works: Our system monitors your account for signs of ransomware activity. This includes unusual file modifications, encryption actions, and other indicators of malicious intent.


 




OneDrive alerts you to ransomware on your device and via email.


 


When Microsoft 365 detects a ransomware attack, you’ll receive a notification on your device and an email from Microsoft, alerting you to the potential threat. We guide you through the process of assessing the extent of the issue, deleting suspicious files and then help you identify a safe point in time for restoration.


 




Choose the date and time to restore your OneDrive.


 




OneDrive will update you when the restoration completes.


 


While there’s a possibility of losing some data between the time of infection and detection, this measure mitigates the loss, safeguarding your most crucial files and memories. Check out this article for more detailed information on recovering your OneDrive.



Personal Vault



How it helps: Personal Vault in OneDrive provides an extra layer of security with Two-Factor Authentication (2FA), helping to ensure that only you can access your critical files. This feature is invaluable for important documents, such as passports, tax records, and financial documents, as well as any photos or digital keepsakes you hold dear. Microsoft 365 Basic subscribers previously could only store 3 files in their Personal Vault. That restriction has now been removed, and subscribers can put as many files as they want in Personal Vault up to their 100GB storage limit.


 




Personal Vault is accessible on any device via OneDrive.


 




Your files will be secured by identity verification, yet easily accessible across your devices.


 


How it works: Activating your Personal Vault is straightforward and requires just a few steps:



1. Start by Logging In: Navigate to OneDrive.com and sign in with your Microsoft credentials.



2. Enable Personal Vault: Head over to Settings and find the Personal Vault option. Click “Enable” to begin the setup.



3. Choose Two-Factor Authentication (2FA): For enhanced security, enabling your Personal Vault requires 2FA. You can opt to use a secondary email or, for optimal security, use the Microsoft Authenticator app available on both iOS and Android platforms.



4. Enter Your PIN: Upon setup, you’ll receive a PIN through your chosen 2FA method. Enter this PIN to activate your Personal Vault.



Every time you access your Personal Vault, you’ll be prompted to authenticate via your selected 2FA method, ensuring that only you can view and edit your most sensitive files.


 




You can sign in to Personal Vault via the Microsoft Authenticator app.


 




Store important files in your Personal Vault.


 


Personal Vault: Tips



Regularly Review Your Vault: Periodically assess the files in your Vault to ensure that everything stored there is still relevant and requires the extra layer of security.



Securely Close Your Vault: While your Personal Vault closes automatically after 20 minutes of inactivity, it’s a smart move to close it yourself once you’re done. This simple habit ensures that your sensitive files remain locked away, even if you forget to close your browser.



To learn more about your Personal Vault, please read this support article.



Password protected and expiring sharing links



How it helps: Sharing files and photos is a necessity, whether it’s for collaboration, sharing and connecting over memories, or distributing important documents. Now Microsoft 365 Basic subscribers gain access to advanced sharing options, allowing for more secure and controlled sharing experiences. These new features are great for community and group projects, family memories, and sensitive documents.


 




Choose the date for sharing links to expire.


 


How it works: Simply log into OneDrive wherever you want to share from (on the web, a PC, or mobile device) and you can manage how you are sharing any file or folder.



1. Initiate Sharing: Click on the sharing control for your desired file or folder to open the Sharing dialog.



2. Access Advanced Sharing Options: Select the edit drop down control and then select “Sharing settings.”



3. Set Expiration Dates: Choose the Expiration option to specify a date when the link will expire, rendering the file or folder inaccessible to recipients.



4. Create a Secure Password: Use the Password option to assign a unique password that recipients must enter to access the shared file or folder. Remember to communicate this password to your intended recipients separately.


 




Easily manage all your sharing settings in one place.


 


Sharing: Tips



It’s always good to stay on top of the content you’ve shared with friends, family, and collaborators. OneDrive gives you an easy way to do so. Simply log into your account at OneDrive.com, and on the left-side navigation you’ll see a view called “Shared.” The Shared view lets you quickly see all the content that’s shared with you and, more importantly, all the content that you’ve shared with others.


 


From this view, simply click on the sharing control to once again bring up the Sharing dialog. At the bottom of the dialog, you can see which individuals have access to this content. Clicking on those names will open the advanced controls, letting you update permissions if you desire.



For more information on sharing files and folders please read this support article.



Offline Files and Folders on OneDrive Mobile



How it helps: Sometimes the real world doesn’t give you the best access to the digital world, with spotty or nonexistent connectivity, but you may still need your files when you don’t have a connection. Offline Files and Folders empowers Microsoft 365 Basic customers with seamless access to files on the go, whether you’re traveling, in a location with poor connectivity, or simply want to save on data.


 




Choose which files to make available offline.


 


How it works: To access this feature on your mobile device, make sure you have the latest version of the OneDrive app installed on your Android or iOS device. Even without an internet connection, you can open and edit files stored offline in the app.



1. Select Your Files or Folders: Browse your files in the app, and for any file or folder you wish to access offline, open the context menu by tapping the three dots next to the item.



2. Enable Offline Access: Choose the “Make Available Offline” option. You’ll see a blue sync icon appear, indicating the file is syncing. Once the icon turns grey, your file is available for offline use.



3. Automatic Sync: As soon as you reconnect to the internet, any changes you made to offline files are automatically synchronized with your OneDrive, ensuring your work is always up to date.


 




OneDrive will confirm your files are available offline.


 


Offline Access: Tips



Plan Ahead: Before traveling or entering areas with poor connectivity, preemptively select important documents for offline access.



Storage Considerations: The number and size of files you can store offline are limited by your device’s available storage and the Microsoft 365 Basic 100GB storage limit. Keep an eye on your device’s capacity to ensure optimal performance.



Data Management and Usage: Be mindful of your data plan when enabling offline access for large files or folders, especially if relying on cellular data. Syncing large files or numerous folders can consume significant amounts of your data plan. To avoid unexpected data usage, consider syncing over Wi-Fi or adjusting your sync settings.



Manage Storage: Regularly review and remove offline files you no longer need to free up storage on your mobile device.



For more information, please read these articles for Offline Files and Folders for Android and iOS.



Wrapping Up



Whether you’re safeguarding your family photos, managing your personal projects, or simply enjoying the ease of accessing your files anywhere, anytime, Microsoft 365 Basic is evolving with you, helping ensure that your digital life is secure, private, and seamlessly connected. We appreciate you, our Microsoft 365 Basic subscribers, and we are excited to continue making your experience better.



Thank you for entrusting us with your most precious digital assets and thank you for reading.



About the Author



Arvind Mishra is a Principal Product Manager on the OneDrive Consumer Growth team. He rejoined Microsoft in 2021, after more than a decade away, and is focused on building experiences for OneDrive’s consumer audience. Arvind is based in Los Angeles, and in his spare time, he can be found spending time with his family, snowboarding, scuba diving, or trying to progress to the next level in Duolingo (the Barbie movie got this so right).