This article demonstrates how to enable the use of preview features in Azure NetApp Files in combination with Terraform Cloud and the AzAPI provider. In this example, we enhance data protection with Azure NetApp Files backup (preview) by enabling the feature and creating backup policies with the AzAPI Terraform provider, using Terraform Cloud for the deployment.
As Azure NetApp Files development progresses, new features are continuously brought to market. Some of those features arrive first in typical Azure ‘preview’ fashion. These features are normally not included in Terraform before general availability (GA). A recent example of such a preview feature, at the time of writing, is Azure NetApp Files backup.
In addition to snapshots and cross-region replication, Azure NetApp Files data protection has been extended to include backup vaulting of snapshots. Using Azure NetApp Files backup, you can create backups of your volumes based on volume snapshots for longer-term retention. At the time of writing, Azure NetApp Files backup is a preview feature and has not yet been included in the Terraform AzureRM provider. For that reason, we decided to use the Terraform AzAPI provider to enable and manage this feature.
Azure NetApp Files backup provides a fully managed backup solution for long-term recovery, archive, and compliance.
Backups created by the service are stored in an Azure storage account independent of volume snapshots. The Azure storage account will be zone-redundant storage (ZRS) where availability zones are available or locally redundant storage (LRS) in regions without support for availability zones.
Backups taken by the service can be restored to an Azure NetApp Files volume within the region.
Azure NetApp Files backup supports both policy-based (scheduled) backups and manual (on-demand) backups. In this article, we will be focusing on policy-based backups.
In the following scenario, we will demonstrate how Azure NetApp Files backup can be enabled and managed using the Terraform AzAPI provider. To provide additional redundancy for our backups, we will back up our volumes in the Australia East region, taking advantage of zone-redundant storage (ZRS).
Azure NetApp Files backup preview enablement
To use Azure NetApp Files backup while it is in preview, you first need to enable the preview feature. In this case, the feature must be requested via the Public Preview request form. Once the feature is enabled, it will appear as ‘Registered’.
A ‘Pending’ status means that the feature needs to be enabled by Microsoft before it can be used.
Managing Resource Providers in Terraform
If you manage resource providers and their features using Terraform, you will find that registering the preview feature will fail with the below message, which is expected as it is a forms-based opt-in feature.
```hcl
resource "azurerm_resource_provider_registration" "anfa" {
  name = "Microsoft.NetApp"

  feature {
    name       = "ANFSDNAppliance"
    registered = true
  }

  feature {
    name       = "ANFChownMode"
    registered = true
  }

  feature {
    name       = "ANFUnixPermissions"
    registered = true
  }

  feature {
    name       = "ANFBackupPreview"
    registered = true
  }
}
```
Terraform Configuration
We are deploying Azure NetApp Files using a module with the Terraform AzureRM provider and configuring the backup preview feature using the AzAPI provider.
Microsoft has recently released the Terraform AzAPI provider, which helps break down barriers in the infrastructure as code (IaC) development process by enabling us to deploy features that are not yet available in the AzureRM provider. The definition below, taken from the provider's GitHub page, is quite clear:
The AzAPI provider is a very thin layer on top of the Azure ARM REST APIs. This new provider can be used to authenticate to and manage Azure resources and functionality using the Azure Resource Manager APIs directly.
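Both providers can live in the same configuration. Below is a minimal sketch of the provider setup; the version constraints are illustrative only:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0"
    }
    azapi = {
      source  = "Azure/azapi"
      version = ">= 1.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# The AzAPI provider uses the same authentication methods as AzureRM
# (Azure CLI, service principal, managed identity, and so on).
provider "azapi" {}
```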
The code structure we have used looks like the sample below. If you are using Terraform Cloud, you would typically consume modules from the private registry; for this article, we are using local modules.
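The following is only an illustrative sketch of such a layout; the directory and module names are assumptions:

```hcl
# Assumed repository layout (local modules):
#
#   main.tf
#   variables.tf
#   modules/
#     anf/          # NetApp account, capacity pool, volumes (AzureRM)
#     anf-backup/   # backup policy and volume backup settings (AzAPI)

# Calling a local module; with Terraform Cloud you would typically
# reference the module from the private registry instead.
module "anf" {
  source = "./modules/anf"

  location            = "australiaeast"
  resource_group_name = "rg-anf-demo"
}
```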
Azure NetApp Files backup is supported in the following regions. In this example we are using the Australia East region.
After deployment, you will be able to see the backup icon as part of the NetApp account as below.
Azure NetApp Files backup policy creation
The creation of a backup policy is similar to that of a snapshot policy, and it has its own Terraform resource. The backup policy is a child element of the NetApp account. You’ll need to use the ‘azapi_resource’ resource type with the latest API version, as shown in the sketch after the notes below.
(!) Note
It is helpful to install the Terraform AzAPI provider extension in VSCode, as it makes development easier with IntelliSense completion.
The ‘parent_id’ is the resource ID of the NetApp account.
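Here is a hedged sketch of such a backup policy resource, assuming the NetApp account is managed elsewhere in the configuration as azurerm_netapp_account.anf; the API version, policy name, and retention values are illustrative and should be checked against the latest Microsoft.NetApp schema:

```hcl
resource "azapi_resource" "anf_backup_policy" {
  type      = "Microsoft.NetApp/netAppAccounts/backupPolicies@2022-05-01"
  name      = "backup-policy-daily"
  location  = "australiaeast"
  parent_id = azurerm_netapp_account.anf.id # resource ID of the NetApp account

  # The AzAPI provider versions available at the time of writing take the
  # resource body as a JSON string.
  body = jsonencode({
    properties = {
      enabled              = true
      dailyBackupsToKeep   = 7
      weeklyBackupsToKeep  = 4
      monthlyBackupsToKeep = 12
    }
  })
}
```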
Because we are deploying this in the Australia East region, which has support for availability zones, the Azure storage account used will be configured with zone-redundant storage (ZRS), as documented under Requirements and considerations for Azure NetApp Files backup. In the Azure Portal, within the volume context, it will look like the following:
(!) Note
Currently, Azure NetApp Files backup supports backing up the daily, weekly, and monthly local snapshots created by the associated snapshot policy to the Azure storage account.
The first snapshot created when the backup feature is enabled is called a baseline snapshot, and its name includes the prefix ‘snapmirror’.
Assigning a backup policy to an Azure NetApp Files volume
The next step in the process is to assign the backup policy to an Azure NetApp Files volume. Once again, as this is not yet supported by the AzureRM provider, we will use `azapi_update_resource`, as it allows us to manage just the resource properties we need on the existing volume. Additionally, it uses the same authentication methods as the AzureRM provider. In this case, the configuration looks like the following, where a data protection block is added to the volume configuration.
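The sketch below assumes the volume is managed as azurerm_netapp_volume.example and reuses the backup policy from the earlier sketch; the property names follow the volume's dataProtection schema but should be verified against the API version you pin:

```hcl
resource "azapi_update_resource" "volume_backup" {
  type        = "Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-05-01"
  resource_id = azurerm_netapp_volume.example.id

  body = jsonencode({
    properties = {
      dataProtection = {
        backup = {
          backupEnabled  = true
          policyEnforced = true
          backupPolicyId = azapi_resource.anf_backup_policy.id
          # Some API versions also expect a vaultId here; check the
          # volumeBackupProperties definition for the version you use.
        }
      }
    }
  })
}
```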
The data protection policy will look like the screenshot below, indicating that the specified volume is fully protected within the region.
AzAPI to AzureRM migration
At some point, the resources created using the AzAPI provider will become available in the AzureRM provider, which is the recommended way to provision infrastructure as code in Azure. To make code migration a bit easier, Microsoft has provided the AzAPI2AzureRM migration tool.
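For comparison, once a native resource is available in the AzureRM provider, the migrated backup policy might look roughly like the sketch below; the resource type and argument names are assumptions for illustration, since no such resource existed at the time of writing:

```hcl
resource "azurerm_netapp_backup_policy" "example" {
  name                = "backup-policy-daily"
  resource_group_name = "rg-anf-demo"
  location            = "australiaeast"
  account_name        = azurerm_netapp_account.anf.name

  enabled                 = true
  daily_backups_to_keep   = 7
  weekly_backups_to_keep  = 4
  monthly_backups_to_keep = 12
}
```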
Summary
The Terraform AzAPI provider is a tool to deploy Azure features that have not yet been integrated into the AzureRM Terraform provider. As we see more adoption of preview features in Azure NetApp Files, this new functionality gives us the deployment support to manage zero-day and preview features, such as Azure NetApp Files backup and more.
Commercial and public sector organizations continue to look for new ways to advance their goals, improve efficiencies, and create positive employee experiences. The rise of the digital workforce and the current economic environment compels organizations to utilize public cloud applications to benefit from efficiency and cost reduction.
As the world started unwinding from the pandemic, Sarah joined thousands of travelers in booking flights for a long-planned vacation. Realizing she forgot to add her food preferences, she immediately opened a chat on the airline portal. After a long wait, the virtual agent told her to try again later. All the human support agents were busy. She returned after a few hours and started another chat, but she was too late. The chat service had ended for the day. Super upset and frustrated, she left the chat giving the lowest rating.
Busy queues often lead to lower CSAT
Service delivery organizations dread scenarios like Sarah’s, where the customer must wait a long time in the queue, abandons the attempt with a low satisfaction rating, or, worst of all, both. Managing workloads effectively during periods of peak demand is a frequent problem for all businesses. Service organizations face the additional challenge of support requests that arrive outside of business hours. Companies are looking for ways to enhance their customers’ experience to drive higher CSAT, and efficient queue overflow management is an essential part of that.
Introducing queue overflow management in Microsoft Dynamics 365 Customer Service
With queue overflow management, businesses can proactively manage overflow when callers are experiencing abnormally long wait times or a queue has many unassigned work items.
Corrective actions are specific to service channels or modalities. During peak demand, organizations can transfer calls to a different queue or to voicemail or offer to call the customer back later. Similarly, conversations and records can be transferred from an overflowing queue to a different queue.
How to set up queue overflow management
In the Customer Service admin center, select Queues > Advanced queues. Select Set overflow conditions in the Overflow management tile.
Then define the conditions that will determine whether the queue is overflowing and what action to take if it is.
Overflow evaluation happens before a work item is added to the queue. You can think of it as a sort of “pre-queueing” step.
How queue overflow management would make Sarah a happy customer
Let’s return to our excited traveler Sarah to learn how queue overflow management would help the airline avoid a dissatisfied customer. She’s bought her tickets and initiated a chat to add a meal preference to her reservation. The airline now has two queues for customer chats. With customers already holding in one, queue overflow management automatically routes Sarah to the other, where there’s no wait.
Investigate queue overflow events with routing diagnostics
Dynamics 365 Customer Service captures information about queue overflow events in routing diagnostics. Admins can use the information to understand failure scenarios and plan their business workflows accordingly.
Prepare better to serve better
It’s hard to predict peak demand events. Queue overflow management can help. Admins and supervisors are better prepared for contingencies, and customers like Sarah get a faster resolution to their issues.
The Azure Arc team is excited to announce the general availability of automatic VM extension upgrades for Azure Arc-enabled servers. VM extensions allow customers to easily include additional capabilities on their Azure Arc-enabled servers. Extension capabilities range from collecting log data with Azure Monitor, to extending your security posture with Azure Defender, to deploying a hybrid runbook worker for Azure Automation. Over time, these VM extensions get updated with security enhancements and new functionality. Maintaining high availability of these services during upgrades can be a challenging, manual task, and the complexity only grows as the scale of your service increases.
With automatic VM extension upgrades, extensions are automatically upgraded by Azure Arc whenever a new version of an extension is published. Automatic extension upgrade is designed to minimize service disruption to workloads during upgrades, even at high scale, and to automatically protect customers against zero-day and critical vulnerabilities.
How does this work?
Gone are the days of manually checking for and scheduling updates to the VM Extensions used by your Azure Arc-enabled servers. When a new version of an extension is published, Azure will automatically check to see if the extension is installed on any of your Azure Arc-enabled servers. If the extension is installed, and you’ve opted into automatic upgrades, your extension will be queued for an upgrade.
The upgrades across all eligible servers are rolled out in multiple iterations, where each iteration contains a subset of servers (about 20% of all eligible servers). Each iteration has a randomly selected set of servers and can contain servers from one or more Azure regions. During the upgrade, the latest version of the extension is downloaded to each server, the current version is removed, and finally the latest version is installed. Once all the extensions in the current phase are upgraded, the next phase begins. If the upgrade fails on any VM, a rollback to the previous stable extension version is triggered immediately: the failed extension is removed and the last stable version is installed. The rolled-back VM is then included in the next phase to retry the upgrade. You’ll see an event in the Azure Activity Log when an extension upgrade is initiated.
How do I get started?
No user action is required to enable automatic extension upgrade. When you deploy an extension to your server, automatic extension upgrades are enabled by default. All your existing ARM templates, Azure Policies, and deployment scripts will honor the default selection. However, you have the option to opt out during, or at any time after, extension installation on the server.
After an extension is installed, you can verify whether it is enabled for automatic upgrade by looking at the “Automatic upgrade status” column in the Azure Portal. The Azure Portal can also be used to opt in to or out of auto upgrades by first selecting the extensions using the checkboxes and then clicking the “Enable Automatic Upgrade” or “Disable Automatic Upgrade” button, respectively.
You can also use Azure CLI and Azure PowerShell to view the automatic extension upgrade status and to opt in or opt out. You can learn more in the Azure documentation.
What extensions & regions are supported?
A limited set of extensions is currently supported for automatic extension upgrade. Extensions not yet supported for auto upgrade show a status of “Not supported” in the “Automatic upgrade status” column. Refer to the Azure documentation for the complete list of supported extensions.
All public Azure regions are currently supported. Azure Arc-enabled servers connected to any public Azure region are eligible for automatic upgrades.
Upcoming enhancements
We will gradually add support for many more of the extensions available on Azure Arc-enabled servers.
Skill-based routing automatically assigns the agents with the right skills to work on customer support requests. With skill-based routing in Microsoft Dynamics 365 Customer Service, your call center can reduce the number of queues it operates, improve agents’ productivity, and increase customers’ satisfaction.
To get the most out of skill-based routing, it should be easy to onboard your agents with the right set of skills, proficiencies, and queues. It should be just as easy to modify your workforce configuration to keep up with the ever-changing demands on your call center. With the October 2022 release, Dynamics 365 Customer Service introduces a new skills hub and an enhanced user management experience that helps you onboard and manage the agents in your workforce more efficiently than ever before.
One-stop skill-based routing in the new skills hub
Let’s look at a common scenario. Morgon is the call center administrator for Contoso Enterprises, a global e-commerce company. Morgon has observed a surge in service requests around “Returns” in the North American region. Customers are facing long wait times because of it. In response, Morgon wants to make two changes to his workforce.
First, he wants to boost the “Returns” skill and support the additional requests using agents who have lower proficiency in handling returns but can provide timely support. Second, he wants to move some agents from the Latin American queue to the North American queue to assist with the additional demand.
Previously, Morgon would have had to visit separate admin centers to accomplish these tasks. Now he can do everything in one place: the skills hub.
Skills, skill types, proficiency scales, and intelligent skill finder models are all important parts of skill-based routing. The new skills hub is the one-stop place to manage these attributes across your entire call center.
Here’s what you’ll find in the new skills hub:
An overview of all the skills you’ve configured in your call center and the number of users associated with each one
A single seamless flow to create a skill, add agents to it, and assign the agents’ proficiency in the skill
A simple way to add or remove multiple agents from a skill in just a few steps
An out-of-box proficiency scale to help you start using skill-based routing in as few steps as possible
An intuitive experience to create or modify proficiency scales
Enhanced user management
Along with skills, Customer Service uses queues and capacity profiles to efficiently route work requests to the agents best suited to handle them. With enhanced user management, you can easily view how your agents are configured across these attributes. Managing the attributes for multiple agents takes just a few simple steps.
Here are some highlights of the new user management experience:
A page that lists the skills, proficiency scales, queues, and capacity profiles of all agents in your organization
Search functionality to find agents with specific skills or other attributes
The ability to update the attributes of multiple agents at once
One place to manage skills, proficiencies, queues, and capacity profiles of users
You can even enable agents to participate in swarming requests as part of the collaboration features in Dynamics 365 Customer Service.
With the new skills hub and enhanced user management, call center administrators can now quickly configure their workforce and make changes on the fly to keep up with customers’ varying demands.
The skills hub and enhanced user management are available as a public preview in the Dynamics 365 Customer Service admin center for all organizations.
Next steps
To learn about the new features and try out their capabilities, read the documentation:
This report is provided “as is” for informational purposes only. The Department of Homeland Security (DHS) does not provide any warranties of any kind regarding any information contained herein. The DHS does not endorse any commercial product or service referenced in this bulletin or otherwise.
This document is marked TLP:WHITE–Disclosure is not limited. Sources may use TLP:WHITE when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release. Subject to standard copyright rules, TLP:WHITE information may be distributed without restriction. For more information on the Traffic Light Protocol (TLP), see http://www.cisa.gov/tlp.
Description
CISA received a benign 32-bit Windows executable file, a malicious dynamic-link library (DLL) and an encrypted file for analysis from an organization where cyber actors exploited vulnerabilities against Zimbra Collaboration Suite (ZCS). Four CVEs are currently being leveraged against ZCS: CVE-2022-24682, CVE-2022-27924, CVE-2022-27925 chained with CVE-2022-37042, and CVE-2022-30333. The executable file is designed to side-load the malicious DLL file. The DLL is designed to load and Exclusive OR (XOR) decrypt the encrypted file. The decrypted file contains a Cobalt Strike Beacon binary. The Cobalt Strike Beacon is a malicious implant on a compromised system that calls back to the command and control (C2) server and checks for additional commands to execute on the compromised system.
This artifact is a 32-bit executable file that has been identified as a version of vf_host.exe from Viewfinity and is benign. The file is used to side-load a DLL, vftrace.dll (058434852bb8e877069d27f452442167).
This artifact is a malicious 32-bit DLL file loaded by "vxhost.exe" (4109ac08bdc8591c7b46348eb1bca85d). The file is designed to search for and load an encrypted file, "%current directory%bin.config" (be2b0c387642fe7e8475f5f5f0c6b90a), if it is present on the compromised system. It decrypts the file using the hard-coded XOR key "0x401". The decrypted binary contains a Cobalt Strike Beacon DLL that has shellcode embedded inside its MZ header. It copies the Cobalt Strike Beacon DLL into a buffer and executes the shellcode.
Screenshots
Figure 1 – This screenshot illustrates code extracted from the malware where it loads and XOR-decrypts the encrypted file "bin.config" (be2b0c387642fe7e8475f5f5f0c6b90a) before it is executed in memory.
This file is decrypted and executed by "vftrace.dll" (058434852bb8e877069d27f452442167). It is a 32-bit Portable Executable (PE) DLL that has shellcode embedded inside its MZ header, which is located at the start of the file. When executed, the shellcode decrypts an embedded Beacon payload using the single-byte XOR key 0xC3. It executes the entry point of the decrypted payload in memory at runtime. The decrypted payload has been identified as a Cobalt Strike Beacon implant. During execution, it decodes its configuration using the single-byte XOR key 0x4f. The configuration contains the RSA public key, C2 server, communication protocol, and more. The parsed configuration data for the Cobalt Strike Beacon implant is displayed below in JSON format:
–Begin configuration in the Cobalt Strike Beacon–
{
  "BeaconType": [ "HTTPS" ],  ==> Beacon uses HTTPS to communicate
  "Port": 443,
  "SleepTime": 5000,  ==> Timing of C2 Beacons via Sleeptime and Jitter feature
  "MaxGetSize": 1403644,
  "Jitter": 20,  ==> Jitter value to force Beacon to randomly modify its sleep time. Jitter of 20 means that there is a random jitter of 20% of 5000 milliseconds
  "MaxDNS": "Not Found",
  "PublicKey": "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDApWEZn8vYHYN/JiXoF72xGpWuxdZ7gGRYn6E7+mFmsVDSzImL7GTMXrllB4TM6/oR+WDKk0L+8elLel63FXPQ3d3K/t1/8dnYBLpjPER+/G/iu2viAN+6KEsQfKA3O6ZvABg9/uH86G2erow7Ik4a2VinucYSkKJ8jYV1yfeDzQIDAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",  ==> Publickey to encrypt communications
  "PublicKey_MD5": "9b96180552065cdf6cc42f8ba6f43f8b",
  "C2Server": "207[.]148[.]76[.]235,/jquery-3.3.1.min.js",
  "UserAgent": "Mozilla/4.1 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36",
  "HttpPostUri": "/jquery-3.3.2.min.js",
  "Malleable_C2_Instructions": [
    "Remove 1522 bytes from the end",
    "Remove 84 bytes from the beginning",
    "Remove 3931 bytes from the beginning",
    "Base64 URL-safe decode",
    "XOR mask w/ random key"
  ],
  "HttpGet_Metadata": {
    "ConstHeaders": [
      "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
      "Referer: http://code.jquery.com/",
      "Accept-Encoding: gzip, deflate"
    ],
    "ConstParams": [],
    "Metadata": [
      "base64url",
      "prepend "__cfduid="",
      "header "Cookie""
    ],
    "SessionId": [],
    "Output": []
  },
  "HttpPost_Metadata": {
    "ConstHeaders": [
      "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
      "Referer: http://code.jquery.com/",
      "Accept-Encoding: gzip, deflate"
    ],
    "ConstParams": [],
    "Metadata": [],
    "SessionId": [
      "mask",
      "base64url",
      "parameter "__cfduid""
    ],
    "Output": [
      "mask",
      "base64url",
      "print"
    ]
  },
  "SpawnTo": "AAAAAAAAAAAAAAAAAAAAAA==",
  "PipeName": "Not Found",
  "DNS_Idle": "Not Found",
  "DNS_Sleep": "Not Found",
  "SSH_Host": "Not Found",
  "SSH_Port": "Not Found",
  "SSH_Username": "Not Found",
  "SSH_Password_Plaintext": "Not Found",
  "SSH_Password_Pubkey": "Not Found",
  "SSH_Banner": "",
  "HttpGet_Verb": "GET",
  "HttpPost_Verb": "POST",
  "HttpPostChunk": 0,
  "Spawnto_x86": "%windir%syswow64dllhost.exe",
  "Spawnto_x64": "%windir%sysnativedllhost.exe",
  "CryptoScheme": 0,
  "Proxy_Config": "Not Found",
  "Proxy_User": "Not Found",
  "Proxy_Password": "Not Found",
  "Proxy_Behavior": "Use IE settings",
  "Watermark": 1234567890,
  "bStageCleanup": "True",
  "bCFGCaution": "False",
  "KillDate": 0,
  "bProcInject_StartRWX": "False",
  "bProcInject_UseRWX": "False",
  "bProcInject_MinAllocSize": 17500,
  "ProcInject_PrependAppend_x86": [ "kJA=", "Empty" ],
  "ProcInject_PrependAppend_x64": [ "kJA=", "Empty" ],
  "ProcInject_Execute": [
    "ntdll:RtlUserThreadStart",
    "CreateThread",
    "NtQueueApcThread-s",
    "CreateRemoteThread",
    "RtlCreateUserThread"
  ],
  "ProcInject_AllocationMethod": "NtMapViewOfSection",
  "ProcInject_Stub": "s7YR+gVAMtA1Jtjf0KV/Cw==",  ==> the Base64 encoded MD5 file hash of the Cobalt Strike
  "bUsesCookies": "True",
  "HostHeader": "",
  "smbFrameHeader": "AAWAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
  "tcpFrameHeader": "AAWAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
  "headersToRemove": "Not Found",
  "DNS_Beaconing": "Not Found",
  "DNS_get_TypeA": "Not Found",
  "DNS_get_TypeAAAA": "Not Found",
  "DNS_get_TypeTXT": "Not Found",
  "DNS_put_metadata": "Not Found",
  "DNS_put_output": "Not Found",
  "DNS_resolver": "Not Found",
  "DNS_strategy": "round-robin",
  "DNS_strategy_rotate_seconds": -1,
  "DNS_strategy_fail_x": -1,
  "DNS_strategy_fail_seconds": -1
}
–End configuration in the Cobalt Strike Beacon–
It is designed to use a jQuery (JavaScript library) malleable C2 profile for communication to evade detection. It attempts to send a GET request to its C2 server with metadata in the "__cfduid" cookie header that contains information about the compromised system, such as the username, computer name, operating system (OS) version, the name of the malware executing on the victim's system, and other details. The metadata in the cookie header is encrypted and encoded.
The RSA public key used to encrypt the metadata, before it is encoded using the NetBIOS (uppercase) and Base64 encoding algorithms, is the "PublicKey" value shown in the configuration above.
Analysis indicates that the C2 server will respond to the above HTTP GET request with encrypted data that contains commands, which the malware will decrypt and execute to perform additional functions. The C2 server response payload was not available for analysis.
Displayed below are sample functions built into the malware:
–Begin commands–
Make and change directory
Copy, move, remove files to the specified destination
Download and upload files
List drives on victim's system
Lists files in a folder
Enable system privileges
Kills the specified process
Show running processes
Binds the specified port on the victim's system
Disconnect from a named pipe
Process injection
Service creation
–End commands–
Screenshots
Figure 2 – The screenshot of the shellcode embedded in the MZ header.
Accounts Payable teams around the globe spend hours processing invoices that come from different channels: fax, mail, email, handwritten, and electronic data interchange (EDI). The sheer volume of invoices is a difficult burden to overcome, and the number grows exponentially with the number of office locations. Resource-strained finance teams must manually transfer invoice details to the enterprise resource planning (ERP) system. This error-prone process increases finance cycle times, delays closing the books, and prevents timely financial statements. Misplaced invoices and delayed payments cost organizations thousands in lost vendor discounts. Wouldn’t it be great if all that manual work could be automated? With the new invoice capture feature in Microsoft Dynamics 365 Finance, it can.
Invoice capture automates the entire AP invoice-to-pay process using artificial intelligence (AI) and machine learning (ML) technologies, namely optical character recognition (OCR) and robotic process automation (RPA). With invoice capture, employees scan invoices as they arrive, OCR extracts critical data, and the system matches invoices with purchase orders, identifies exceptions and data errors, and updates financial records. Employees can quickly route coded invoices for approval through workflows that follow rules based on the invoice data and amount.
Invoice capture helps control spending and reduce paperwork
Invoice capture delivers the following benefits:
Spend control: Automated processing and better audit trails deliver more real-time visibility into spend and better reporting, which results in faster responses to time-sensitive vendor inquiries. This can help your business avoid late bill payments, take advantage of time-based discounts, and accelerate approvals.
Faster cycle times: Freeing your AP teams from manual data entry reduces errors, trims weeks from the payment cycle, and allows the team to focus on more strategic tasks like improving vendor relationships, optimizing sourcing contracts, and negotiating deeper discounts.
Paperless AP: Say goodbye to filing cabinets, lost invoices, and printer jams. With a fully digitized process, not only do you reduce your carbon footprint and printing costs, but documents are more secure.
Key capabilities of invoice capture
Invoice data extraction from multiple channels: Invoice capture offers flexible configuration settings that allow you to automate invoice processing, regardless of how invoices come across your desk, whether they’re faxed, EDI, or even handwritten, and consolidate them centrally for approvals.
Empowered by AI Builder and Azure Form Recognizer: Invoice capture contains a prebuilt AI model powered by Microsoft AI Builder and Azure Form Recognizer, which can process most invoice formats from all over the world without extra model training effort. The Microsoft AI Builder continuously improves the model.
Custom AI model for invoice processing: When business complexity prevents the prebuilt model from recognizing an invoice format, invoice capture provides a way to build custom models to supplement the prebuilt model.
Intelligent and flexible business rule engine: Sometimes the AP team needs more information than the basic details on the invoice to make the right decision. It’s often helpful to have information from the supply chain or bank account details, for instance. Using an intelligent business rule engine, you can define derivations and validation rules to accommodate the complexity of your vendor invoice processing. This helps streamline accounts payable automation and relieves the AP team of repetitive work, allowing them to focus on more value-added tasks.