New transactable offers from Signly, Tessell, and Varonis in Azure Marketplace

New transactable offers from Signly, Tessell, and Varonis in Azure Marketplace

This article is contributed. See the original author and article here.

Microsoft partners like Signly, Tessell, and Varonis deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:

Signly.png

Signly SLaaS: Signly sign language as a service (SLaaS), a fully managed solution powered by Microsoft Azure, makes it easy to provide access to sign language by capturing the text of a web page and sending it to highly qualified deaf sign language translators. Translated content is then available for all users, enabling website owners to provide improved service for deaf customers.


Tessell.png

Tessell – Migrate and Manage Oracle on Azure: Tessell is a fully managed database as a service (DBaaS) designed to enable Oracle databases to thrive on Microsoft Azure by delivering enterprise-grade functionality coupled with consumer-grade experience. Tessell makes deploying Oracle databases on Azure simple and elegant, taking care of your data infrastructure and data management needs for both Oracle Enterprise Edition and Standard Edition 2.


Varonis.png

Varonis – Find, Monitor, and Protect Sensitive Data: Is your midsize or large organization trying to understand where your sensitive data is, who has access to it, and what users are doing with it? The Varonis platform protects your data with low-touch, accurate security outcomes by classifying more data, revoking permissions, enforcing policies, and triggering alerts for the Varonis incident response team to review on your behalf.


3 ways collaborative apps like Workday in Microsoft Teams boost engagement and productivity

3 ways collaborative apps like Workday in Microsoft Teams boost engagement and productivity

This article is contributed. See the original author and article here.

Enterprises are increasingly turning to collaborative apps to enhance workplace engagement and productivity. That presents an opportunity for independent software vendors (ISVs) to earn customer loyalty by building easily accessible enterprise apps with rich features that deliver business value.

The post 3 ways collaborative apps like Workday in Microsoft Teams boost engagement and productivity appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Propelling the Aerodynamics of Enterprise Innovation: Announcing the Microsoft AI SDK for SAP ABAP

Propelling the Aerodynamics of Enterprise Innovation: Announcing the Microsoft AI SDK for SAP ABAP

This article is contributed. See the original author and article here.

 Linkedin_Banner.jpg


 


 


We are excited to announce the launch of Microsoft AI SDK for SAP ABAP. This software development kit (SDK) is designed to provide SAP ABAP developers with the tools they need to create intelligent enterprise applications using Artificial Intelligence (AI) technologies.


 





















  • Git Repository Location: AI SDK for SAP ABAP (github.com)
  • Documentation: AI SDK for SAP Documentation
  • Discussions: Discussions · GitHub
  • Issues: AI SDK for SAP ABAP: Issue Reporting

 


Engineered with a deep understanding of developers’ needs, the Microsoft AI SDK for SAP ABAP presents an intuitive interface that effortlessly brings AI capabilities to your ABAP applications. This toolkit offers an exciting avenue to tap into the power of Azure OpenAI. And this is just the beginning — our commitment to progress promises the inclusion of even more AI engines in future versions.


 


Azure OpenAI, the crown jewel of Microsoft Azure’s offerings, is a powerhouse of AI services and tools. It is your passport to harnessing machine learning algorithms, leveraging advanced natural language processing tools, and exploring versatile cognitive services. Its vast suite of tools paves the way for the creation of intelligent applications that excel in pattern detection, natural language processing, and data-driven predictions. Azure OpenAI grants you access to an array of pre-built AI models and algorithms, along with custom model training and deployment tools, all under the umbrella of stringent security, compliance, and data privacy standards.


 


With the AI SDK for SAP ABAP and Azure OpenAI integration with SAP, developers are on the brink of a new frontier. Now you have the power to craft innovative applications that can revolutionize the enterprise landscape by automating mundane tasks, bolstering smarter business decisions, and providing a more personalized customer experience. It’s more than a development kit — it’s your passport to an exciting future of technological evolution for enterprises running on the SAP platform.


 


Features:


The Microsoft AI SDK for SAP ABAP v1.0 is not just a toolset, it’s an innovation accelerator, an efficiency propellant. Designed for ABAP developers, it supercharges their workflows, slashing the time taken to integrate cutting-edge AI capabilities. With its streamlined integration process and ABAP-ready data types, developers can fast-track their tasks and concentrate on their real mission – crafting intelligent, transformative applications. This is no ordinary toolkit; it’s your express lane to the future of enterprise software development.


 



  • Extensive Capabilities: It provides a comprehensive suite of functionalities, including Models, Deployment, Files, Fine-Tuning, and Completion (GPT3), along with Chat Completion (GPT4) capabilities.


  • ABAP-Ready Data Types: We’ve simplified the integration process for ABAP developers by offering ABAP-ready data types. This feature substantially lowers the entry barriers, enabling developers to leverage the SDK with ease.


  • Azure OpenAI Support: The SDK is fully compatible with Azure OpenAI, ensuring seamless integration and performance.


  • Enterprise Control: To safeguard sensitive data, we’ve incorporated a robust enterprise control mechanism, offering three levels of control granularity. Enterprises can effectively manage SDK usage by implementing policies to permit or block specific functionalities. For instance, an organization could use authorizations to designate a user group capable of performing setup operations (Deployment, Files, and Fine-Tuning), while enabling all users to utilize the Completions functionality.


  • Flexible Authentication: The SDK supports authentication using either Azure OpenAI Keys or Azure Active Directory (AAD), providing users with a secure and flexible approach to authentication.


 


In this age of relentless technological progress, AI is undeniably the cornerstone of enterprise software development’s future. The Microsoft AI SDK for SAP ABAP is a dynamic and transformative tool, purpose-built for SAP professionals. It’s not just a toolkit; it’s a supercharger for your innovative instincts, enabling you to build intelligent, data-centric applications. Our aim is to help businesses stay nimble and competitive in a marketplace where the pace of innovation is breakneck.


The launch of the Microsoft AI SDK for SAP ABAP is a leap into the future. It encapsulates our commitment to fostering the symbiotic relationship between technology and business, nurturing an environment where the opportunities for innovation are limitless. As we unfurl this state-of-the-art tool, we can’t wait to see the inventive applications that you, the talented developers working within the SAP ecosystem, will craft. The potential is staggering, poised to redefine how businesses operate and flourish.


 


And our commitment doesn’t stop at providing you with the tools. We pledge unwavering support on your journey of discovery and innovation with the Microsoft AI SDK for SAP ABAP. We’re with you every step of the way — to guide, support, and celebrate as you traverse this transformative technological landscape. Let’s stride boldly together into this new era of intelligent, data-driven enterprise solutions. The future is here, and it’s brighter than ever.


 


Best Regards,


Gopal Nair – Principal Software Engineer, Microsoft – Author


Amit Lal – Principal Technical Specialist, Microsoft  – Contributor


 


Join us and share your feedback: Azure Feedback




#MicrosoftAISDK #AISDKforSAPABAP #EnterpriseGPT #GPT4 #AzureOpenAI #SAPonAzure #SAPABAP


 


Disclaimer: The announcement of the Microsoft AI SDK for SAP ABAP is intended for informational purposes only. Microsoft reserves the right to make adjustments or changes to the product, its features, availability, and pricing at any time without prior notice. This blog does not constitute a legally binding offer or guarantee of specific functionalities or performance characteristics. Please refer to the official product documentation and agreements for detailed information about the product and its use. Microsoft is deeply committed to the responsible use of AI technologies. It is recommended to review and comply with all applicable laws, regulations, and organizational policies to ensure the responsible and ethical use of AI.

Azure Policy Violation Alert using Logic apps

Azure Policy Violation Alert using Logic apps

This article is contributed. See the original author and article here.

There are numerous articles about using a logic app as the notification action for Azure log alerts, but most of them are either very brief or don’t go into detail about all the nuances, tips, and tricks. I therefore wanted to write one with as much detail as I could, along with some fresh additional strategies. I hope this aids in developing the logic and putting it into practice.


 


So let’s get going. We already know that we need to create the Alert rule and choose the logic app as its action, and that the logic app needs the “When an HTTP request is received” trigger to receive the alert notification. So let’s construct one.


 


Vineeth_Marar_0-1683951928868.png


 


We can use the below sample schema for the above trigger task


 


 


{
    "type": "object",
    "properties": {
        "schemaId": {
            "type": "string"
        },
        "data": {
            "type": "object",
            "properties": {
                "essentials": {
                    "type": "object",
                    "properties": {
                        "alertId": { "type": "string" },
                        "alertRule": { "type": "string" },
                        "severity": { "type": "string" },
                        "signalType": { "type": "string" },
                        "monitorCondition": { "type": "string" },
                        "monitoringService": { "type": "string" },
                        "alertTargetIDs": {
                            "type": "array",
                            "items": { "type": "string" }
                        },
                        "configurationItems": {
                            "type": "array",
                            "items": { "type": "string" }
                        },
                        "originAlertId": { "type": "string" },
                        "firedDateTime": { "type": "string" },
                        "description": { "type": "string" },
                        "essentialsVersion": { "type": "string" },
                        "alertContextVersion": { "type": "string" }
                    }
                },
                "alertContext": {
                    "type": "object",
                    "properties": {
                        "properties": {},
                        "conditionType": { "type": "string" },
                        "condition": {
                            "type": "object",
                            "properties": {
                                "windowSize": { "type": "string" },
                                "allOf": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "searchQuery": { "type": "string" },
                                            "metricMeasureColumn": {},
                                            "targetResourceTypes": { "type": "string" },
                                            "operator": { "type": "string" },
                                            "threshold": { "type": "string" },
                                            "timeAggregation": { "type": "string" },
                                            "dimensions": { "type": "array" },
                                            "metricValue": { "type": "integer" },
                                            "failingPeriods": {
                                                "type": "object",
                                                "properties": {
                                                    "numberOfEvaluationPeriods": { "type": "integer" },
                                                    "minFailingPeriodsToAlert": { "type": "integer" }
                                                }
                                            },
                                            "linkToSearchResultsUI": { "type": "string" },
                                            "linkToFilteredSearchResultsUI": { "type": "string" },
                                            "linkToSearchResultsAPI": { "type": "string" },
                                            "linkToFilteredSearchResultsAPI": { "type": "string" }
                                        },
                                        "required": [
                                            "searchQuery",
                                            "metricMeasureColumn",
                                            "targetResourceTypes",
                                            "operator",
                                            "threshold",
                                            "timeAggregation",
                                            "dimensions",
                                            "metricValue",
                                            "failingPeriods",
                                            "linkToSearchResultsUI",
                                            "linkToFilteredSearchResultsUI",
                                            "linkToSearchResultsAPI",
                                            "linkToFilteredSearchResultsAPI"
                                        ]
                                    }
                                },
                                "windowStartTime": { "type": "string" },
                                "windowEndTime": { "type": "string" }
                            }
                        }
                    }
                },
                "customProperties": {}
            }
        }
    }
}


 


 


However, as you can see, the output above isn’t sufficient to provide a thorough error message for the notification. To retrieve that message, we must perform a few additional tasks.


 


The same query that was used in the Alert rule can be run again with additional filtering options to produce the error code and message shown below.


 


Vineeth_Marar_1-1683951928872.png


 


 


 


The aforementioned query serves as an example of how to extract the error message using multiple iterations from the Properties field.


 


Now, initialise the variables as shown below.


 


Vineeth_Marar_2-1683951928875.png


 


We must create four “Initialise Variable” tasks, one each for “Runquery,” “Owner,” “HTMLtable,” and “Authorise,” choosing “String” as the type for each.
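For reference, an “Initialise Variable” action looks roughly like the sketch below in the Logic App’s code view (a minimal sketch; the variable name shown is just one of the four, so repeat the action for the others):

{
    "type": "InitializeVariable",
    "inputs": {
        "variables": [
            {
                "name": "Owner",
                "type": "string",
                "value": ""
            }
        ]
    },
    "runAfter": {}
}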


 


Also keep in mind that the List query result may contain multiple logs.  Therefore, we will use a foreach loop to go through each error log one at a time and send notifications for each one.  Let’s create the following foreach task to accomplish that.


 


Vineeth_Marar_3-1683951928877.png


 


The result of the “Run query and list result” task is the value.


 


The next step is to retrieve the current Log from a variable we previously initialised.  Let’s now set the value for that variable using the current item from the Foreach task.


 


Vineeth_Marar_4-1683951928879.png


 




 


Parse this variable into JSON so that we can use its field values in subsequent tasks.   To obtain the schema for this task, simply run the logic app to get the output of that variable, then copy the output and paste it into the sample payload link of the task below.


 


Vineeth_Marar_5-1683951928881.png
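As a rough sketch, and assuming the current log was stored in the “Runquery” variable, the Parse JSON action might look like this in code view. The property names in the schema here are only illustrative; use the schema generated from your own sample payload:

{
    "type": "ParseJson",
    "inputs": {
        "content": "@variables('Runquery')",
        "schema": {
            "type": "object",
            "properties": {
                "TimeGenerated": { "type": "string" },
                "SubscriptionId": { "type": "string" },
                "Properties": { "type": "string" }
            }
        }
    }
}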


 


 


Our actual goal is to send an email or notification for each error log.  In this instance, the owner of the subscription will receive the email containing the reported error or violation.


 


Because the query is run once more after the alert fires, we must make sure we capture only the logs that triggered the alert, which are defined by the Alert rule’s evaluation window.  So let’s add a condition to gather only those logs.


 


To ensure that the TimeGenerated field falls between the Alert rule trigger’s window start and end times, we will create a Condition task and select the TimeGenerated value from the aforementioned “Parse JSON” task.


 


Vineeth_Marar_6-1683951928884.png
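In code view, that Condition might look something like the following sketch (the Parse JSON action name and field names are illustrative and depend on how your tasks are named; the window times come from the trigger schema shown earlier):

{
    "type": "If",
    "expression": {
        "and": [
            {
                "greaterOrEquals": [
                    "@body('Parse_JSON')?['TimeGenerated']",
                    "@triggerBody()?['data']?['alertContext']?['condition']?['windowStartTime']"
                ]
            },
            {
                "lessOrEquals": [
                    "@body('Parse_JSON')?['TimeGenerated']",
                    "@triggerBody()?['data']?['alertContext']?['condition']?['windowEndTime']"
                ]
            }
        ]
    }
}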


 


Now, if the condition is true, we can move on to obtaining the owner user or users’ information.  If you have numerous subscriptions and also want to display the Subscription Name in your notification, let’s use an HTTP action for an API GET call.   Use the API link as shown below, with the SubscriptionID from the current query’s Parse JSON task.


 


Vineeth_Marar_7-1683951928887.png


 


You can choose the logic app’s Managed Identity as the authentication type.  Before setting up this task, select Identity from the logic app’s menu, enable the system-assigned Managed Identity, and grant it Reader permission on each subscription.
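As a sketch, the HTTP action for this call could look like the following in code view. The api-version shown is just an example, and the expression that supplies the subscription ID depends on your Parse JSON task’s name and fields:

{
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://management.azure.com/subscriptions/@{body('Parse_JSON')?['SubscriptionId']}?api-version=2020-01-01",
        "authentication": {
            "type": "ManagedServiceIdentity",
            "audience": "https://management.azure.com/"
        }
    }
}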


 


Run the logicapp now to obtain the results of the aforementioned API request.  To have the attributes of a subscription, copy the output and paste it into the sample payload for the subsequent Parse JSON task.


 


Vineeth_Marar_8-1683951928890.png


 


 


We now need to list the subscription’s role assignments so that we can filter the users down to the Owners.  Let’s make another HTTP action for an API GET task to accomplish that.


 


Vineeth_Marar_9-1683951928893.png
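A minimal sketch of that HTTP action, again assuming the subscription ID comes from the earlier Parse JSON task and using an illustrative api-version:

{
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://management.azure.com/subscriptions/@{body('Parse_JSON')?['SubscriptionId']}/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01",
        "authentication": {
            "type": "ManagedServiceIdentity",
            "audience": "https://management.azure.com/"
        }
    }
}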


 


 


Let’s run the logicapp once more to obtain the results of this API task, then copy and paste them into the sample payload for the subsequent Parse JSON task in order to obtain the schema.   Make sure the Content you choose is the API task’s body from above.


 


Vineeth_Marar_10-1683951928896.png


 


 


We now have every role assignment on the subscription for the current log.   To send the notification, however, we only need the Owner user.  Therefore, we must use a Foreach task once more to go through each user and find the Owner.   The output of the previous Parse JSON task serves as the value for this loop.


 


Vineeth_Marar_11-1683951928898.png


 


Let’s now put the details of the current user into a variable.  Keep in mind that we previously initialised the variable “Owner.”  Create a Set Variable task now to set its value, making sure the value is the current item of the Foreach task.


 


Vineeth_Marar_12-1683951928900.png


 


To get the attribute values of the current user for later use, we must now parse the Variable into JSON.


 


Vineeth_Marar_13-1683951928902.png


 


 


To obtain the schema, run the logic app once more, copy the output of the aforementioned variable, and paste it into the sample payload link above.


 


To identify the Owner user, we must now obtain the Owner role’s definition ID (which is common across Azure).  To obtain it, go to your subscription’s IAM (Access control), click Role assignments, select any Owner, and open the JSON tab.   However, you can also use PowerShell/CLI.   Alternatively, you can validate the Owner’s role definition ID from the output of the aforementioned “Parse JSON for Users” task after running the logic app.  Copy that value for later use.


 


Vineeth_Marar_14-1683951928904.png


 


 


Vineeth_Marar_15-1683951928906.png


 


 


You can also copy just the GUID portion from the ID value.


 


To select only the Owner user for subsequent tasks, we must now create a Condition task to filter the users.   The ID field from the “Parse JSON for current user” task should be used as the condition field.


 


Vineeth_Marar_16-1683951928908.png
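Expressed in code view, the filter can be a simple contains check of the current role assignment’s roleDefinitionId against the Owner role definition ID you copied (shown here only as a placeholder, and with an illustrative action name):

{
    "type": "If",
    "expression": {
        "and": [
            {
                "contains": [
                    "@body('Parse_JSON_for_current_user')?['properties']?['roleDefinitionId']",
                    "<owner-role-definition-id>"
                ]
            }
        ]
    }
}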


 


 


The most crucial thing to keep in mind at this point is that we must run a Graph API query to obtain user attributes such as email and UPN; the results of the API queries so far are insufficient for that.   To access those user attributes, we need the following permissions in Azure AD: an SPN (app registration) must be created, the API permissions below must be granted, and admin consent must be provided.

Permission  Type
Directory.AccessAsUser.All  Delegated
Directory.ReadWrite.All  Delegated
Directory.ReadWrite.All  Application
Group.ReadWrite.All  Delegated
Group.ReadWrite.All  Application
User.Read  Delegated
User.Read.All  Delegated
User.Read.All  Application
User.ReadWrite.All  Delegated
User.ReadWrite.All  Application



 


Additionally, copy the App ID and Tenant ID, create a secret, and copy the secret value for the subsequent task.


 


To run a Graph API query, we must now add the following HTTP action for the API task.  To obtain information about the current user, use the principalId from the “Parse JSON for current user” task in the request.


 


Vineeth_Marar_17-1683951928910.png


 


 


Choose the Authentication parameter and enter the Tenant ID, App ID (Client ID), and Secret copied from the SPN.
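Putting that together, the Graph API call might look like the sketch below in code view. The tenant ID, client ID, and secret are the values copied from the app registration, and the expression that supplies the principalId depends on how your Parse JSON task is named:

{
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://graph.microsoft.com/v1.0/users/@{body('Parse_JSON_for_current_user')?['properties']?['principalId']}",
        "authentication": {
            "type": "ActiveDirectoryOAuth",
            "authority": "https://login.microsoftonline.com",
            "tenant": "<tenant-id>",
            "audience": "https://graph.microsoft.com",
            "clientId": "<client-id>",
            "secret": "<client-secret>"
        }
    }
}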


 


Create a new “Parse JSON” task for the output of the API call above.  To obtain the schema, run the logic app once more, copy the output of that API task, and paste it into the Parse JSON task’s sample payload.


 


Vineeth_Marar_18-1683951928913.png


 


 


We should now prepare a good format for the notification to appear in the email.  We’ll use an HTML table for that, filled with information from the query above (such as the error code, error message, severity, and subscription name).  Although you are free to use your own format, you can use the sample provided by this github link (attached below) as a guide.  Choose the HTMLtable variable (the initialise variable we created earlier) and use the “Set Variable” task to paste in the value from the example HTML code I’ve attached below.


 


<>


 


Vineeth_Marar_19-1683951928915.png


 


 


 


Update the fields/values as indicated below in the code at the appropriate lines/locations.


 


Vineeth_Marar_20-1683951928917.png


 


 


 


 


After that, a “Send an email (V2)” task can be created with the Office 365 Outlook connector to send the notification.


 


Vineeth_Marar_21-1683951928919.png


 


 


 


You will receive an email as below.


 


Vineeth_Marar_22-1683951928939.png


 


 


Before we go any further, make sure your Alert rule in Azure Monitor has been created and the aforementioned logicapp has been selected as the action. Make sure the error/administration diagnostic logs are enabled to send to the Log analytics workspace for all subscriptions.   If you want to set up separate alert rules for “Error” and “Critical,” create them separately and choose the same logicapp as the action.  Here is just a sample.


 


Vineeth_Marar_23-1683951928942.png


 


 


And the Condition query should be as below (you can modify as per your requirement)


 


Vineeth_Marar_24-1683951928945.png


 


 


 


The evaluation of the log analytics workspace (activity logs) will be performed every 5 minutes, and if any policy violation errors are discovered, an alert will be sent.  The Logic app will be activated as soon as the Alert is fired, and the Owner of the resource subscription will receive a notification email in the format shown above with all necessary information.


 


I hope you enjoyed reading this, and happy learning. 

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 1  

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 1  

This article is contributed. See the original author and article here.

One aspect of Dynamics 365 Field Service’s adaptability lies in the flexibility of its ‘status’ functionality, driving a meaningful work order lifecycle tailored to each organization. Field Service helps manage work orders and bookings across diverse use cases. This blog series explores critical status concepts: 

  • System Status and Substatus for Work Orders 
  • Booking Status and Field Service Status for Bookings 
  • Booking Status Impact on Work Orders Across Single and Multi-Booking Scenarios 

Grasping these concepts allows organizations to leverage the solution’s functionality, optimize field service processes, and ultimately provide better customer service. 

This blog will expand upon many of the concepts discussed in the existing work order and booking status documentation: Work order life cycle and statuses – Dynamics 365 Field Service | Microsoft Learn 

Work Order Status Concepts: System Status and Substatus 

System Status 

Work orders in Dynamics 365 Field Service have a column called System Status which helps organizations manage their field service processes efficiently. There are six values available for System Status: 

  • Unscheduled: Work order has been created, but resources have not been assigned. 
  • Scheduled: Work order has resources assigned and is ready for execution. 
  • In Progress: Work order is currently being executed by the assigned resources. 
  • Completed: Work order has been executed and finished. 
  • Posted: Work order has been invoiced and is now closed. 
  • Cancelled: Work order has been cancelled and will not be executed. 

As the documentation highlights, an organization must use this field as is, because these values allow the Field Service solution to interpret the current state of the work order record and apply appropriate behaviors and validations. If this list is changed, it could cause many issues, both immediately and unanticipated ones down the line. New values in this list would not be interpretable by the solution. However, the Field Service solution has a powerful related concept that provides infinite flexibility and can be mapped directly to these System Status values. 

Substatus 

In Dynamics 365 Field Service, the Substatus table plays a crucial role in providing organizations with the ability to create meaningful states that are mapped to System Statuses. One noteworthy feature of the Substatus table is the option to define a “Default” Substatus for each mapped System Status. This default Substatus will be automatically applied when a work order transitions into the corresponding System Status through the out-of-the-box (OOTB) logic.  

The Default Substatus feature within the Substatus table allows organizations to streamline their work order management process by automatically applying a predefined Substatus when a work order moves into a particular System Status using the out-of-the-box logic. This helps ensure consistency across work orders while still allowing for customization and adaptability when needed.

For example, if your organization has a default Substatus of “Pending Customer Confirmation” for the System Status “Scheduled,” any work order that moves into the “Scheduled” System Status due to the standard logic will automatically be assigned the “Pending Customer Confirmation” Substatus. This helps maintain consistency and simplify the management of work orders, especially when dealing with a high volume of work orders. 

It’s important to note that if a work order already has a different Substatus applied within the same System Status, the default Substatus will not be applied. This means that, if an organization adds custom logic to set a Substatus, the default logic will not override it. The existing Substatus will remain in place, allowing organizations to maintain flexibility and customization for specific work order situations. It is also worth noting that any custom logic is still subject to the allowed System Status validations (for example: the work order cannot be forced into a Scheduled System Status if there are no bookings). 

Further, direct updates to the Substatus field will drive updates to the work order’s System Status, within the allowable System Status changes. For example, using some of the Substatus records proposed below, if a work order is in the “Technician Assignment Pending” Substatus, which maps to the Unscheduled System Status, a user could change the Substatus directly to “Customer Cancelled,” which will immediately move the System Status of the work order to Cancelled. It is worth noting that the default form UI should filter the available Substatus values to allowed changes, based on the current state of the work order and the mapped System Status of the Substatus. In this example, none of the Substatuses that map to the Scheduled or In Progress System Statuses would have shown up in the UI. They would have been dynamically filtered out so that a user couldn’t make a choice that wouldn’t have been allowed. 

Example: Substatus Records with Mapped System Status 

The following are an example set of possible meaningful Work Order Substatuses. These Substatuses will be mapped to the appropriate System Status to help drive actions and behaviors in the system while communicating meaningful information to anyone who looks at the work order in Dynamics 365 Field Service. 

Substatus  Mapped System Status  Default Substatus  What it communicates to the user who glances at the work order 
Technician Assignment Pending  Unscheduled  Yes  The work order has been created, but no technician has been assigned yet. 
Awaiting Parts  Unscheduled  No  The work order requires special parts which are on order and the required parts are pending delivery to the job site or warehouse. 
Pending Customer Confirmation  Scheduled  Yes  The work order has been tentatively scheduled and the booking is awaiting confirmation from the customer regarding their preferred appointment time or other necessary details. 
Appointment Confirmed  Scheduled  No  The work order has been scheduled and the customer has confirmed the appointment time. Every reasonable effort should be made to meet the commitment made with the customer. 
Remote Support Scheduled  Scheduled  No  The work order has been scheduled for remote support, such as a software installation or configuration. 
Service In Progress  In Progress  Yes  The technician is performing the service. 
Work Order Completed Successful  Completed  Yes  The work order has been successfully completed, and the scope of the work order has been resolved. 
Work Order Unresolved  Completed  No  The bookings have been completed, but the scope of the work order has not been resolved. Additional action may be required, such as escalating the issue to a higher-level technician or recommending alternative solutions to the customer. 
Work Order Invoiced  Posted  Yes  The work order has been invoiced, and the billing process is complete. 
Customer Cancelled  Cancelled  Yes  The work order has been cancelled by the customer. 
Resolved Remotely  Cancelled  No  The work order has been cancelled because the issue was able to be resolved remotely by customer service. 

Conclusion

While these are examples of Substatuses which may be valuable, each organization can create their own, set the defaults that make sense, and map them to relevant System Status values. 

Next up in the blog series –

Part 2 – Booking Status Concepts: Booking Status and Field Service Status
Part 3 – Booking status impact on work orders across single and multi-booking scenarios 

The post Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 1   appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 2  

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 2  

This article is contributed. See the original author and article here.

Continuing our 3-part series exploring Dynamics 365 Field Service’s adaptability using critical status concepts. In our last blog, we covered Work Order Status Concepts: System Status and Substatus.

This blog explores the concept of:

  • Booking Status and Field Service Status for Bookings 

Grasping these concepts allows organizations to leverage the solution’s functionality, optimize field service processes, and ultimately provide better customer service. 

Booking Status Concepts: Booking Status and Field Service Status 

Before delving into the concepts of Booking Status and Field Service Status, it is important to understand the distinction between a work order and a booking in Dynamics 365 Field Service.  

A work order represents a scope of work to be performed for a customer. It includes the required services, the location of the work, type of resources, and other relevant information to complete the job. It also serves as a document which tracks how the scope of work is closed including what products and services were required, what tasks were completed, and other relevant information which someone may want to know about the work. Work orders are essential for organizing and managing service delivery, and their status changes as they progress through various stages, from creation to completion. 

On the other hand, a booking is a scheduled appointment or time slot that is associated with a work order. It is an essential component of the scheduling process, as it assigns a specific technician or resource to perform the services outlined in the work order. While work orders focus on the overall service request, bookings represent the individual appointments: the intersection of a specific time and duration with the assigned resource needed to fulfill the request. Each work order can have multiple bookings, allowing for more complex jobs to be split across multiple appointments or technicians. 

Booking Status 

Bookings for work orders in Dynamics 365 Field Service also have two critical status concepts. The first is Booking Status, which is a record that allows organizations to define their own meaningful statuses for bookings. By customizing Booking Status, organizations can better reflect their specific field service workflows and processes. 

Field Service Status 

The second critical concept for bookings is the Field Service Status value on Booking Status records. This status allows organizations to map their custom meaningful statuses to one of the six key values that the Field Service solution can interpret while driving important solution logic.  

SubStatus - Onsite bookings

These six key values are: 

  • Scheduled: The booking has been scheduled, and the resources are assigned. 
  • Traveling: The field service resources are en route to the job site. 
  • In Progress: The booking is currently being executed by the assigned resources. 
  • On Break: The field service resources are taking a break during the booking. 
  • Completed: The booking has been successfully executed and finished. 
  • Cancelled: The booking has been cancelled and will not be executed. 

By mapping their custom Booking Status values to the Field Service Status values, organizations ensure seamless integration between their unique processes and the overall Field Service solution. 

Example: Booking Status Records with Mapped Field Service Status 

For a Booking Status to be usable on a Booking which is related to a Work Order, the system expects the Booking Status to have a Field Service Status value. The following are an example set of meaningful Booking Status records. These Booking Statuses will be mapped to the appropriate Field Service Status to help drive actions and behaviors in the system while communicating meaningful information to anyone who looks at the booking in Dynamics 365 Field Service. 

Booking Status  Mapped Field Service Status  What it communicates to the user who glances at the booking 
Proposed Time  Scheduled  A proposed appointment time has been suggested for the booking, but it may still be subject to change or require further confirmation from the customer or technician. 
Confirmed with Customer  Scheduled  The appointment time has been confirmed with the customer, and the booking is set to proceed as planned. 
En Route  Traveling  The assigned technician is currently traveling to the job site or customer location to begin work on the booking. 
Lunch  On Break  The assigned technician is currently taking a lunch break or a short pause during their work schedule. 
On Site  In Progress  The assigned technician has arrived at the job site or customer location and has started working on the booking. 
Work Completed  Completed  The assigned technician has successfully finished the work on the booking. 
Finished – Parts Required  Completed  The technician is leaving but the work is partially complete and additional parts are needed to finish the job. 
Finished Helper Needed  Completed  The work is partially complete and the technician requires assistance from another team member to finish the job. 
Cancelled by Customer  Cancelled  The customer has cancelled the booking. 
Cancelled by Tech  Cancelled  The technician has cancelled the booking, possibly due to unforeseen circumstances or scheduling conflicts. 

This blog expands upon many of the concepts discussed in the existing work order and booking status documentation: Work order life cycle and statuses – Dynamics 365 Field Service | Microsoft Learn 

Next up in the blog series –

Part 3 – Booking status impact on work orders across single and multi-booking scenarios 

The post Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 2   appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 3  

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 3  

This article is contributed. See the original author and article here.

Completing our 3-part series exploring Dynamics 365 Field Service’s adaptability using critical status concepts. We have already covered Work Order Status Concepts: System Status and Substatus and Booking Status Concepts: Booking Status and Field Service Status.

This blog explores the concept of:

  • Booking status impact on work orders across single and multi-booking scenarios 

Grasping these concepts allows organizations to leverage the solution’s functionality, optimize field service processes, and ultimately provide better customer service. 

Booking Status Impact on Work Order System Status 

In addition to the status concepts explained earlier, it is essential to understand how the status of a booking, defined by Booking Status and interpreted by the Booking Status’ Field Service Status, drives the status of a work order, which can have more than one booking. This relationship plays a critical role in the efficient management of work orders and bookings in Dynamics 365 Field Service. 

Single Booking Impact on Work Order System Status 

When there is only one booking present: 

  1. If the booking is created and its Booking Status maps to the Field Service Status of Scheduled, the work order automatically moves to the System Status of Scheduled. 
  2. When the booking is updated to a Booking Status mapping to the Field Service Status of Traveling, In Progress, or On Break, the work order automatically moves to the System Status of In Progress. 
  3. When the booking is updated to a Booking Status mapping to the Field Service Status of Completed, the work order automatically moves to the System Status of Completed. 
  4. If the booking is updated to a Booking Status mapping to the Field Service Status of Cancelled, the work order automatically moves back to the System Status of Unscheduled. 

Multiple Bookings Impact on Work Order System Status 

When there is more than one booking present, the work order expresses the System Status related to the most active Booking Status (as interpreted by its set Field Service Status). The priorities for determining the Work Order System Status are as follows: 

  1. Highest Priority: Field Service Statuses that put a Work Order into the System Status of In Progress (Traveling, In Progress, and On Break). If any of the bookings are in these statuses, the Work Order will be in the System Status of In Progress. 
  2. Second Priority: Field Service Status that puts a Work Order into the System Status of Scheduled (Scheduled). If none of the bookings are in the highest priority statuses, but at least one is in the Scheduled status, the Work Order will be in the System Status of Scheduled. 
  3. Third Priority: Field Service Status that puts a Work Order into the System Status of Completed (Completed). If none of the bookings are in higher priority statuses and at least one is in the Completed status, the Work Order will be in the System Status of Completed. 
  4. Lowest Priority: The Field Service Status of Cancelled does not drive the Work Order into any System Status. Bookings in this state are effectively ignored as if they don’t exist from a Work Order System Status perspective. 

By understanding and managing the relationship between Booking Status and Work Order System Status, organizations can effectively coordinate their field service resources and ensure that work orders are updated accurately and efficiently. This knowledge allows for better decision-making, improved workflows, and ultimately a higher level of service for customers. Embrace the power of Dynamics 365 Field Service’s flexible status functionality and take your organization’s work order and booking management to new heights. 

Use Case 1: Single Booking Work Order 

Contoso Services, a field service company, receives a work order to repair a customer’s air conditioning unit.  

  • When the work order is initially created, it has a System Status of Unscheduled.  
  • Once a technician is booked to the work order, their Booking is created with a Booking Status of “Proposed Time” which maps to the Field Service Status of Scheduled. Consequently, the work order automatically moves to the System Status of Scheduled. 
  • As the technician begins traveling to the job site, the booking is updated to the Booking Status of “En Route” which maps to the Field Service Status of Traveling. This update causes the work order to move to the System Status of In Progress.  
  • As the technician moves the booking into the Booking Status of “Onsite” which maps to the Field Service Status of In Progress, the Work Order’s System Status doesn’t change, staying in In Progress.  
  • Of note, while this doesn’t have an impact on the Work Order’s System Status, if updated to this status from the mobile device, it does automatically update the Booking’s “Actual Arrival Time” and the Work Order’s “First Arrived On” values. 
  • Eventually, the technician completes the repair, and the booking is updated to the Booking Status of “Work Completed” which maps to the Field Service Status of Completed. This change results in the work order moving to the System Status of Completed. 
  • This will also update the Booking’s “End Time” and the Work Order’s “Completed On” values. 

Use Case 2: Multiple Booking Work Order 

A customer requests a two-stage service from Contoso Services, which requires a different technician for each stage. The work order now has two separate bookings.  

  • Initially, both bookings are in the Booking Status of “Confirmed with Customer” which maps to the Field Service Status of Scheduled, and the work order is in the System Status of Scheduled. 
  • When the first technician starts traveling, their booking’s status updates to “En Route” which is mapped to the Field Service Status of Traveling, so the work order’s System Status changes to In Progress.  
  • After the first technician completes their work, their booking status is changed to “Work Completed.” 
  • However, the second booking is still in the “Confirmed with Customer” booking status, which maps to the Field Service Status of Scheduled, so the work order reverts to the System Status of Scheduled. Because Scheduled has a higher priority than Completed, the remaining Scheduled booking is what is expressed on the Work Order. 
  • Once the second technician starts traveling to the job site, their booking status changes to Traveling, and the work order updates to the System Status of In Progress.  
  • When the second technician finishes their work, their booking status is updated to Work Completed. Now, since both bookings are set to a booking status that has the Field Service Status of Completed, the work order moves to the System Status of Completed. 

Conclusion 

Understanding and leveraging the power of status functionality in Dynamics 365 Field Service, including System Status and Substatus on Work Orders and Booking Status and Field Service Status on Bookings, is crucial for organizations looking to optimize their field service processes. By understanding how they work and customizing these statuses to suit their specific needs, organizations can streamline their workflows, increase efficiency, and ultimately deliver better service to their customers.  

Start harnessing the power of Dynamics 365 Field Service’s adaptable status functionality today to unlock your organization’s full potential in managing work orders and bookings.

Read previous blogs from this series.

Part 1 – Work Order Status Concepts: System Status and Substatus
Part 2 – Booking Status Concepts: Booking Status and Field Service Status

The post Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 3   appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Printing Labels Using External Service with Dynamics 365 – Warehouse Management

Printing Labels Using External Service with Dynamics 365 – Warehouse Management

This article is contributed. See the original author and article here.

Live in Dynamics 365 Supply Chain Management


Introduction

Barcodes and shipping labels are essential components in the supply chain landscape. They play a vital role in ensuring accurate inventory management, product tracking, and streamlining processes. Shipping labels are particularly important for navigating shipments through complex global supply chains while maintaining end-to-end traceability. QR codes have also become a valuable tool for companies to engage customers and track their products worldwide.

With the 10.0.34 release, Supply Chain Management (SCM) has become even more robust, offering seamless integrations with third-party labelling solutions out-of-the-box.

Microsoft has partnered with Seagull Scientific BarTender and Loftware NiceLabel to enhance core Dynamics 365 SCM labeling capabilities and alleviate common pain points faced by many organizations. 

This enhancement further strengthens the capabilities of SCM in managing barcodes and shipping labels effectively.

This feature enables direct interaction between Microsoft Dynamics 365 Supply Chain Management and third-party solutions by providing a framework for communicating via HTTP APIs, without requiring the Document Routing Agent (DRA).

What capabilities does this unlock? 

Integrating third-party labelling solutions is important for several reasons:

  • Label design: It provides user-friendly interfaces for designing custom labels, allowing businesses to create labels that meet their specific requirements and comply with industry standards. It includes possibilities to design labels with barcode or QR codes.
  • Printer compatibility: These labelling solutions support a wide range of printers, enabling businesses to print labels on various devices without compatibility issues. This flexibility ensures that labels can be printed efficiently and accurately, regardless of the printer being used.
  • Automation: It offers automation capabilities, allowing businesses to streamline their labelling processes and reduce manual intervention. By integrating with Dynamics 365 SCM, businesses can automate label printing based on specific triggers or events within the SCM system.
  • Centralized management: It provides centralized management tools that enable businesses to control and monitor their entire labelling process from a single location. Integration with Dynamics 365 SCM ensures that businesses can manage their supply chain and labelling operations cohesively.
  • RFID technology support: It supports RFID encoding for various RFID tag types and frequencies, ensuring compatibility with a wide range of RFID systems, as well as management of RFID-enabled labels for enhanced tracking and data management.

In conclusion, Microsoft Dynamics 365 SCM now provides a quick and simple method for linking Dynamics 365 SCM to many of the most popular enterprise labeling platforms. With Microsoft Dynamics 365 SCM’s seamless integration and flexible configuration options, implementation is pain-free and rapid. It allows for a seamless flow of communication and transactions to optimize your printing workflow.


Learn more

Print labels using an external service – Supply Chain Management | Dynamics 365 | Microsoft Learn

Print labels using the Loftware NiceLabel label service solution – Supply Chain Management | Dynamics 365 | Microsoft Learn

Print labels using the Seagull Scientific BarTender label service solution – Supply Chain Management | Dynamics 365 | Microsoft Learn

Not yet a Supply Chain Management customer? Take a guided tour.

The post Printing Labels Using External Service with Dynamics 365 – Warehouse Management appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

How Copilot in Microsoft Dynamics 365 and Power Platform delivers enterprise-ready AI built for security and privacy

How Copilot in Microsoft Dynamics 365 and Power Platform delivers enterprise-ready AI built for security and privacy

This article is contributed. See the original author and article here.

Over the past few months, the world has been captivated by generative AI and applications like the new chat experience in Bing, which can generate original text responses from a simple prompt written in natural language. With the introduction of generative AI across Microsoft business applications, including Microsoft Dynamics 365, Viva Sales, and Power Platform, interactions with AI across business roles and processes will become second nature. With Copilot, Microsoft Dynamics 365 and Power Platform introduce a new way to generate ideas and content drafts, and methods to access and organize information across the business.

Before your business starts using Copilot capabilities in Dynamics 365 and Power Platform, you may have questions about how it works, how it keeps your business data secure, and other important considerations. The answers to common questions below should help your organization get started.

What’s the difference between ChatGPT and Copilot?

ChatGPT is a general-purpose large language model (LLM) trained by OpenAI on a massive dataset of text, designed to engage in human-like conversations and answer a wide range of questions on various topics. Copilot also uses an LLM; however, the enterprise-ready AI technology is prompted and optimized for your business processes, your business data, and your security and privacy requirements. For Dynamics 365 and Microsoft Power Platform users, Copilot suggests optional actions and content recommendations in context with the task at hand. A few ways Copilot for natural language generation is unique:

  • The AI-generated responses are uniquely contextual and relevant to the task at hand, informed by your business data, whether responding to an email from within Dynamics 365, deploying a low-code application that automates a specific manual process, or creating a targeted list of customer segments from your customer relationship management (CRM) system.
  • Copilot uses both an LLM, like GPT, and your organization’s business data to produce more accurate, relevant, and personalized results. In short, your business data stays within your tenancy and is used to improve context only for your scenario, and the LLM itself does not learn from your usage. More on how the system works is below.
  • Powered by Microsoft Azure OpenAI Service, Copilot is designed from the ground up on a foundation of enterprise-grade security, compliance, and privacy.

Read on for more details about these topics. 

How does Copilot in Dynamics 365 and Power Platform work?

With Copilot, Dynamics 365 and Power Platform harness the power of foundation models coupled with proprietary Microsoft technologies applied to your business data:

  • Search (using Bing and Microsoft Azure Cognitive Search): Brings domain-specific context to a Copilot prompt, enabling a response to integrate information from content like manuals, documents, or other data within the organization’s tenant. Currently, Microsoft Power Virtual Agent and Dynamics 365 Customer Service use this retrieval-augmented generation approach as pre-processing to calling an LLM.
  • Microsoft applications like Dynamics 365, Viva Sales, and Microsoft Power Platform and the business data stored in Microsoft Dataverse.
  • Microsoft Graph: Microsoft Graph API brings additional context from customer signals into the prompt, such as information from emails, chats, documents, meetings, and more.

An illustration of Copilot technologies that harness the power of foundation models using an LLM, Copilot, Microsoft Graph, Search, and Microsoft applications like Dynamics 365 and Microsoft Power Platform.

Copilot receives an input prompt from a business user in an app, like Microsoft Dynamics 365 Sales or Microsoft Power Apps. Copilot then preprocesses the prompt through an approach called grounding, which improves the specificity of the prompt, so you get answers that are relevant and actionable to your specific task. It does this, in part, by making a call to Microsoft Graph and Dataverse and accessing the enterprise data that you have consented to and granted permission to use for retrieving your business content and context. We also scope the grounding to documents and data that are visible to the authenticated user through role-based access controls. For instance, an intranet question about benefits would only return an answer based on documents relevant to the employee’s role.

This retrieval of information is referred to as retrieval-augmented generation and allows Copilot to provide exactly the right type of information as input to an LLM, combining this user data with other inputs such as information retrieved from knowledge base articles to improve the prompt. Copilot takes the response from the LLM and post-processes it. This post-processing includes additional grounding calls to Microsoft Graph, responsible AI checks, security, compliance and privacy reviews, and command generation.

Finally, Copilot returns a recommended response to the user, and commands back to the apps where a human-in-the-loop can review and assess. Copilot iteratively processes and orchestrates these sophisticated services to produce results that are relevant to your business, accurate, and secure.

How does Copilot use your proprietary business data? Is it used to train AI models?

Copilot unlocks business value by connecting LLMs to your business data in a secure, compliant, privacy-preserving way.

Copilot has real-time access to both your content and context in Microsoft Graph and Dataverse. This means it generates answers anchored in your business content (your documents, emails, calendar, chats, meetings, contacts, and other business data) and combines them with your working context (the meeting you’re in now, the email exchanges you’ve had on a topic, the chat conversations you had last week) to deliver accurate, relevant, contextual responses.

We, however, do not use customers’ data to train LLMs. We believe the customers’ data is their data, aligned to Microsoft’s data privacy policy. AI-powered LLMs are trained on a large but limited corpus of data, but prompts, responses, and data accessed through Microsoft Graph and Microsoft services are not used to train Dynamics 365 Copilot and Power Platform Copilot capabilities for use by other customers. Furthermore, the foundation models are not improved through your usage. This means your data is accessible only by authorized users within your organization unless you explicitly consent to other access or use.

Are Copilot responses always factual?

Responses produced with generative AI are not guaranteed to be 100 percent factual. While we continue to improve responses to fact-based inquiries, people should still use their judgement when reviewing outputs. Our copilots leave you in the driver’s seat, while providing useful drafts and summaries to help you achieve more.

Our teams are working to address issues such as misinformation and disinformation, content blocking, data safety and preventing the promotion of harmful or discriminatory content in line with our AI principles.

We also provide guidance within the user experience to reinforce the responsible use of AI-generated content and actions. To help guide users on how to use Copilot, as well as properly use suggested actions and content, we provide:  

Instructive guidance and prompts. When using Copilot, informational elements instruct users how to responsibly use suggested content and actions, including prompts, to review and edit responses as needed prior to usage, as well as to manually check facts, data, and text for accuracy.

Cited sources. Copilot cites public sources when applicable so you’re able to see links to the web content it references.

How does Copilot protect sensitive business information and data?

Microsoft is uniquely positioned to deliver enterprise-ready AI. Powered by Azure OpenAI Service, Copilot features built-in responsible AI and enterprise-grade Azure security.

Built on Microsoft’s comprehensive approach to security, compliance, and privacy. Copilot is integrated into Microsoft services like Dynamics 365, Viva Sales, Microsoft Power Platform, and Microsoft 365, and automatically inherits all your company’s valuable security, compliance, and privacy policies and processes. Two-factor authentication, compliance boundaries, privacy protections, and more make Copilot the AI solution you can trust.

Architected to protect tenant, group, and individual data. We know data leakage is a concern for customers. LLMs are not further trained on, or learn from, your tenant data or your prompts. Within your tenant, our time-tested permissions model provides safeguards and enterprise-grade security as seen in our Azure offerings. And on an individual level, Copilot presents only data you can access using the same technology that we’ve been using for years to secure customer data.

Designed to learn new skills. Copilot’s foundation skills are a game changer for productivity and business processes. The capabilities allow you to create, summarize, analyze, collaborate, and automate using your specific business content and context. But it doesn’t stop there. Copilot recommends actions for the user (for example, “create a time and expense application to enable employees to submit their time and expense reports”). And Copilot is designed to learn new skills. For example, with Viva Sales, Copilot can learn how to connect to CRM systems of record to pull customer data, like interaction and order histories, into communications. As Copilot learns about new domains and processes, it will be able to perform even more sophisticated tasks and queries.

Will Copilot meet requirements for regulatory compliance mandates?

Copilot is offered within the Azure ecosystem and thus our compliance follows that of Azure. In addition, Copilot adheres to our commitment to responsible AI, which is described in our documented principles and summarized below. As regulation in the AI space evolves, Microsoft will continue to adapt and respond to fulfill future regulatory requirements in this space.


Next-generation AI across Microsoft business applications

With next-generation AI, interactions with AI across business roles and processes will become second nature.

Committed to responsible AI

Microsoft is committed to creating responsible AI by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are helping our customers use our AI products responsibly, sharing our learnings, and building trust-based partnerships. For these new services, we provide our customers with information about the intended uses, capabilities, and limitations of our AI platform service, so they have the knowledge necessary to make responsible deployment choices.  


The post How Copilot in Microsoft Dynamics 365 and Power Platform delivers enterprise-ready AI built for security and privacy appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Running OpenFOAM simulations on Azure Batch

Running OpenFOAM simulations on Azure Batch

This article is contributed. See the original author and article here.

OpenFOAM (Open Field Operation and Manipulation) is an open-source computational fluid dynamics (CFD) software package. It provides a comprehensive set of tools for simulating and analyzing complex fluid flow and heat transfer phenomena. It is widely used in academia and industry for a range of applications, such as aerodynamics, hydrodynamics, chemical engineering, environmental simulations, and more.



Azure offers services like Azure Batch and Azure CycleCloud that can help individuals or organizations run OpenFOAM simulations effectively and efficiently. In both cases, these services allow users to create and manage clusters of VMs, enabling parallel processing and scaling of OpenFOAM simulations. While CycleCloud provides an experience similar to on-premises clusters thanks to its support for common schedulers like OpenPBS or Slurm, Azure Batch provides a cloud-native resource scheduler that simplifies the configuration, maintenance, and support of your required infrastructure.



This article covers a step-by-step guide on a minimal Azure Batch setup to run OpenFOAM simulations. Further analysis should be performed to identify the right sizing both in terms of compute and storage. A previous article on How to identify the recommended VM for your HPC workloads could be helpful.



Step 1: Provisioning required infrastructure



To get started, create a new Azure Batch account. A pool, job, or task is not required at this point. In this scenario, the pool allocation method is configured as “User Subscription” and public network access is set to “All Networks”.



Shared storage across all nodes is also required to distribute the input model and store the outputs. This guide uses an Azure Files NFS share. Alternatives like Azure NetApp Files or Azure Managed Lustre could also be an option based on your scalability and performance needs.
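
If you prefer to script this step, a minimal Azure CLI sketch is shown below. The resource names, region, and quota are illustrative assumptions, and a Batch account in “User Subscription” pool allocation mode also needs an associated Key Vault, which is easiest to set up through the portal wizard.

# Illustrative names only; adjust region, names, and quota to your environment.
RG=rg-openfoam
LOC=westeurope

az group create --name $RG --location $LOC

# Batch account. The "User Subscription" allocation mode and its required
# Key Vault link can be configured during creation (portal or --keyvault).
az batch account create --name openfoambatch --resource-group $RG --location $LOC

# Premium FileStorage account and an NFS 4.1 file share for the shared data.
# NFS shares require secure transfer (HTTPS only) to be disabled.
az storage account create --name openfoamdatastg --resource-group $RG \
  --location $LOC --sku Premium_LRS --kind FileStorage --https-only false
az storage share-rm create --resource-group $RG --storage-account openfoamdatastg \
  --name data --quota 1024 --enabled-protocols NFS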



Step 2: Customizing the virtual machine image



OpenFOAM provides pre-compiled binaries packaged for Ubuntu that can be installed through its official APT repositories. If Ubuntu is your distribution of choice, you can follow the official documentation on how to install it; a pool’s start task is a good place to run that installation. As an alternative, you can create a custom image with everything already pre-configured.
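
For reference, the Ubuntu route typically looks like the following sketch, for example inside a pool start task running with elevated rights. The repository script and package name follow the openfoam.com packaging for v2212; check the official documentation for the release you need.

# Add the openfoam.com APT repository and install the pre-compiled v2212 packages.
# Run as root (e.g. an elevated start task); package names may change per release.
curl -s https://dl.openfoam.com/add-debian-repo.sh | bash
apt-get update
apt-get install -y openfoam2212-default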



This article covers the second option, using CentOS 7.9 as the base image to show the end-to-end configuration and compilation of the software from source code. To simplify the process, it relies on the available HPC images, which come with the required prerequisites already installed. The reference URN for those images is: OpenLogic:CentOS-HPC:s7_9-gen2:latest. The VM SKU used both to create the custom image and to run the simulations is HBv3.



Start the configuration by creating a new VM. After the VM is up and running, execute the following script to download and compile the OpenFOAM source code.

## Downloading OpenFoam
sudo mkdir /openfoam
sudo chmod 777 /openfoam
cd /openfoam
wget https://dl.openfoam.com/source/v2212/OpenFOAM-v2212.tgz
wget https://dl.openfoam.com/source/v2212/ThirdParty-v2212.tgz

tar -xf OpenFOAM-v2212.tgz
tar -xf ThirdParty-v2212.tgz

module load mpi/openmpi
module load gcc-9.2.0

## OpenFOAM requires CMake 3. CentOS 7.9 comes with an older version.
sudo yum install epel-release.noarch -y
sudo yum install cmake3 -y
sudo yum remove cmake -y
sudo ln -s /usr/bin/cmake3 /usr/bin/cmake

source OpenFOAM-v2212/etc/bashrc
foamSystemCheck
cd OpenFOAM-v2212/
./Allwmake -j -s -q -l


The last command compiles with all cores (-j), reduced output (-s, -silent), queuing (-q, -queue), and logging (-l, -log) of the output to a file for later inspection. After the initial compilation, review the output log or re-run the last command to make sure everything compiled without errors; the output is verbose enough that errors can be missed in a quick review.
The compilation takes a while to finish. After that, you can delete the installers and any other folders not required in your scenario and capture the image into a Shared Image Gallery (now called Azure Compute Gallery).
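
One possible CLI sequence for capturing the image is sketched below. The gallery, image definition, publisher, and VM names are placeholders, and the VM should be deprovisioned from inside the guest before it is generalized.

# Illustrative sketch: capture the build VM into an Azure Compute Gallery.
# Run "sudo waagent -deprovision+user" inside the VM first; names are placeholders.
RG=rg-openfoam

az vm deallocate --resource-group $RG --name vm-openfoam-build
az vm generalize --resource-group $RG --name vm-openfoam-build
az image create --resource-group $RG --name openfoam-v2212-centos79 \
  --source vm-openfoam-build --hyper-v-generation V2

az sig create --resource-group $RG --gallery-name openfoamgallery
az sig image-definition create --resource-group $RG --gallery-name openfoamgallery \
  --gallery-image-definition openfoam-centos79-hpc --publisher Contoso \
  --offer OpenFOAM --sku centos79-hbv3 --os-type Linux --hyper-v-generation V2
az sig image-version create --resource-group $RG --gallery-name openfoamgallery \
  --gallery-image-definition openfoam-centos79-hpc --gallery-image-version 1.0.0 \
  --managed-image openfoam-v2212-centos79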


 


Step 3. Batch pool configuration



Add a new pool to your previously created Azure Batch account. You can create a new pool using the standard wizard (Add), filling in the required fields with the values shown in the following JSON, or you can copy and paste this file into the Add (JSON editor).
Make sure you customize the placeholder properties; the custom image reference ID, the NFS mount source, and the subnet ID are left empty in the JSON below and must be replaced with your own values.


 

{
    "properties": {
        "vmSize": "STANDARD_HB120rs_V3",
        "interNodeCommunication": "Enabled",
        "taskSlotsPerNode": 1,
        "taskSchedulingPolicy": {
            "nodeFillType": "Pack"
        },
        "deploymentConfiguration": {
            "virtualMachineConfiguration": {
                "imageReference": {
                    "id": ""
                },
                "nodeAgentSkuId": "batch.node.centos 7",
                "nodePlacementConfiguration": {
                    "policy": "Regional"
                }
            }
        },
        "mountConfiguration": [
            {
                "nfsMountConfiguration": {
                    "source": "",
                    "relativeMountPath": "data",
                    "mountOptions": "-o vers=4,minorversion=1,sec=sys"
                }
            }
        ],
        "networkConfiguration": {
            "subnetId": "",
            "publicIPAddressConfiguration": {
                "provision": "BatchManaged"
            }
        },
        "scaleSettings": {
            "fixedScale": {
                "targetDedicatedNodes": 0,
                "targetLowPriorityNodes": 0,
                "resizeTimeout": "PT15M"
            }
        },
        "targetNodeCommunicationMode": "Simplified"
    }
}
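
If you want to script this step instead of using the portal, a short sketch with the Azure CLI could look like the following, assuming the JSON above is saved locally as pool.json and the account names match your environment.

# Authenticate the CLI against the Batch account, then create the pool
# from the JSON definition above (saved locally as pool.json).
az batch account login --name openfoambatch --resource-group rg-openfoam
az batch pool create --json-file pool.json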


Wait until the pool is created and the nodes are available to accept new tasks. Your pool view should look similar to the following image.


 


jangelfdez_0-1683902712739.png


 


Step 4. Batch Job Configuration



Once the pool allocation state is “Ready”, continue with the next step: create a new job. The default configuration is enough in this case. The job is called “flange” because we will use the flange example from the OpenFOAM tutorials.
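
The equivalent CLI call is a one-liner; the pool ID below is an assumption and should match whatever you named the pool in the previous step.

# Create the job on the existing pool.
az batch job create --id flange --pool-id openfoam-hbv3-pool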


 


jangelfdez_1-1683902721296.png


 


Step 5. Task Pool Configuration



Once the job state changes to “Active”, it is ready to accept new tasks. You can create a new task using the standard wizard (Add), filling in the required fields with the values shown in the following JSON, or you can copy and paste this file into the Add (JSON editor).



Make sure you customize the placeholder properties; the task ID is left empty in the JSON below and must be replaced with your own value.

{
  "id": "",
  "commandLine": "/bin/bash -c '$AZ_BATCH_NODE_MOUNTS_DIR/data/init.sh'",
  "resourceFiles": [],
  "environmentSettings": [],
  "userIdentity": {
    "autoUser": {
      "scope": "pool",
      "elevationLevel": "nonadmin"
    }
  },
  "multiInstanceSettings": {
    "numberOfInstances": 2,
    "coordinationCommandLine": "echo "Coordination completed!"",
    "commonResourceFiles": []
  }
}
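
As with the pool, the task can also be submitted from the CLI; this sketch assumes the JSON above is saved locally as task.json.

# Submit the multi-instance task to the "flange" job created in the previous step.
az batch task create --job-id flange --json-file task.json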


The task’s commandLine parameter is configured to execute a Bash script stored in the Azure Files share that Batch mounts automatically into the ‘$AZ_BATCH_NODE_MOUNTS_DIR/data’ folder. You first need to copy the following scripts (init.sh and run.sh) and the flange example mentioned above (packaged as flange.zip) into that directory.
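
One way to stage those files is to mount the share from any Linux VM with access to the same virtual network and copy them over. The mount options mirror the pool configuration; the storage account and share names below are placeholders.

# Mount the NFS share from a VM in the same VNet and stage the input files.
# Replace the storage account and share names with your own values.
sudo mkdir -p /mnt/data
sudo mount -t nfs openfoamdatastg.file.core.windows.net:/openfoamdatastg/data /mnt/data \
  -o vers=4,minorversion=1,sec=sys
cp init.sh run.sh flange.zip /mnt/data/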


 


Command Line Task Script


This script configures the environment variables and pre-processes the input files before launching the mpirun command, which executes the solver in parallel across all the available nodes; in this case, two nodes with 240 cores in total.


 

#! /bin/bash
source /etc/profile.d/modules.sh
module load mpi/openmpi

# Azure Files is mounted automatically in this directory based on the pool configuration
DATA_DIR="$AZ_BATCH_NODE_MOUNTS_DIR/data"
# OpenFOAM was installed in this folder
OF_DIR="/openfoam/OpenFOAM-v2212"

# A working folder for the case is prepared and the input data extracted there.
mkdir -p "$DATA_DIR/flange"
unzip -o "$DATA_DIR/flange.zip" -d "$DATA_DIR/flange"

# Configures OpenFoam environment
source "$OF_DIR/etc/bashrc"
source "$OF_DIR/bin/tools/RunFunctions"

# Preprocessing of the files
cd "$DATA_DIR/flange"
runApplication ansysToFoam "$OF_DIR/tutorials/resources/geometry/flange.ans" -scale 0.001
runApplication decomposePar

# Configure the host file
echo $AZ_BATCH_HOST_LIST | tr "," "\n" > hostfile
sed -i 's/$/ slots=120/g' hostfile

# Launching the secondary script to perform the parallel computation.
mpirun -np 240 --hostfile hostfile "$DATA_DIR/run.sh" > solver.log

 


Mpirun Processing Script



mpirun launches this script on every available node. It configures the environment variables and folders the solver needs to access. If this script is skipped and the solver is invoked directly in the mpirun command, only the primary task node has the right configuration applied and the rest of the nodes fail with file-not-found errors.


 

#! /bin/bash
source /etc/profile.d/modules.sh
module load gcc-9.2.0
module load mpi/openmpi

DATA_DIR="$AZ_BATCH_NODE_MOUNTS_DIR/data"
OF_DIR="/openfoam/OpenFOAM-v2212"

source "$OF_DIR/etc/bashrc"
source "$OF_DIR/bin/tools/RunFunctions"

# Execute the code across the nodes.
laplacianFoam -parallel > solver.log

Step 6. Checking the results


 


The mpirun output is redirected to a file called solver.log in the directory where the model is stored inside the Azure Files share. Checking the first lines of the log, you can validate that the execution started properly and is running on two HBv3 nodes with 240 processes.


 

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2212                                  |
|   \\  /    A nd           | Website:  www.openfoam.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : _66908158ae-20221220 OPENFOAM=2212 version=v2212
Arch : "LSB;label=32;scalar=64"
Exec : laplacianFoam -parallel
Date : May 04 2023
Time : 15:01:56
Host : 964d5ce08c1d4a7b980b127ca57290ab000000
PID : 67742
I/O : uncollated
Case : /mnt/resource/batch/tasks/fsmounts/data/flange
nProcs : 240
Hosts :
(
(964d5ce08c1d4a7b980b127ca57290ab000000 120)
(964d5ce08c1d4a7b980b127ca57290ab000001 120)
)
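
Once the solver finishes, you can merge the per-processor results back into a single case for post-processing. A minimal sketch, run from a node or VM where the share is mounted and OpenFOAM has been sourced, and assuming the case layout used in this guide:

# Inspect the end of the solver log and reconstruct the decomposed fields.
source /openfoam/OpenFOAM-v2212/etc/bashrc
cd /mnt/data/flange            # or $AZ_BATCH_NODE_MOUNTS_DIR/data/flange on a Batch node
tail -n 20 solver.log          # confirm the final time step and execution time
reconstructPar                 # merge the processor*/ directories for post-processing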

 


Conclusion



By leveraging Azure Batch’s scalability and flexible infrastructure, you can run OpenFOAM simulations at scale, achieving faster time-to-results and increased productivity. This guide demonstrated the process of configuring Azure Batch, customizing the CentOS 7.9 image, installing dependencies, compiling OpenFOAM, and running simulations efficiently in the cloud. With Azure’s powerful capabilities, researchers and engineers can unleash the full potential of OpenFOAM.