This article is contributed. See the original author and article here.
Advanced Configuration
In this final part of the Field Service Mobile offline blog series, we will discuss some of the more advanced configuration and recommendations for IT pros and partners to get the most out of their offline application.
Limit relationships to avoid slow-running data queries.
In addition to limiting the data being downloaded, it is also important to limit the complexity of expensive SQL queries that are run to fetch that data. Gains realized by reducing data can be offset by complex queries which take longer to run on the server. The following best practices can be considered when defining relationships:
If your data model includes several levels of relationships generating multiple joins across tables, consider using simple filters like ‘all records’; it can be faster to download more data up front as part of the one-time initial sync so that the more frequent delta syncs will be faster without the complex queries.
If using time-based filters to reduce records, it is recommended to use time ranges with fixed boundaries. For the most efficient sync experience, you could include a fixed time window of last month, the current month, and next month. If you require more dynamic time-based filtering, filter using Created on in the last N days. Using these filtering techniques will help support downloading only recent, relevant data for the Frontline Worker.
Avoid using both custom data filters and selecting relationships on the same table. This will result in complex queries impacting performance.
NOTE: Be aware that a custom filter is combined with relationships using OR, and each relationship adds another OR as well.
Avoid self-joins, where a table makes a circular reference to itself within custom filters.
If using time-based calendar items that result in downloading many related records and files, consider reducing that time window to reduce the total data download.
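The fixed time-window guidance above can be sketched in code. The helper below is a hypothetical illustration (it is not part of the offline profile configuration itself) of how fixed boundaries covering last month, the current month, and next month could be computed:

```python
from datetime import date, timedelta

def fixed_month_window(today: date) -> tuple:
    """Return (start, end) dates covering last month through next month.

    Fixed boundaries like these keep delta-sync queries stable,
    unlike rolling windows that shift on every sync.
    """
    # First day of last month: step back from the first of the current month
    first_of_current = today.replace(day=1)
    last_month_end = first_of_current - timedelta(days=1)
    start = last_month_end.replace(day=1)

    # Last day of next month: jump to the month after next, then step back one day
    year = today.year + (1 if today.month >= 11 else 0)
    month = (today.month + 2 - 1) % 12 + 1
    end = date(year, month, 1) - timedelta(days=1)
    return start, end

print(fixed_month_window(date(2023, 11, 15)))
```

A profile filter built from such fixed boundaries only changes once per month, so the server can reuse the same query plan across frequent delta syncs.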
Leveraging OData to view Offline Profile configuration.
Makers may be able to better evaluate joins and complexity of their offline profile by viewing those joins directly via API. The following APIs can be used to view details of the offline profile.
OData call used to return JSON of the mobile offline profile showing profile filters.
This is the FetchXML for offline profile items for any entity within the profile. It can be used to inspect the complexity and relationships.
NOTE: For the snippets below
{orgurl} is your CRM organization URL
{profileID} is the GUID for your mobile offline profile
{entityname} is the logical name of your entity
{entitysetname} is the plural name you assign for your entity (must be lower case)
{fetchXml}: Return value from your get-filter OData call
To get started you can locate your profile id leveraging:
https://{orgurl}/api/data/v9.0/mobileofflineprofileitemfilters?$filter=_mobileofflineprofileid_value eq '{profileID}' and returnedtypecode eq '{entityname}' and type eq 2&$select=fetchXml,returnedtypecode
To get the OData to test FetchXML for an entity you are including in your profile:
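As a sketch of how these calls fit together, the snippet below builds the two URL patterns described above. The organization URL, GUID, and entity names are placeholders (msdyn_workorder/msdyn_workorders are used purely for illustration), and actually executing the requests requires an authenticated Dataverse Web API call:

```python
from urllib.parse import quote

org_url = "https://contoso.crm.dynamics.com"            # placeholder for {orgurl}
profile_id = "00000000-0000-0000-0000-000000000000"     # placeholder for {profileID}
entity_name = "msdyn_workorder"                         # placeholder for {entityname}
entity_set = "msdyn_workorders"                         # placeholder for {entitysetname}, lower case

# 1. Retrieve the profile item filters (returns the fetchXml per entity):
filters_url = (
    f"{org_url}/api/data/v9.0/mobileofflineprofileitemfilters"
    f"?$filter=_mobileofflineprofileid_value eq {profile_id}"
    f" and returnedtypecode eq '{entity_name}' and type eq 2"
    f"&$select=fetchXml,returnedtypecode"
)

# 2. Execute a returned fetchXml against the entity set to inspect the results.
#    The fetchXml value must be URL-encoded when passed as a query parameter.
fetch_xml = f"<fetch><entity name='{entity_name}'/></fetch>"  # stand-in for the value from step 1
test_url = f"{org_url}/api/data/v9.0/{entity_set}?fetchXml={quote(fetch_xml)}"

print(filters_url)
print(test_url)
```

Pasting the resulting URLs into an authenticated browser session (or an HTTP client with a bearer token) lets you inspect the generated joins for each entity in the profile.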
Understanding Application data & schema changes and their impact on Offline Sync
The offline sync client uses Dataverse change tracking to find updated records to download. Even a minor change to one column will trigger the re-download of the entire record. Watch out for processes that automatically update many records on a frequent basis as this will lead to longer synchronization times.
Similarly, when the schema of a data table changes, the offline sync client will re-download records in that table to ensure that no data is missed. Whenever possible, avoid schema changes to offline-enabled tables. When schema changes are required, group them together in a single release or solution update so that data is only re-downloaded for each table one time.
Leverage “online light up” for edge cases, or scenarios that may not require offline access.
There are some scenarios where offline access may not be necessary. An example of this is IoT data: a live feed from a connected device that is only accessible online.
In these cases, you can include that table as part of the user experience in the application, but not include it in the mobile offline profile. By doing so, the views for that table will be accessible to the Field Service Mobile users only when the network is available.
Leveraging online light up for online-only scenarios helps to reduce data which would otherwise need to be synchronized to the device. It is a great way to meet business needs for uncommon or edge-case scenarios without having to download more data for standard business scenarios that must function offline.
This concludes our 3-part blog series on getting the best of your Dynamics 365 Field Service mobile application setup. If there are new enhancement suggestions, it is recommended to submit those asks via the Field Service Mobile Ideas portal: Field Service Mobile – Ideas. This will allow the product team to evaluate new requests and plan for future product release waves.
It’s gearing up to be an exciting week November 14th – 17th as we prepare for this year’s Microsoft Ignite and PASS Data Summit, all happening at once! While across the way at the Convention Center we’ll be sharing the latest from Microsoft all up, we’re excited to be with our community digging into all things data. We’re back as premier sponsors together with our partners at Intel. With over 30 sessions, we’ll cover everything from the ground to the cloud: SQL Server, Azure SQL, all the way up to powering analytics for your data with Microsoft Fabric. This year, we’ll also bring in our developer community with sessions covering our solutions for the open-source database PostgreSQL.
We hope you’ll join us to “Connect, Share and Learn” alongside the rest of your peers at the PASS community. The official event kicks off with our keynote Wednesday morning with Shireesh Thota Vice President of Azure Databases who’s been hard at work getting ready for the event:
This workshop will provide a technically driven approach to translating your knowledge and skills from SQL Server to Azure SQL. You will experience an immersive look at Azure SQL, including hands-on labs; no Azure subscription required.
In this workshop, we’ll dedicate a full day to deep diving into each one of these new features such as JSON, Data API builder, calling REST endpoints, Azure function integrations and much more, so that you’ll learn how to take advantage of them right away. Also being covered:
Understand the use cases
Gain practical knowledge that can be used right away
Keynote
Wednesday: Shireesh Thota, Vice President of Azure Databases. Join Microsoft’s Shireesh Thota and Microsoft engineering leaders for a keynote delivered live from Seattle. We’ll showcase how the latest from Microsoft across databases and analytics, including the recently announced Fabric, seamlessly integrates to accelerate the power of your data.
General Sessions
30+ more sessions over three days: check them all out here. From SQL Server to Azure SQL to analytics and governance, Microsoft’s experts will bring the latest product developments to help you build the right data platform to solve your business needs.
Connect, Grow, Learn with us!
As a special offer from Microsoft, enter the code AZURE150 at checkout to receive $150 off on the 3-Day conference pass (in-person).
SQL Server: 30 and thriving!
Already registered? Pop on by opening night as we say Happy Birthday SQL Server!
Introduction
A new feature is coming that is designed to improve performance when working with financial dimensions and reduce the overall cost of storing financial dimension values. The initial changes for improved performance and reduced storage will begin rolling out in application release 10.0.38. Three new fields are being added to the Dimension code combination table (DimensionAttributeValueCombination) in this initial application release.
You will see this improvement fully realized in application release 10.0.42, when 22 fields and their related indexes are removed. These fields all begin with SystemGeneratedAttribute and are used by processes like financial journal entry.
Feature details
Enabling the Financial dimension performance and storage improvement feature will allow your environment to use just the 3 fields newly added to this table.
If you would like to test the removal of the 22 fields and indexes, please contact technical support for further information and early enablement before application release 10.0.42. Testing this change with any customizations that utilize data directly from this table – which should be very uncommon – will ensure a smooth transition when the fields are permanently removed in 10.0.42.
Why is this a benefit? Removing these fields and indexes from this highly used table will provide improved query and insert performance as well as reduced storage cost. While removing 22 fields is a great benefit, the larger gain for your environment is the removal of the related indexes.
Call to action
After enabling this feature in your test environment, verify all of your customizations and key business scenarios. Because all of these data model changes are fully encapsulated in Microsoft-owned API calls, there should be no impact for environments with proper customizations. Any customization accessing this table directly should be reviewed for business need, and other API endpoints considered for proper access.
In Part 1 of this series, we discussed the end-user experience of the offline-first Field Service Mobile application. In this second part we will go through some of the configuration and best practices for a successful offline rollout.
Mobile Offline Configuration & Best Practices
Leverage the out of the box mobile offline profile
The out of the box Field Service Mobile offline profile is a great starting point when enabling offline for your organization. It has common Field Service tables pre-configured along with some recommended filters to limit data. When modifying the mobile offline profile, it is recommended to not remove existing tables, but only add new/custom tables required by your organization. If you do want to remove tables from the OOTB profile, be sure there are no references or cross-linking in the views as relationships between tables can at times be difficult to identify at first glance.
Limit offline data synchronized to the device
One of the most important things to set your organization up for success with mobile offline is enabling the right data for your business scenarios. Given bandwidth and device constraints it is critical that data being synced from the server is limited as much as possible to have a fast and efficient experience.
We recommend you evaluate your offline data needs by considering the following:
What are the core business scenarios for a given Work Order assigned to the frontline worker using the application?
What is the minimum historical data which is required offline?
What relationships exist between tables which will be required to drive lists/views/lookups and cross references?
What elements on the application may not be needed offline and can be considered online only (excluded from the offline profile)?
Determining the above may take several conversations with business stakeholders and frontline workers. It is recommended to document these details in writing before diving in to configure your mobile offline profile.
Offline sync and application data
In addition to the data sync, the first sync will include app data which is used to drive the views and forms of the application. This app data is highly compressed when downloading over the network and unpacks after being downloaded to the device.
App data includes scripts, images, and other resources from the Microsoft Field Service solution and any additional customization from solution providers and admins.
While many of the out-of-the-box scripts should not be modified by the organization, for custom app data be sure to follow best practices:
Minify scripts to reduce file size.
Reduce image assets sizes.
Only include assets which are strictly required for mobile app usage.
Test as a user in real world conditions
It is important to test changes to your offline profile directly on the mobile application while using an account that mirrors the role real end-users will ultimately use to access the device. This is important because different roles in the organization may have different data access levels and can see dramatically different results during offline synchronization.
When testing you can evaluate the Offline Status Page in the application to see which tables are being synchronized and how many records per table are being downloaded.
In addition to testing with the correct user role, be sure to test or simulate real world conditions; for example, you will want test cases to mirror the following:
Wi-Fi
Cellular (strong)
Cellular (weak/low signal)
No network
Testing in various network conditions will help you identify hidden issues where a table is missing from the profile or a filter condition is excluding a necessary record. In some cases, internal business logic may go to the server to get a record missing from the mobile offline profile; this provides a better user experience by avoiding errors in connected scenarios, but can result in errors when the application is running without a network.
This level of testing will give further validation that your offline configuration has met your business requirements and frontline workers will have success in any network condition.
Avoid extensive use of Web Resources with the offline application
Web resources have several offline limitations which can differ by mobile operating system. Due to these limitations and the inconsistency between device operating systems, it is recommended to leverage Power Apps component framework (PCF) controls instead.
Be aware of larger file types such as images, videos, and documents
Large files and images require some special handling to enable them for offline use, and should be limited to avoid consuming excessive amounts of bandwidth or disk space.
The offline-enabled Field Service Mobile application will sync data from the server at regular intervals. If part of a workflow depends on interaction with the server, the response may take minutes to return to the client when the network is available, and will never arrive if the user is truly offline. To avoid the delay and make the offline experience more consistent, it is recommended to move as much business logic to the client as possible.
This may involve moving some capabilities traditionally handled by a server-side plugin to the client so it can function properly in offline mode.
Within the Mobile Offline Profile configuration each table can have its own sync interval. This interval determines how often that table is checked for updates.
You can change the sync interval for each table to reduce the frequency of syncing as users use the app. This may reduce network and battery usage.
It is recommended to set intervals to be less frequent on tables which are not updated often.
If you’d like to slow down all data downloads, update the sync interval for all tables in the offline profile to a higher interval.
With the release of Offline Sync Settings in Wave 2 2023, users can control their individual sync settings and set their client to only sync while on Wi-Fi. These settings can be leveraged for scenarios where the Frontline Worker may work for extended periods of time without the need to sync, or may have data capacity limits on their cellular plan.
Moving the mobile offline profile between environments
Commonly, configuration of the mobile offline profile is done in a sandbox environment and will need to be moved up to a test environment before ultimately being updated in production. To ensure consistency between environments, it is recommended you move the offline profile as part of a managed solution.
This can be accomplished by creating a new solution and then binding the offline profile to that solution which can be exported. Simply re-import the solution to the new environment then publish and your changes will be updated with consistency between environments.
Watch this space – the next blog is coming in 2 days!
If there are new enhancement suggestions, it is recommended to submit those asks via the Field Service Mobile Ideas portal: Field Service Mobile – Ideas. This will allow the product team to evaluate new requests and plan for future product release waves.
Selling is all about relationships. We hear a lot these days about the disconnect that our increasingly digital world can create. But at Microsoft, we believe that digital tools, especially those powered by generative AI and real-time insights, can help strengthen sellers’ relationships with their customers. We’re continually investing in Microsoft Dynamics 365 Sales to enable sellers to engage with their customers more meaningfully. We are pleased to announce that Microsoft has been named a Leader in The Forrester Wave™: Sales Force Automation, Q3 2023 report, receiving the top scores possible in the Vision, Innovation, and Roadmap criteria for our sales force automation (SFA) platform.
Reducing complexity to drive seller success
The role of a seller has only grown more complex. A process that used to involve a couple of phone calls and face-to-face meetings now includes everything from targeted emails to impromptu online chats. Organizations rely on everything from digital sellers to field sellers to customer success champions to ensure their customers are supported end-to-end throughout the sales journey. Especially with hybrid workplaces and shrinking travel budgets, sellers need assistance from technology to build connections—between colleagues, across multiple data sources, and with customers.
The challenge is that sellers need to build these connections and foster relationships without sacrificing productivity. According to the Microsoft Work Trend Index, sellers spend more than 66 percent of their day managing email, leaving only about a third of their time for actual sales activities. Our answer is to provide simple solutions—focusing on collaboration, productivity, AI, and insights—to help sellers focus on closing deals. As Forrester states in its report, “Dynamics [365 Sales] showcases how SFA and office productivity solutions work together.” We believe this is what has earned our position as a Leader: we built solutions to give sellers access to real-time customer insights, subject matter experts, relevant data across different sources, and important customer and account information right in their app of choice—no context switching necessary.
Dynamics 365 Sales works natively with Microsoft Teams to create open lines of communication for collaborating and aligning on work items across marketing, sales, and service departments. Additionally, copilot capabilities bring next-generation AI and customer relationship management (CRM) platform updates into collaborative apps like Outlook and Teams, unlocking productivity for sellers whether they are working in Dynamics 365 Sales or Microsoft 365 apps. By helping to eliminate manual data entry, meeting summarization, and other cumbersome processes, Dynamics 365 Sales ensures sellers have more time to create and nourish their customer connections, ultimately driving sales.
Providing insights that improve customer retention—and grow revenue
Referring to Microsoft, Forrester also reports that “Embedded insights are a highlight of the product”—something that Microsoft customer DP World knows well. DP World is a leading provider of worldwide, end-to-end supply chain and logistics services. DP World implemented Dynamics 365 Sales to help the company diversify and scale after an acquisition that was driving new demand and traffic to the company. Dynamics 365 Sales provides its sellers AI-based predictive lead scoring and prioritized worklists, giving full visibility into its sales funnels and helping it effectively qualify leads and opportunities. This reduced DP World’s sales cycle, enabling five times more proactive sales and two times greater customer retention.
Learn more about sales
We’re excited to have been recognized as a Leader in The Forrester Wave and are committed to providing innovative sales force automation platform capabilities to help our customers accomplish more.
Microsoft named a Leader
We received top scores in The Forrester Wave™: Sales Force Automation, Q3 2023.
Contact your Microsoft representative to learn more about the value and return on investments, as well as the latest offers—including a limited-time 26 percent savings on subscription pricing—for Dynamics 365 Sales Premium.
Introduction
Container technologies are no longer something new in the industry. It all started with a focus on how to deploy reproducible development environments, but now there are many other fields where applying containers, or some of the underlying technologies used to implement them, is quite common.
I will not cover Azure Container Instances or Azure Kubernetes Service here. For an example of the latter, you can browse the article NDv4 in AKS. ACI will be explained in another article.
Currently there are many options available when working with containers. Seasoned Linux engineers have quite likely worked with LXC; later, Docker revolutionized the deployment of development environments; more recently, other alternatives like Podman have emerged and are now competing for a place in many fields.
However, in HPC, we have been working for some years with two different tools: Shifter, the first containers project fully focused on supercomputers, and Singularity. I will show you how to use Singularity in HPC clusters running in Azure. I will also explain how to use Podman for running AI workloads using GPUs in Azure VMs.
Running AI workloads using GPU and containers
Running AI workloads does not strictly require GPUs, but almost all machine learning/deep learning frameworks are designed to make use of them. So, I will assume GPU compute resources are required to run any AI workload.
There are many ways of taking advantage of GPU compute resources within containers. For example, you can run the whole container in privileged mode to get access to all the hardware available in the host VM. Some nuances must be highlighted here: privileged mode cannot grant more permissions than those inherent to the user running the container, which means running a container as root in privileged mode is very different from running it as a regular user with fewer privileges.
The most common way to get access to GPU resources is via nvidia-container-toolkit. This package contains a hook compliant with the OCI standard (see references below) that provides direct access to GPU compute resources within the container.
I will use a regular VM with an NVIDIA Tesla T4 GPU (NC8as_T4_v3) running RHEL 8.8. Let’s get started.
These are all the steps required to run AI workloads using containers and GPU resources in a VM running in Azure:
Create a VM using any family of the N-series (for AI workloads like machine learning and deep learning, NC or ND are recommended) and a supported operating system.
Install CUDA drivers and CUDA toolkit if required. You can omit this if you are using DSVM images from Marketplace, these images come with all required drivers preinstalled.
Install your preferred container runtime environment and engine to work with containers.
Install nvidia-container-toolkit.
Run a container using any image that includes tools to check GPU usage, such as the nvidia-smi command. Using a container image from NGC is more than recommended to avoid additional steps.
Create the image with your code or commit the changes in a running container.
I will start with step 2 because I’m sure there is no need to explain how to create a new VM with N-series.
Installing CUDA drivers
There is no specific restriction about which CUDA release must be installed. You have the freedom to choose the latest version from Nvidia website, for example.
Let’s check if the drivers are installed correctly by using nvidia-smi command:
[root@hclv-jsaelices-nct4-rhel88 ~]# nvidia-smi
Fri Nov 3 17:41:03 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.12 Driver Version: 535.104.12 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000001:00:00.0 Off | Off |
| N/A 51C P0 30W / 70W | 2MiB / 16384MiB | 7% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
Installing container runtime environment and engine
As I commented in the introduction, Podman will be our main tool to run containers. By default, Podman uses runc as the runtime environment; runc adheres to the OCI standard, so no additional steps are needed to make nvidia-container-toolkit work in our VM.
$ sudo dnf install -y podman
I won’t explain here all the benefits of using Podman over Docker. I’ll just mention that Podman is daemonless and a more modern implementation of all the technologies required to work with containers, like control groups, layered filesystems and namespaces, to name a few.
You can verify Podman was successfully installed using the podman info command.
Podman fully supports OCI hooks, and that is precisely what nvidia-container-toolkit provides. Basically, OCI hooks are custom actions performed during the lifecycle of the container; in this case, it is a prestart hook, called when you run a container, that provides access to the GPU using the drivers installed in the host VM. The previously configured repository also provides this package, so let’s install it using dnf:
$ sudo dnf install -y nvidia-container-toolkit
Podman is daemonless, so there is no need to add the runtime using nvidia-ctk runtime configure; but, in this case, an additional step is required to generate the CDI configuration file:
$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
$ nvidia-ctk cdi list
INFO[0000] Found 2 CDI devices
nvidia.com/gpu=0
nvidia.com/gpu=all
Running containers for AI workloads
Now we have the environment ready for running new containers for AI workloads. I will make use of NGC images from NVIDIA to save time and avoid creating custom ones. Please keep in mind some of them are quite big, so make sure you have enough space in your home folder.
Let’s start with an Ubuntu 20.04 image with CUDA already installed on it:
[jsaelices@hclv-jsaelices-nct4-rhel88 ~]$ podman run --security-opt=label=disable --device=nvidia.com/gpu=all nvcr.io/nvidia/cuda:12.2.0-devel-ubuntu20.04
==========
== CUDA ==
==========
CUDA Version 12.2.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
Another example running the well-known DeviceQuery tool that comes with CUDA toolkit:
[jsaelices@hclv-jsaelices-nct4-rhel88 ~]$ podman run --security-opt=label=disable --device=nvidia.com/gpu=all nvcr.io/nvidia/k8s/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04
/cuda-samples/sample Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Tesla T4"
CUDA Driver Version / Runtime Version 12.2 / 11.7
CUDA Capability Major/Minor version number: 7.5
Total amount of global memory: 15948 MBytes (16723214336 bytes)
(040) Multiprocessors, (064) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1590 MHz (1.59 GHz)
Memory Clock rate: 5001 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 4194304 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 65536 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1024
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 3 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 1 / 0 / 0
Compute Mode:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.2, CUDA Runtime Version = 11.7, NumDevs = 1
Result = PASS
You can see in these examples that I’m running those containers with my user without root privileges (a rootless environment) with no issues, and that is because of the option passed to the podman run command, --security-opt=label=disable. This option disables all SELinux labeling, which is done here for the sake of this article’s length. I could use an SELinux policy created with Udica, or the one that comes with NVIDIA (nvidia-container.pp), but I preferred to disable the labeling for these specific samples.
Now it is time to try running specific frameworks for AI using Python. Let’s try with Pytorch:
[jsaelices@hclv-jsaelices-nct4-rhel88 ~]$ podman run --rm -ti --security-opt=label=disable --device=nvidia.com/gpu=all pytorch/pytorch
root@7cb030cc3b47:/workspace# python
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>>
As you can see, the PyTorch framework can see the GPU and would be able to run any code using GPU resources without issue.
I won’t create any custom image as suggested in the last step described previously. That can be a good exercise for the reader, so it is your turn to test your skills running containers and using GPU resources.
Running HPC workloads using containers
Now it is time to run HPC applications in our containers. You can also use Podman to run those; in fact, there is an improvement over Podman developed jointly by NERSC and Red Hat called Podman-HPC. But, for this article, I decided to use Singularity, which is well-known in the HPC field.
For this section, I will run some containers using Singularity in a cluster created with CycleCloud, using the HB120rs_v3 size for the compute nodes. For the OS, I’ve chosen the AlmaLinux 8.7 HPC image from the Azure Marketplace.
I will install Singularity manually but this can be automated using cluster-init in CycleCloud.
Installing Singularity in the cluster
In the AlmaLinux 8.7 HPC image, the EPEL repository is enabled by default, so you can easily install Singularity with a single command:
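As a sketch (the package name is an assumption: recent EPEL 8 releases ship the Singularity community fork as apptainer, which also provides the singularity command):

```shell
# Install Singularity from EPEL (package name is an assumption;
# depending on the EPEL snapshot it may be "singularity",
# "singularity-ce", or "apptainer").
sudo dnf install -y apptainer

# Quick sanity check
singularity --version
```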
I won’t cover all the pros and cons of using Singularity over other container alternatives. I will just highlight some of the security features provided by Singularity and, especially, the image format it uses (Singularity Image Format, SIF) throughout the examples.
One of the biggest advantages of using Singularity is the size of its images: SIF is a binary format and is very compact compared to regular layered Docker images. Below is an example with the OpenFOAM image:
[jsaelices@slurmhbv3-hpc-2 .singularity]$ ls -lh openfoam-default_latest.sif
-rwxrwxr-x. 1 jsaelices jsaelices 349M Nov 3 18:00 openfoam-default_latest.sif
Docker, by contrast, uses a layered format that is substantially bigger:
[root@slurmhbv3-hpc-1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
opencfd/openfoam-default latest dc7161e16205 3 months ago 1.2GB
Running MPI jobs with Singularity
Singularity is fully compatible with MPI, and there are two different ways to submit an MPI job with SIF images.
I will use the bind method for its simplicity but you can also use the hybrid method if binding volumes between the host and the container is not desirable.
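A hedged sketch of a bind-method launch (image name, binary path, bind paths, and hostfile are illustrative assumptions, not taken from the article): the host’s mpirun starts the ranks, and each rank executes the containerized binary with the host MPI stack bind-mounted in.

```shell
# 4 ranks across the nodes in ./hostfile; the host MPI installation
# is mounted into the container at the same path.
mpirun -np 4 --hostfile ./hostfile \
    singularity exec --bind /opt/openmpi:/opt/openmpi \
    mympi.sif /opt/hello_mpi
```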
Let’s create a simple definition file called mydefinition.def (similar to Dockerfile or Containerfile):
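A minimal definition file along these lines (file names, paths, and variables are illustrative assumptions, not the article’s original content) might look like:

```
Bootstrap: docker
From: almalinux:8

%files
    # hypothetical pre-built MPI hello-world binary copied from the host
    hello_mpi /opt/hello_mpi

%environment
    export PATH=/opt:$PATH

%runscript
    echo "This image runs a simple MPI hello world"
    echo "Usage: mpirun -np <N> singularity exec <image> /opt/hello_mpi"
```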
Here, I’m just using the AlmaLinux image from Docker Hub, copying in the MPI application, defining some useful environment variables, and adding a few simple commands to execute when the container is called without any parameters.
[jsaelices@slurmhbv3-hpc-1 ~]$ cat slurmhbv3-hpc-1-2-singularity-mpi
Hello world: rank 0 of 4 running on slurmhbv3-hpc-1
Hello world: rank 1 of 4 running on slurmhbv3-hpc-1
Hello world: rank 2 of 4 running on slurmhbv3-hpc-2
Hello world: rank 3 of 4 running on slurmhbv3-hpc-2
This example concludes the article.
You’ve seen how to run containers and how to make use of the GPU to run AI workloads in a simple and effective way. You’ve also learned how to run Singularity containers and MPI jobs easily. You can use all this material as a starting point to extend your knowledge and apply it to more complex tasks. I hope you enjoyed it.
We are happy to announce the general availability of the User Interface (UI) for the Azure Virtual Desktop Web Client. The new UI offers a cleaner, more modern look and feel. With this update, you can:
Switch between Light and Dark Mode
View your resources in a grid or list format
Reset web client settings to their defaults
How to access it
The new client is toggled on by default on the web client, and the “preview” caption has now been removed from the toggle.
Just a decade ago, few people seemingly knew or cared about firmware. But with the increasing interconnectedness of devices and the rise of cybersecurity threats, there’s a growing awareness of firmware as the foundational software that powers everything from smartphones to smart TVs.
Traditionally developed using the C language, firmware is essential for setting up a device’s basic functions. As a globally recognized standard, the Unified Extensible Firmware Interface (UEFI) enables devices to boot with fundamental security features that contribute to the security posture of modern operating systems.
Call for greater firmware security
As the security of our device operating systems gets more sophisticated, firmware needs to keep up. Security is paramount, but it shouldn’t compromise speed or user-friendliness. The goal is clear: firmware that’s both fast and secure.
What does this modern approach look like? Let’s start by looking at the key challenges:
Evolving threat landscape: As operating systems become more secure, attackers are shifting their focus to other system software, and firmware is a prime target. Firmware operates at a very foundational level in a device, and a compromise here can grant an attacker deep control over a system.
Memory safety in firmware: Many firmware systems have been historically written in languages like C, which, while powerful, do not inherently protect against common programming mistakes related to memory safety. These mistakes can lead to vulnerabilities such as buffer overflows, which attackers can exploit.
Balance of speed and security: Firmware needs to execute quickly. However, increasing security might introduce execution latency, which isn’t ideal for firmware operations.
Rust in the world of firmware
When it comes to modern PC firmware, Rust stands out as a versatile programming language. It offers flexibility, top-notch performance, and most importantly, safety. While C has been a go-to choice for many, it has its pitfalls, especially when it comes to errors that might lead to memory issues. Considering how crucial firmware is to device safety and operation, any such vulnerabilities can be a goldmine for attackers, allowing them to take over systems.[1] That’s where Rust shines. It’s designed with memory safety in mind, without the need for garbage collection, and has strict rules around data types and parallel operations. This minimizes the probability of errors that expose vulnerabilities, making Rust a strong choice for future UEFI firmware development.
Unlocking new possibilities with Rust
Rust is not just another programming language; it’s a gateway to a wealth of resources and features that many firmware developers might have missed out on in the past. For starters, Rust embraces a mix of object-oriented, procedural, and functional programming approaches and offers flexible features like generics and traits, making it easier to work with different data types and coding methods. Many complex data structures that must be hand-coded in C are available “for free” as part of the Rust language. But it’s not just about versatility and efficiency. Rust’s tools are user-friendly, offering clear feedback during code compilation and comprehensive documentation for developers. Plus, with its official package management system, developers get access to tools that streamline coding and highlight important changes. One of those features is Rust’s use of ‘crates’ – these are like ready-to-use code packages that speed up development and foster collaboration among the Rust community.
Making the move from C to Rust
Rust stands out for its emphasis on safety, meaning developers often don’t need as many external tools like static analyzers, which are commonly used with C. But Rust isn’t rigid; if needed, it allows for exceptions with its “unsafe code” feature, giving developers some flexibility. One of Rust’s advantages is how well it interacts with C. This means teams can start using Rust incrementally, without having to abandon their existing C code. So, while Rust offers modern advantages, it’s also mindful of the unique requirements of software running directly on hardware — without relying on the OS or other abstraction layers. Plus, it offers compatibility with C’s data structures and development patterns.
The Trio: Surface, Project Mu and Rust
Surface with Windows pioneered the implementation of Project Mu in 2018 as an open-source UEFI core to increase scalability, maintainability, and reusability across Microsoft products and partners. The idea was simple but revolutionary, fostering a more collaborative approach to reduce costs and elevate quality. It also offers a solution to the intricate business and legal hurdles many partners face, allowing teams to manage their code in a way that respects legal and business boundaries. A major win from this collaboration is enhanced security; by removing unnecessary legacy code, vulnerabilities are reduced. From its inception, Surface has been an active contributor, helping Project Mu drive innovation and improve the ecosystem.
Pioneering Rust adoption through Project Mu and Surface
Surface and Project Mu are working together to drive adoption of Rust into the UEFI ecosystem. Project Mu has implemented the necessary changes to the UEFI build environment to allow seamless integration of Rust modules into UEFI codebases. Surface is leveraging that support to build Rust modules in Surface platform firmware. With Rust in Project Mu, Microsoft’s ecosystem benefits from improved security transparency while reducing the attack surface of Microsoft devices due to Rust’s memory safety benefits. Also, by contributing firmware written in Rust to open-sourced Project Mu, Surface participates in an industry shift to collaboration with lower costs and a higher security bar. With this adoption, Surface is protecting and leading the Microsoft ecosystem more than ever.
Building together: Surface’s commitment to the Rust community
Surface and Project Mu plan to participate in the open Rust development community by leveraging and contributing to popular crates and publishing new ones that may be useful to other projects. A general design strategy is to solve common problems in a generic crate that can be shared and integrated into the firmware. Community crates, such as r-efi for UEFI, have already been helpful during early Rust development.
Getting Started
Project Mu has made it easier for developers to work with Rust by introducing a dedicated container in the Project Mu Developer Operations repository (DevOps repo). This container is equipped with everything needed to kickstart Rust development. As more Rust code finds its way into Project Mu’s repositories, it will seamlessly integrate with the standard Rust infrastructure in Project Mu, and the dedicated container provides an easy way to immediately take advantage of it.
The Project Mu Rust Build readme details how to begin developing with Rust and Project Mu. Getting started requires installing the Rust toolchain and cargo-make as a build runner to quickly build Rust packages. Refer to the readme for guidance on setting up the necessary build and configuration files and creating a Rust module.
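A sketch of that setup using standard Rust tooling (commands assumed from rustup and Cargo conventions; see the readme for the exact, current steps):

```shell
# Install the Rust toolchain, then cargo-make, the build runner
# used to drive Rust package builds.
rustup toolchain install stable
cargo install cargo-make
```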
Demonstrating Functionality
QEMU is an open-source virtual machine emulator. Project Mu implements open-source firmware for the QEMU Q35 platform in its Mu Tiano Platforms repository. This open virtual platform is an easily accessible demonstration vehicle for Project Mu features. In this case, UEFI (DXE) Rust modules are already included in the platform firmware to demonstrate their functionality (and test it in CI).
Looking ahead
With the expansion of firmware code written in Rust, Surface looks forward to leveraging the Project Mu community to help make our firmware even more secure. To get involved with Project Mu, review the documentation and check out the Github repo. Regularly pull updates from the main repo, keep an eye on the project’s roadmap, and stay engaged with the community to remain informed about changes and new directions.
Microsoft Learn offers you the latest resources to ensure you have what you need to prepare for exams and reach your skilling goals. Here we share some important updates about Security content, prep videos, certifications, and more.
Exam Readiness Zone: preparing for Exams SC-100, SC-200, and SC-300
Now, you can leverage the Exam Readiness Zone, our free exam prep resource available on Microsoft Learn for your next Security certification! View our expert-led exam prep videos to help you identify the key knowledge and skills measured on exams and how to allocate your study time. Each video segment corresponds to a major topic area on the exam.
During these videos, trainers point out objectives that many test takers find difficult and walk through example questions and answers with explanations.
For technical skilling, we now have videos available for the following topics:
Are you thinking of adopting the upcoming Security Copilot? This challenge will help you prepare, as it includes the security operations analyst skills required to tune up your platform and get it ready for Security Copilot.
Complete the challenge within 30 days and you can be eligible to earn a 50% discount on the Certification exam.
The Exam SC-400 evaluates your proficiency in performing the following technical tasks: implementing information protection, implementing DLP, implementing data lifecycle and records management, monitoring and investigating data and activities through Microsoft Purview, and managing insider and privacy risks in Microsoft 365.
The Microsoft Learn Community offers a variety of ways to connect and engage with each other and technical experts. One of the core components of this experience is the learning rooms, spaces to find connections with experts and peers.
Efficiently managing a contact center requires a fine balance between workforce engagement and customer satisfaction. The ability to create agent-specific capacity profiles in Dynamics 365 Customer Service empowers administrators and supervisors to fine-tune the work allocation based on an agent’s experience and expertise, optimizing agent performance and delivering tailored customer service.
Understand capacity profiles
Capacity profiles are at the core of Dynamics 365 Customer Service, defining the type and amount of work agents can handle, ensuring equitable work distribution. Profiles are even more beneficial when agents are blended across various channels. Agent-specific capacity profiles take this a step further, enabling customized work limits for individual agents based on their proficiency. Let’s explore this capability with an example.
A real-world scenario: Casey’s challenge
Meet Casey, a Customer Service administrator at Contoso Bank who aims to maximize the efficiency of her customer service team. She wants senior agents to handle more responsibilities, giving junior agents the time to focus on training and skill development.
Casey decides to use agent-specific capacity profiles for credit card inquiries in the North America region. She sets up a “Credit Card NAM” profile with a default limit of two concurrent conversations. She assigns it to Kiana, a seasoned agent, and Henry, a junior agent who recently joined Contoso.
Customize capacity limits
Casey recognizes that Kiana’s seniority and expertise warrant a different limit. With agent-specific capacity profiles, she can easily update Kiana’s limit to handle three conversations at a time. The immediate benefit of this approach is apparent. This balance allows junior agents like Henry to invest more time in training and development while experienced agents like Kiana manage a higher workload efficiently.
Flexibility in action
In the dynamic world of customer service, circumstances can change rapidly. Contoso Bank faces an unexpected surge in insurance-related queries. Casey needs to adapt to this evolving scenario promptly, and this is where agent-specific capacity profiles truly shine.
Casey has Kiana take on the additional insurance queries alongside her credit card queries. She assigns the “Insurance” profile to Kiana. She also resets Kiana’s work limit for the “Credit Card NAM” profile back to the default amount, providing her the bandwidth to handle the increased workload efficiently.
The result: Optimal efficiency
This example showcases the flexibility and real-time adaptability that agent-specific capacity profiles offer. Casey is empowered to make agile and precise work distribution decisions, ensuring that agents’ expertise and experience are utilized optimally.
Conclusion
In the world of customer service, where every interaction matters, this feature is a game-changer. It helps organizations reduce agent stress, elevate customer satisfaction, and offer a flexible solution for modern customer service management. By embracing this feature, businesses can ensure that their customer service is optimized for excellence, regardless of changing circumstances.