Announcing SynapseML v0.11. The new version contains many new features to help you build scalable machine learning pipelines.
We are pleased to announce SynapseML v0.11, a new version of our open-source distributed machine learning library that simplifies and accelerates the development of scalable AI. This release introduces many new features from the past year of development, as well as many bug fixes and improvements. This post gives a high-level overview of the most salient additions; curious readers can check out the full release notes for everything that's new.
OpenAI Language Models and Embeddings
A new release wouldn't be complete without joining the large language model (LLM) hype train, and SynapseML v0.11 includes a variety of features that make large-scale LLM usage simple and easy. In particular, SynapseML v0.11 introduces three new APIs for working with foundation models: `OpenAIPrompt`, `OpenAIEmbedding`, and `OpenAIChatCompletion`. The `OpenAIPrompt` API makes it easy to construct complex LLM prompts from columns of your dataframe. Here's a quick example of translating a dataframe column called "Description" into emojis.
from synapse.ml.cognitive.openai import OpenAIPrompt

# {Description} is filled in from the dataframe column of the same name
emoji_template = """
Translate the following into emojis
Word: {Description}
Emoji: """

results = (OpenAIPrompt()
    .setPromptTemplate(emoji_template)
    .setErrorCol("error")
    .setOutputCol("Emoji")
    .transform(inputs))
This code automatically looks for a dataframe column called "Description" and prompts your LLM (ChatGPT, GPT-3, GPT-4) with the constructed prompts. Our new OpenAI embedding classes make it easy to embed large tables of sentences quickly from your Apache Spark clusters. To learn more, see our docs on using the OpenAI embeddings API and the SynapseML KNN model to create an LLM-based vector search engine directly on your Spark cluster. Finally, the new OpenAIChatCompletion transformer allows users to submit large quantities of chat-based prompts to ChatGPT, enabling parallel inference of thousands of conversations at a time. We hope you find the new OpenAI integrations useful for building your next intelligent application.
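To illustrate the embedding workflow, here is a minimal sketch of embedding a text column with OpenAIEmbedding; the key, service name, deployment name, and column names below are placeholders you would substitute with your own.

from synapse.ml.cognitive.openai import OpenAIEmbedding

# Placeholder Azure OpenAI resource details -- substitute your own values
embedding = (OpenAIEmbedding()
    .setSubscriptionKey(openai_key)                # your Azure OpenAI key
    .setCustomServiceName(openai_service_name)     # your Azure OpenAI resource name
    .setDeploymentName("text-embedding-ada-002")   # your embedding deployment
    .setTextCol("Description")
    .setErrorCol("error")
    .setOutputCol("embeddings"))

embedded = embedding.transform(inputs)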
Simple Deep Learning
SynapseML v0.11 introduces a new simple deep learning package that allows you to train custom text and deep vision classifiers with only a few lines of code. This package combines the power of distributed deep network training with PyTorch Lightning and the simple, easy-to-use APIs of SynapseML. The new API allows users to fine-tune visual foundation models from torchvision as well as a variety of state-of-the-art text backbones from Hugging Face.
Here’s a quick example showing how to fine-tune custom vision networks:
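The original notebook snippet is not reproduced here, so below is a minimal sketch based on the simple deep learning docs; the dataframes, class count, backbone, and store path are placeholder assumptions.

from horovod.spark.common.store import Store
from synapse.ml.dl import DeepVisionClassifier

# Shared store for intermediate training artifacts (example path)
store = Store.create("/tmp/dl_store")

# train_df is assumed to have an image column and an integer label column
deep_vision_classifier = DeepVisionClassifier(
    backbone="resnet50",   # any torchvision backbone name
    store=store,
    num_classes=17,        # example value
    batch_size=16,
    epochs=5,
    validation=0.1,
)

deep_vision_model = deep_vision_classifier.fit(train_df)
predictions = deep_vision_model.transform(test_df)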
Keep an eye out for upcoming releases of SynapseML featuring additional simple deep-learning algorithms that will make it easier than ever to train and deploy models at scale.
LightGBM v2
LightGBM is one of the most used features of SynapseML, and we heard your feedback asking for better performance! SynapseML v0.11 introduces a completely refactored integration between LightGBM and Spark, called LightGBM v2. This integration aims for high performance by introducing a variety of new streaming APIs in the core LightGBM library that enable fast and memory-efficient data sharing between Spark and LightGBM. In particular, the new "streaming execution mode" has a >10x lower memory footprint than earlier versions of SynapseML, yielding fewer memory issues and faster training. Best of all, you can use the new mode by passing a single extra flag to your existing LightGBM models in SynapseML.
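As a rough sketch, enabling the new mode on an existing model looks something like the following; the exact flag name is an assumption here (it also appears as dataTransferMode in some releases), so check the LightGBM docs for your version.

from synapse.ml.lightgbm import LightGBMClassifier

# Opt in to the streaming mode with a single extra setter (assumed flag name)
model = (LightGBMClassifier()
    .setFeaturesCol("features")
    .setLabelCol("label")
    .setExecutionMode("streaming"))

lgbm_model = model.fit(train_df)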
ONNX Model Hub
SynapseML supports a variety of new deep learning integrations with the ONNX runtime for fast, hardware-accelerated inference in all of the SynapseML languages (Scala, Java, Python, R, and .NET). In version 0.11, we add support for the new ONNX model hub, an open collection of state-of-the-art pre-trained ONNX models that can be quickly downloaded and embedded into Spark pipelines. This allowed us to completely deprecate and remove our old dependency on the CNTK deep learning library.
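As a rough sketch of that workflow, the snippet below pulls a model from the ONNX hub and wraps it in SynapseML's ONNXModel transformer; the model name and the input/output tensor names in the feed and fetch dictionaries are model-specific placeholders.

from onnx import hub
from synapse.ml.onnx import ONNXModel

# Download a pre-trained model from the ONNX model hub
hub_model = hub.load("resnet50")

# Wrap the serialized model for distributed inference on Spark
onnx_ml = (ONNXModel()
    .setModelPayload(hub_model.SerializeToString())
    .setDeviceType("CPU")
    .setFeedDict({"data": "features"})        # ONNX input name -> dataframe column (placeholder names)
    .setFetchDict({"prediction": "output"})   # output column <- ONNX output name (placeholder names)
    .setMiniBatchSize(64))

scored = onnx_ml.transform(features_df)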
To learn more about how you can embed deep networks into Spark pipelines, check out our ONNX episode in the new SynapseML video series:
Causal Learning
SynapseML v0.11 introduces a new package for causal learning that can help businesses and policymakers make more informed decisions. When trying to understand the impact of a “treatment” or intervention on an outcome, traditional approaches like correlation analysis or prediction models fall short as they do not necessarily establish causation. Causal inference aims to overcome these shortcomings by bridging the gap between prediction and decision-making. SynapseML’s causal learning package implements a technique called “Double machine learning”, which allows us to estimate treatment effects without data from controlled experiments. Unlike regression-based approaches, this approach can model non-linear relationships between confounders, treatment, and outcome. Users can run the DoubleMLEstimator using a simple code snippet like the one below:
from pyspark.ml.classification import LogisticRegression
from synapse.ml.causal import DoubleMLEstimator

dml = (DoubleMLEstimator()
    .setTreatmentCol("Treatment")
    .setTreatmentModel(LogisticRegression())
    .setOutcomeCol("Outcome")
    .setOutcomeModel(LogisticRegression())
    .setMaxIter(20))

dmlModel = dml.fit(dataset)
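Once fitted, you can read off the estimated effect; a minimal sketch, assuming the v0.11 accessor names:

# Average treatment effect and its confidence interval (assumed accessor names)
print(dmlModel.getAvgTreatmentEffect())
print(dmlModel.getConfidenceInterval())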
For more information, be sure to check out Dylan Wang’s guided tour of the DoubleMLEstimator on the SynapseML video series:
Vowpal Wabbit v2
Finally, SynapseML v0.11 introduces Vowpal Wabbit v2, the second-generation integration between the Vowpal Wabbit (VW) online optimization library and Apache Spark. With this update, users can work with Vowpal Wabbit data directly using the new "VowpalWabbitGeneric" model, which makes working with Spark easier for existing VW users. This more direct integration also adds support for new cost functions and use cases, including "multi-class" and "cost-sensitive one against all" problems. The update also introduces a new progressive validation strategy and a new Contextual Bandit offline policy evaluation notebook that demonstrates how to evaluate VW models on large datasets.
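As a rough sketch (the parameter and column names here are assumptions; see the Vowpal Wabbit docs for the exact API), training on native VW-format lines might look like this:

from synapse.ml.vw import VowpalWabbitGeneric

# df is assumed to contain a string column named "input" holding native VW-format lines,
# e.g. "1 | price:0.23 sqft:0.25"
model = (VowpalWabbitGeneric()
    .setNumPasses(3)
    .setPassThroughArgs("--loss_function logistic")
    .fit(df))

predictions = model.transform(df)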
Conclusion
In conclusion, we are thrilled to share the new SynapseML release with you and hope you will find that it simplifies your distributed machine learning pipelines. This blog only covered the highlights, so be sure to check out the full release notes for all the updates and new features. Whether you are working with large language models, training custom classifiers, or performing causal inference, SynapseML makes it easier and faster to develop and deploy machine learning models at scale.
We’re excited to return to Field Service Palm Springs from April 25 through April 27, 2023, at the JW Marriott Desert Springs Resort & Spa.
We will showcase how Connected Field Service helps leaders:
Move beyond the costly break/fix model to a proactive, predictive model.
Unlock the power of data and use Internet of Things (IoT), machine learning, and AI.
Transform their field operations and improve customer experience.
This year, we are hosting a thought leadership luncheon with our partner Hitachi Solutions to discuss the benefits of a connected field service and how to use data to remain competitive, and continuously improve business performance and customer experiences in an increasingly challenging environment.
Field service organizations manage hundreds of technicians with varying expertise, experiences, and skills. With 80 percent of consumers more likely to make a purchase from a brand that provides personalized experiences, organizations have come to realize how important quality service is to remain resilient despite uncertainty.1 Employees are working from remote or distributed locations, reducing the amount of personalized interaction. Meanwhile, remote monitoring of IoT devices continues to transform service from a cost center to a revenue generator.
Connected Field Service adds connected devices, powered by the Internet of Things (IoT), and uses cloud capabilities to augment your existing field service operations. It enables organizations to transform the way they provide service from a costly, reactive break-fix model to a proactive, and in some cases even predictive, service model through the holistic combination of IoT diagnostics, scheduling, asset maintenance, and inventory on the same platform.
IoT has brought a new level of efficiency to the field service industry, helping service professionals address issues more proactively and minimize downtime. As McKinsey researchers predict, IoT applications could generate a value of over $470 billion annually by 2025 by enhancing operations across various industries.2
By integrating IoT signals across the enterprise, a connected field service helps organizations predict and resolve customer issues before the customer is aware, thereby ensuring consistent and dependable customer operations through hassle-free and preemptive field service.
Four Connected Field Service solutions
Connected Field Service combines four innovative Microsoft solutions that enable service leaders to digitally transform service organizations:
Provides service technicians with upsell and cross-sell recommendations
Enables team members in non-sales roles to advance deals with step-by-step guidance
Enables sales teams and service technicians to access customer information and sales resources in non-office environments
Drives visibility into product and parts usage across the organization
Connected Field Service becomes a reality with Microsoft. Service leaders can better manage costs, enhance service delivery, and increase customer satisfaction (CSAT) by proactively resolving customer issues before the customer is aware. Take advantage of smart, internet-ready devices that can detect and diagnose issues, integrating with field service management (FSM) software like Dynamics 365 Field Service to automatically initiate troubleshooting and, when needed, create work orders to dispatch technicians for onsite service. Learn how you can use technology to schedule preventative maintenance based on consumption rather than rely on a regimented schedule. Best of all, enjoy the flexibility of implementing the solution in stages so your team can ramp up via a natural progression. Learn more about the latest Dynamics 365 Field Service features.
Engage with Microsoft at Field Service Palm Springs 2023
We invite you to join us, along with our partners, to discover how Connected Field Service using Dynamics 365 Field Service and IoT can help create a seamless service experience that enhances customer experiences, increases cost savings, and improves efficiency.
Register for Field Service Palm Springs and visit the Microsoft booth (101/103) where you can meet with Dynamics 365 Field Service experts to discuss how connected data enables better experiences across your organization.
About Field Service Palm Springs
Over the past 20 years, Field Service Palm Springs has become the must-attend conference for service executives. From early IoT concepts to AI, Field Service is where innovative ideas spread and future strategies are created. Today, Field Service is a global event, with major conferences in Palm Springs, Amelia Island, San Diego, Amsterdam, and Singapore.
Since 2003, the top service and support minds have gathered in Palm Springs in April for the flagship Field Service conference. With forward-looking content and unique session formats that ensure you learn and network most effectively, Field Service is designed to help you achieve service excellence and drive profitability.
Microsoft Dynamics 365 Field Service
Optimize service operations and inventory management.
Pricing is one of the fundamental tools for boosting supply chain profits by better matching supply and demand. In recent years, driven by the growth of e-commerce and constantly changing business environments, many businesses have started to reform their pricing strategies in order to improve pricing transparency, supply chain agility, and margin optimization.
We are launching the public preview of Pricing management within Dynamics 365 Supply Chain Management, starting with version 10.0.33, to help sales managers manage and execute attribute-based omnichannel sales pricing.
Why attribute-based omnichannel pricing?
Transition to omnichannel pricing:
Traditional business-to-business (B2B) organizations are increasingly considering switching to omnichannel sales and selling directly to end customers in order to have greater control over price and margins. The omnichannel transformation results in significant modifications to pricing models and rules.
By offering an omnichannel price engine, a central place to manage pricing rules, and automated omnichannel pricing execution, Dynamics 365 Supply Chain Management helps B2B businesses transition to omnichannel pricing.
Transition to attribute-based pricing:
Working with marketing and product managers to understand a product's differentiating features, target customer segments, and other pricing-sensitivity elements is one of the important responsibilities of sales managers. Package type, delivery mode, and expected receipt date could each be a pricing differentiator. By giving businesses the ability to convert data from customers, products, and orders into price attributes, and to build pricing on different pricing structures, Dynamics 365 Supply Chain Management supports businesses in adopting an attribute-based pricing model.
What is Pricing management?
Dynamics 365 Supply Chain Management Pricing management leverages the Commerce Scale Unit (CSU) to help traditional B2B companies embrace omnichannel pricing. Pricing management enables attribute-based pricing for the price components across the sales pricing structure, including product base price, sales trade agreement price, discounts, charges, and rebate management.
How Pricing management supports business flows:
DESIGN your pricing component types using price attributes.
CONSTRUCT your pricing structure with pricing components, such as margin elements.
MANAGE price markup based on product standard cost (for manufactured products) or vendor price catalog (for trading products).
SIMULATE pricing rules and impacts.
EXECUTE pricing calculation across channels.
MONITOR promotion fund consumption with control.
Flexible data model for building price attributes. Price attributes can be based on categorized product pricing differentiators, customer groups and order types.
Central place to offer, manage and calculate pricing. Boost pricing transparency across channels, which is essential for aligning pricing strategies across multiple channels.
Manage complex pricing structures with price component breakdowns. When you place an order, the pricing details reflect the pricing structure, so you can understand the pricing calculation sequence and price breakdowns for future in-depth analysis.
Establish sophisticated pricing with the pricing simulator to evaluate the impact. When converting from B2B pricing to B2B and B2C pricing, consider discount concurrency, bundle sales, mandatory sales items, and bonus free-item pricing rules.
Fund control to ensure you avoid margin leakage from fund consumption.
Real-time cross-channel pricing execution with the pricing engine to quickly determine pricing while considering a variety of commercial aspects, such as the item's general base price, sales trade agreement prices, long-term discount agreements, short-term promotion discounts, and retrospective rebate calculations for each sales order.
External applications can retrieve calculated pricing by leveraging the Commerce Scale Unit (CSU)-based Pricing APIs.
Next steps:
If your organization is on the journey toward attribute-based omnichannel sales pricing, consider taking the next step with Pricing management within Dynamics 365 Supply Chain Management.
If you are a potential customer or partner and want to learn more, see the Pricing management documentation (https://learn.microsoft.com/en-us/dynamics365/supply-chain/pricing-management/price-attributes-overview) or contact the product team directly by email.
Your brand is the face of your business. And often, the live chat widget on your website is the first point of contact for your customers. Having a strong brand for your customer service products can build trust and credibility, differentiate yourself from competitors, ensure consistency in communication, and create a positive emotional connection with customers.
We are excited to announce our upgraded live chat widget that allows you to customize every detail of the widget to match your brand identity. From the font and color scheme to the iconography, you can now own every pixel of the widget and ensure that it represents your brand in the best possible way.
Three customized chat widgets, each representing a different branding style
Style every component of the live chat widget to reflect your brand
When you update your environment with the latest release, you can use our live chat script tag customization to edit the design of the live chat widget through CSS-represented styling. It is easier than ever to create a branded look for your chat widget. You can choose the font, color, style, and size of every component of the chat widget to reflect your brand. The image below shows examples of chat widget components and the different ways you can change them.
Editable elements in the default chat button and chat container
How to create a custom extension for Azure DevOps
In some cases, you need to create a custom extension for Azure DevOps, either to add functionality that isn't available natively or to modify existing functionality that doesn't meet your project's needs. In this article, we will show how to create a custom extension for Azure DevOps and how to publish it to the Azure DevOps Marketplace.
Before you start, make sure you have:
An Azure DevOps account. If you don't have one yet, you can create one by following the instructions available here.
A code editor installed, such as Visual Studio Code, which can be downloaded from code.visualstudio.com.
The LTS version of Node.js installed, available for download at nodejs.org. The TypeScript compiler installed, with version 4.0.2 or higher recommended; it can be installed via npm from npmjs.com.
The TFX CLI installed, with version 0.14.0 or higher recommended. It can be installed globally via npm with the command npm i -g tfx-cli; see the TFX-CLI page for more details.
Preparing the development environment
Create a folder for the extension, for example, my-extension, and inside this folder create a subfolder, for example, task.
Open a terminal in the created folder and run the command npm init -y; the -y parameter accepts all the default options. You will notice that a file called package.json was created, containing the extension's information.
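At this point the original post adds an extension manifest named vss-extension.json in the root folder (it is the file referenced later by the tfx command). A minimal sketch of what such a manifest typically contains, with placeholder IDs:

{
  "manifestVersion": 1,
  "id": "<extension-id>",
  "publisher": "<publisher-id>",
  "version": "1.0.0",
  "name": "My Extension",
  "description": "A sample custom pipeline task",
  "targets": [{ "id": "Microsoft.VisualStudio.Services" }],
  "categories": ["Azure Pipelines"],
  "icons": { "default": "images/icon.png" },
  "files": [{ "path": "task" }],
  "contributions": [
    {
      "id": "custom-build-task",
      "type": "ms.vss-distributed-task.task",
      "targets": ["ms.vss-distributed-task.tasks"],
      "properties": { "name": "task" }
    }
  ]
}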
Replace the <> with a unique ID for each extension; you can generate an ID here. Replace the <> with the publisher ID created in step 1 of the publishing stage.
In the root folder of your extension, my-extension, create a folder called images and add an image named icon.png with a size of 128×128 pixels. This image will be used as your extension's icon in the Marketplace.
Creating the extension
After setting up the environment, you can create the extension.
In the task folder, create a file called task.json and add the following content:
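The original post's task.json is not reproduced here; below is a minimal sketch of a task manifest for a Node-based task, with placeholder values:

{
  "$schema": "https://raw.githubusercontent.com/Microsoft/azure-pipelines-task-lib/master/tasks.schema.json",
  "id": "<GUID>",
  "name": "my-extension",
  "friendlyName": "My Extension",
  "description": "A sample custom task",
  "author": "<your-name>",
  "category": "Utility",
  "version": { "Major": 1, "Minor": 0, "Patch": 0 },
  "instanceNameFormat": "Run My Extension",
  "inputs": [],
  "execution": {
    "Node10": { "target": "index.js" }
  }
}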
Replace the <> with the same GUID generated in step 8 of the development environment preparation stage.
This file describes the task that will run in the pipeline. In this case, the task doesn't do anything yet, but you can add inputs and the logic to run anything you need.
Next, create a file called index.ts and add the following content:
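The original snippet is not reproduced here; a minimal sketch using the azure-pipelines-task-lib package (assumed to be installed via npm, with a tsconfig.json present) that simply reports success might look like this:

import tl = require('azure-pipelines-task-lib/task');

async function run(): Promise<void> {
    try {
        // Task logic goes here; for now, just report success
        console.log('My Extension Succeeded!');
        tl.setResult(tl.TaskResult.Succeeded, 'My Extension Succeeded!');
    } catch (err: any) {
        tl.setResult(tl.TaskResult.Failed, err.message);
    }
}

run();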
This file is responsible for running the task. In this case, it just returns a success message; you can add the logic to run anything you need.
In the task folder, add an image called icon.png with a size of 32×32 pixels. This image will be used as your extension's icon in Azure Pipelines.
In the terminal, run the command tsc to compile the TypeScript code to JavaScript. This command will generate a file called index.js in the task folder.
To run the extension locally, run the command node index.js. You should see the message My Extension Succeeded!.
Publishing the extension to the Marketplace
When your extension is ready, you can publish it to the Marketplace. To do this, you will need to create a publisher on the Marketplace.
Go to the Marketplace and click Publish Extension. After signing in, you will be redirected to the publisher creation page. Fill in the fields and click Create.
Image 1 – Creating a publisher in the Marketplace portal
In the terminal, run the command tfx extension create --manifest-globs vss-extension.json in the my-extension folder. This command will generate a .vsix file (for example, publishID-1.0.0.vsix), which is the file that will be published to the Marketplace.
Image 2 – TFX CLI command-line screen, creating an extension
Go to the extension publishing page on the Marketplace, click New extension, and then Azure DevOps. Select the my-extension-1.0.0.vsix file and click Upload.
Image 3 – Screen for uploading an extension to the Marketplace
If everything goes well, you will see something like the image below.
Image 4 – Example of an extension published on the Marketplace
With the extension published, you will need to share it with your organization. To do this, click the extension's context menu and click Share/Unshare.
Image 5 – An extension's options menu, with the share option highlighted
Click + Organization.
Image 6 – Screen for sharing an extension
Type the name of your organization; when you click outside the text box, validation is performed and the extension is shared.
Image 7 – Example of how to share an extension with an organization
Installing the extension in your organization
After publishing the extension to the Marketplace, you can install it in your organization. To do this, follow the steps below.
Click the extension's context menu and click View Extension.
Image 8 – An extension's menu, with the view option highlighted
You will see something like the image below.
Image 9 – The extension's page in the Marketplace
Click Get it free.
Check that your organization is selected and click Install.
Image 10 – Extension installation screen in the Marketplace
If the installation goes well, you will see something like the image below.
Image 11 – Extension installation confirmation screen
After installation, you will see the extension in the list of installed extensions, and it can be used in your pipelines.
Conclusion
Custom extensions for Azure DevOps unlock functionality that isn't available out of the box. In this article, you learned how to create a custom extension and how to publish it to the Marketplace. I hope you enjoyed it and can apply what you've learned to your projects.
Documents can contain table data. For example, earnings reports, purchase order forms, and technical and operational manuals contain critical data in tables. You may need to extract this table data into Excel for various scenarios.
Extract each table into a specific worksheet in Excel.
Extract the data from all the similar tables and aggregate that data into a single table.
Here, we present two ways to generate Excel from a document’s table data:
Azure Function (HTTP Trigger based): This function takes a document and generates an Excel file with the table data in the document.
Apache Spark in Azure Synapse Analytics (in case you need to process large volumes of documents).
The Azure function extracts table data from the document using Form Recognizer’s “General Document” model and generates an Excel file with all the extracted tables. The following is the expected behavior:
Each table on a page gets extracted and stored to a sheet in the Excel document. The sheet name corresponds to the page number in the document.
Sometimes, there are key-value pairs on the page that need to be captured in the table. If you need that feature, leverage the add_key_value_pairs flag in the function.
Form Recognizer extracts column and row spans, and we take advantage of this to present the data as it is represented in the actual table.
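For readers who want to adapt the approach outside the provided function, here is a minimal sketch of the core extraction loop using the azure-ai-formrecognizer SDK and pandas; the endpoint, key, and file paths are placeholders, and the sheet-naming scheme is a simplification of the repository's behavior.

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
import pandas as pd

client = DocumentAnalysisClient("<form-recognizer-endpoint>", AzureKeyCredential("<key>"))

with open("report.pdf", "rb") as f:
    result = client.begin_analyze_document("prebuilt-document", document=f).result()

with pd.ExcelWriter("tables.xlsx") as writer:
    for i, table in enumerate(result.tables):
        # Rebuild the table grid from the recognized cells
        grid = [["" for _ in range(table.column_count)] for _ in range(table.row_count)]
        for cell in table.cells:
            grid[cell.row_index][cell.column_index] = cell.content
        page = table.bounding_regions[0].page_number if table.bounding_regions else i + 1
        pd.DataFrame(grid).to_excel(writer, sheet_name=f"page_{page}_table_{i}", index=False, header=False)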
Following are two sample extractions.
The top Excel file includes the key-value pairs added to the table; the bottom one is without the key-value pairs.
The Excel shown above is the extraction of table data from an earnings report. The earnings report file had multiple pages with tables, and the fourth page had two tables.
Solution
The Azure Function and Synapse Spark notebook are available in this Git repository.
Disclaimer
This document is not meant to replace any official documentation, including those found at docs.microsoft.com. Those documents are continually updated and maintained by Microsoft Corporation. If there is a discrepancy between this document and what you find in the Compliance User Interface (UI) or inside of a reference in docs.microsoft.com, you should always defer to that official documentation and contact your Microsoft Account team as needed. Links to the docs.microsoft.com data will be referenced both in the document steps as well as in the appendix.
All the following steps should be done with test data, and where possible, testing should be performed in a test environment. Testing should never be performed against production data.
Target Audience
Microsoft customers who want to better understand Microsoft Purview.
Document Scope
The purpose of this document (and series) is to provide insights into various use cases, announcements, customer-driven questions, etc.
Topics for this blog entry
Here are the topics covered in this issue of the blog:
Sensitivity Labels relating to SharePoint Lists
Sensitivity Label Encryption versus other types of Microsoft tenant encryption
How Sensitivity Labels conflicts are resolved
How to apply Sensitivity Labels to existing SharePoint Sites
Where can I find information on how Sensitivity Labels are applied to data within a SharePoint site (i.e. File label inheritance from the Site label)
Out-of-Scope
This blog series and entry is only meant to provide information, but for your specific use cases or needs, it is recommended that you contact your Microsoft Account Team to find other possible solutions to your needs.
Sensitivity labels and SharePoint Sites – Assorted topics
Sensitivity Label Encryption versus other types of Microsoft tenant encryption
Question #1
How does the encryption used by Sensitivity Labels compare to the encryption leveraged in BitLocker?
Answer #1
The following table breaks this down in detail and is taken from the following Microsoft Link.
Question #2
Can you apply Sensitivity Labels to SharePoint Lists?
Answer #2
The simple answer is NO while in the list, but YES once the list is exported to a file format.
Data in a SharePoint List is stored in a SQL table in SharePoint. At the time of writing this blog, you cannot apply a Sensitivity Label to SharePoint Online tables, including SharePoint Lists.
SharePoint Lists allow the data in the list to be exported to a file format. An automatic sensitivity label policy can apply a label to those file formats. Here is an example of those export options.
How to apply Sensitivity Labels to existing SharePoint Sites
Question #3
Can you apply Sensitivity Labels to existing SharePoint sites? If so, can this be automated (for example, with PowerShell)?
Answer #3
You can leverage PowerShell to apply SharePoint labels to multiple sites. Here is the link that explains how to accomplish this.
Look for these two sections in the link below for details:
Use PowerShell to apply a sensitivity label to multiple sites
View and manage sensitivity labels in the SharePoint admin center
Question #4
If you have an existing file with an existing Sensitivity Label that is stricter than the Sensitivity Label being inherited from the SharePoint site label, which Sensitivity Label is applied to the file?
Answer #4
Please refer to the link and table below for how Sensitivity Label conflicts are handled. Notice that any higher-priority label or user-applied label would not be overridden by a site label or an automatic labeling policy.
“When SharePoint is enabled for sensitivity labels, you can configure a default label for document libraries. Then, any new files uploaded to that library, or existing files edited in the library will have that label applied if they don’t already have a sensitivity label, or they have a sensitivity label but with lower priority.
For example, you configure the Confidential label as the default sensitivity label for a document library. A user who has General as their policy default label saves a new file in that library. SharePoint will label this file as Confidential because of that label’s higher priority.”
Today, we worked on a service request where our customer got the following error message: Managed Instance needs permissions to access Azure Active Directory. You need to be a 'Company Administrator' or a 'Global Administrator' to grant 'Read' permissions to the Managed Instance.
Azure SQL Managed Instance needs permissions to read Azure AD to successfully accomplish tasks such as authentication of users through security group membership or creation of new users. For this to work, we need to grant the Azure SQL Managed Instance permission to read Azure AD.
We can do this using the Azure portal or PowerShell. This operation can only be executed by Global Administrator or a Privileged Role Administrator in Azure AD.
You can assign the Directory Readers role to a group in Azure AD. The group owners can then add the managed instance identity as a member of this group, which would allow you to provision an Azure AD admin for the SQL Managed Instance. That means you need to have Global Administrator or Privileged Role Administrator access to provide the read permission to the SQL MI.
Directory Reader role
In order to assign the Directory Readers role to an identity, a user with Global Administrator or Privileged Role Administrator permissions is needed. Users who often manage or deploy SQL Database, SQL Managed Instance, or Azure Synapse may not have access to these highly privileged roles. This can often cause complications for users that create unplanned Azure SQL resources, or need help from highly privileged role members that are often inaccessible in large organizations.

For SQL Managed Instance, the Directory Readers role must be assigned to the managed instance identity before you can set up an Azure AD admin for the managed instance. Assigning the Directory Readers role to the server identity isn't required for SQL Database or Azure Synapse when setting up an Azure AD admin for the logical server. However, to enable Azure AD object creation in SQL Database or Azure Synapse on behalf of an Azure AD application, the Directory Readers role is required. If the role isn't assigned to the SQL logical server identity, creating Azure AD users in Azure SQL will fail. For more information, see Azure Active Directory service principal with Azure SQL.

Supporting article: https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-directory-readers-role?view=azuresql#assigning-the-directory-readers-role
We are excited to announce several new features that will help service organizations work more efficiently than ever before. The new not-to-exceed (NTE) feature ensures that work orders stay within budget, the trade and trade coverages features enable you to stay organized and efficient when providing groups of services to customers. In addition, significant performance improvements to the asset and functional location trees let you manage larger facilities and more complex assets. These features will streamline your workflow and save time, especially when working with vendors or subcontractors to fulfill jobs. Read on to learn more about how they work and why you should try them out today!
Not-to-Exceed functionality
Phenomenal service starts with understanding customers’ needs and expectations. The new not-to-exceed feature allows you to capture your customer’s price expectations before work begins. When new information changes work estimates, your frontline workers will be warned about going over the customer’s expected cost. This provides an opportunity to seek approval for additional charges before any work is performed.
Not to exceeds can be captured manually on work orders or be automatically applied when onboarding customers onto your service network. Customers often automatically approve work that is recurring, preventative, or low cost. By capturing expectations when onboarding a customer, they will be automatically applied to work orders, cutting out back-and-forth communications about pricing and expediting job completion.
In addition, cost expectations can be set via not-to-exceed values or based on a margin of the price expectations. For example, suppose your organization contracts maintenance of a location to a vendor and expects to make a 30% margin on these maintenance work orders. The system can automatically apply the appropriate price not-to-exceed to the work order for the customer and apply a cost not-to-exceed 30% less than the price, ensuring the vendor contacts you if costs are higher. This ensures your margin expectations are met by coordinating all stakeholders directly on the work order.
Trade and Trade Coverage
Set up trades to organize the services you offer and expedite work order setup. Instead of combing through hundreds of incident types when creating a work order, start with the trade. By applying a trade, the work order gains context about the job, letting it filter to only relevant incidents, and apply more appropriate not-to-exceeds. In addition, you can describe the services each customer should and should not receive as trade coverages. For example, you might provide roofing, pest control, and appliance services to the owner of a property, but only appliance services to the tenant. If a customer requests work that they are not covered for, the system will provide a warning for a chance to contact the customer or apply special pricing before moving forward. By setting up trades and trade coverages, the work order experience is streamlined to be more relevant to the customer’s problem.
Manage larger facilities and more complex assets
Significant performance improvements to the asset and functional location trees allow you to manage locations and assets at scale. Customers with 5000 assets will see load times go from 15 seconds down to 0.5s. In addition, asset systems that span across multiple locations are easier to manage with the updated user experience. Child assets at a different location than their parent will show both under their location and the parent asset in the tree with icons and tooltip explanations. For example, a CCTV security system at a campus spans two buildings: 83 and 86. The CCTV monitoring system at building 83 will show the child camera asset beneath it with an info icon. The CCTV camera will also show under building 86 with an info icon. Hovering over the icons will show a tooltip calling out the related asset, making for an easy search to pull up its details.
Track work order cost totals
Organizations can now track work order costs with enhanced total cards. Work order totals include the sum of all products and services with taxes. No more digging through product and service line items or creating custom fields to calculate the total cost of products and services on work orders!
Turn on new features
Go to Field Service Settings > Work Order / Booking tab, turn on these new features, and try them out today!