What’s new in SynapseML v0.11

We are pleased to announce SynapseML v0.11, a new version of our open-source distributed machine learning library that simplifies and accelerates the development of scalable AI. This release introduces many new features from the past year of development, as well as many bug fixes and improvements. This post gives a high-level overview of the most salient additions; curious readers can check out the full release notes for everything that is new.


 


OpenAI Language Models and Embeddings


A new release wouldn’t be complete without joining the large language model (LLM) hype train, and SynapseML v0.11 adds a variety of capabilities that make large-scale LLM usage simple and easy. In particular, SynapseML v0.11 introduces three new APIs for working with foundation models: `OpenAIPrompt`, `OpenAIEmbedding`, and `OpenAIChatCompletion`. The `OpenAIPrompt` API makes it easy to construct complex LLM prompts from columns of your DataFrame. Here’s a quick example of translating a DataFrame column called “Description” into emojis.


 

from synapse.ml.cognitive.openai import OpenAIPrompt

emoji_template = """
  Translate the following into emojis
  Word: {Description}
  Emoji: """

results = (OpenAIPrompt()
    .setPromptTemplate(emoji_template)
    .setErrorCol("error")
    .setOutputCol("Emoji")
    .transform(inputs))

 


 


This code will automatically look for a DataFrame column called “Description” and prompt your LLM (ChatGPT, GPT-3, GPT-4) with the constructed prompts. Our new OpenAI embedding classes make it easy to embed large tables of sentences quickly from your Apache Spark clusters. To learn more, see our docs on using the OpenAI embeddings API and the SynapseML KNN model to create an LLM-based vector search engine directly on your Spark cluster. Finally, the new OpenAIChatCompletion transformer allows users to submit large quantities of chat-based prompts to ChatGPT, enabling parallel inference of thousands of conversations at a time. We hope you find the new OpenAI integrations useful for building your next intelligent application.
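For example, here is a minimal sketch of generating embeddings for that same “Description” column with the new OpenAIEmbedding class; the key, resource, and deployment names are illustrative placeholders for your own Azure OpenAI resource:

from synapse.ml.cognitive.openai import OpenAIEmbedding

embedding = (OpenAIEmbedding()
    .setSubscriptionKey("YOUR_AZURE_OPENAI_KEY")     # placeholder
    .setCustomServiceName("your-openai-resource")    # placeholder
    .setDeploymentName("your-embedding-deployment")  # placeholder
    .setTextCol("Description")
    .setErrorCol("error")
    .setOutputCol("embeddings"))

embedded_df = embedding.transform(inputs)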


 


Simple Deep Learning


SynapseML v0.11 introduces a new Simple Deep Learning package that allows for the training of custom text and deep vision classifiers with only a few lines of code. This package combines the power of distributed deep network training with PyTorch Lightning and the simple, easy-to-use APIs of SynapseML. The new API allows users to fine-tune visual foundation models from torchvision as well as a variety of state-of-the-art text backbones from HuggingFace.


 


Here’s a quick example showing how to fine-tune custom vision networks:


 

from synapse.ml.dl import DeepVisionClassifier

train_df = spark.createDataFrame([
    ("PATH_TO_IMAGE_1.jpg", 1),
    ("PATH_TO_IMAGE_2.jpg", 2)
], ["image", "label"])

deep_vision_classifier = DeepVisionClassifier(
    backbone="resnet50",
    num_classes=2,
    batch_size=16,
    epochs=2,
)

deep_vision_model = deep_vision_classifier.fit(train_df)
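A similar sketch applies to fine-tuning a text backbone. The exact parameter names below (checkpoint, text_col, and so on) are our reading of the API and may differ slightly by version, and text_df is an assumed DataFrame with “text” and “label” columns, so check the Simple Deep Learning docs:

from synapse.ml.dl import DeepTextClassifier

deep_text_classifier = DeepTextClassifier(
    checkpoint="bert-base-cased",  # any HuggingFace text backbone
    text_col="text",               # assumed parameter names; see the docs
    label_col="label",
    num_labels=2,
    batch_size=16,
    epochs=1,
)

deep_text_model = deep_text_classifier.fit(text_df)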

 


 


Keep an eye out for upcoming releases of SynapseML featuring additional simple deep-learning algorithms that will make it easier than ever to train and deploy models at scale.


 


LightGBM v2


LightGBM is one of the most used features of SynapseML, and we heard your feedback on performance! SynapseML v0.11 introduces a completely refactored integration between LightGBM and Spark, called LightGBM v2. This integration aims for high performance by introducing a variety of new streaming APIs in the core LightGBM library that enable fast and memory-efficient data sharing between Spark and LightGBM. In particular, the new “streaming execution mode” has a >10x lower memory footprint than earlier versions of SynapseML, yielding fewer memory issues and faster training. Best of all, you can use the new mode by passing a single extra flag to your existing LightGBM models in SynapseML.
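On an existing model the change might look like the sketch below; we believe the flag is named executionMode in this release (later versions may rename it), so confirm against the LightGBM docs for your SynapseML version:

from synapse.ml.lightgbm import LightGBMClassifier

lgbm = (LightGBMClassifier()
    .setExecutionMode("streaming")  # assumed flag name for the new low-memory streaming mode
    .setLabelCol("label")
    .setFeaturesCol("features"))

lgbm_model = lgbm.fit(train_df)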


 


ONNX Model Hub


SynapseML supports a variety of new deep learning integrations with the ONNX runtime for fast, hardware-accelerated inference in all of the SynapseML languages (Scala, Java, Python, R, and .NET). In version 0.11 we add support for the new ONNX model hub, an open collection of state-of-the-art pre-trained ONNX models that can be quickly downloaded and embedded into Spark pipelines. This allowed us to completely deprecate and remove our old dependence on the CNTK deep learning library.
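As a sketch, a hub model can be pulled down with the onnx Python package and wrapped in a SynapseML ONNXModel; the tensor names below are illustrative, since the real input and output names depend on the specific model:

from onnx import hub
from synapse.ml.onnx import ONNXModel

model_proto = hub.load("resnet50")  # downloads the model from the ONNX model hub

onnx_model = (ONNXModel()
    .setModelPayload(model_proto.SerializeToString())
    .setFeedDict({"data": "features"})           # model input name -> DataFrame column
    .setFetchDict({"rawPrediction": "output"}))  # DataFrame column <- model output name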


 


To learn more about how you can embed deep networks into Spark pipelines, check out our ONNX episode in the new SynapseML video series:


 


 


Causal Learning


SynapseML v0.11 introduces a new package for causal learning that can help businesses and policymakers make more informed decisions. When trying to understand the impact of a “treatment” or intervention on an outcome, traditional approaches like correlation analysis or prediction models fall short as they do not necessarily establish causation. Causal inference aims to overcome these shortcomings by bridging the gap between prediction and decision-making. SynapseML’s causal learning package implements a technique called “Double machine learning”, which allows us to estimate treatment effects without data from controlled experiments. Unlike regression-based approaches, this approach can model non-linear relationships between confounders, treatment, and outcome. Users can run the DoubleMLEstimator using a simple code snippet like the one below:


 

from pyspark.ml.classification import LogisticRegression
from synapse.ml.causal import DoubleMLEstimator

dml = (DoubleMLEstimator()
      .setTreatmentCol("Treatment")
      .setTreatmentModel(LogisticRegression())
      .setOutcomeCol("Outcome")
      .setOutcomeModel(LogisticRegression())
      .setMaxIter(20))

dmlModel = dml.fit(dataset)
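Once fitted, the model exposes the estimated average treatment effect and its confidence interval; per the current causal docs these are available via `dmlModel.getAvgTreatmentEffect()` and `dmlModel.getConfidenceInterval()`.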

 


 


For more information, be sure to check out Dylan Wang’s guided tour of the DoubleMLEstimator on the SynapseML video series:


 


Vowpal Wabbit v2


Finally, SynapseML v0.11 introduces Vowpal Wabbit v2, the second-generation integration between the Vowpal Wabbit (VW) online optimization library and Apache Spark. With this update, users can work with data in VW’s native format directly using the new “VowpalWabbitGeneric” model, which makes working with Spark easier for existing VW users. This more direct integration also adds support for new cost functions and use cases, including “multi-class” and “cost-sensitive one against all” problems. The update also introduces a new progressive validation strategy and a new contextual bandit offline policy evaluation notebook that demonstrates how to evaluate VW models on large datasets.
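As a rough sketch, assuming a DataFrame vw_df with a single “input” column holding VW-format example strings (for example “1 | price:0.23 sqft:0.25”), and with parameter names per our reading of the API:

from synapse.ml.vw import VowpalWabbitGeneric

vw = (VowpalWabbitGeneric()
    .setNumPasses(3)
    .setPassThroughArgs("--loss_function logistic --link logistic"))

vw_model = vw.fit(vw_df)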


 


Conclusion


In conclusion, we are thrilled to share the new SynapseML release with you and hope you will find that it simplifies your distributed machine learning pipelines. This blog only covered the highlights, so be sure to check out the full release notes for all the updates and new features. Whether you are working with large language models, training custom classifiers, or performing causal inference, SynapseML makes it easier and faster to develop and deploy machine learning models at scale.


 




Field Service Palm Springs: Modernize service operations


We’re excited to return to Field Service Palm Springs from April 25 through April 27, 2023, at the JW Marriott Desert Springs Resort & Spa.

We will showcase how Connected Field Service helps leaders:

  • Move beyond the costly break/fix model to a proactive, predictive model.
  • Unlock the power of data and use Internet of Things (IoT), machine learning, and AI.
  • Transform their field operations and improve customer experience.

This year, we are hosting a thought leadership luncheon with our partner Hitachi Solutions to discuss the benefits of a connected field service and how to use data to remain competitive, and continuously improve business performance and customer experiences in an increasingly challenging environment.

Field service organizations manage hundreds of technicians with varying expertise, experiences, and skills. With 80 percent of consumers more likely to make a purchase from a brand that provides personalized experiences, organizations have come to realize how important quality service is to remain resilient despite uncertainty.1 Employees are working from remote or distributed locations, reducing the amount of personalized interaction. Meanwhile, remote monitoring of IoT devices continues to transform service from a cost center to a revenue generator.

Connected Field Service is the ability to add connected devices, powered by the Internet of Things (IoT), and use cloud capabilities to augment your existing field service operations. It enables organizations to transform the way they provide service from a costly, reactive break-fix model to a proactive, and in some cases even predictive, service model through the holistic combination of IoT diagnostics, scheduling, asset maintenance, and inventory on the same platform.

IoT has brought a new level of efficiency to the field service industry, helping service professionals address issues more proactively and minimize downtime. As McKinsey researchers predict, IoT applications could generate a value of over $470 billion annually by 2025 by enhancing operations across various industries.2

By integrating IoT signals across the enterprise, a connected field service helps organizations predict and resolve customer issues before the customer is aware, thereby ensuring consistent and dependable customer operations through hassle-free and preemptive field service.

Four Connected Field Service solutions

Connected Field Service combines four innovative Microsoft solutions that enable service leaders to digitally transform service organizations:

1. Microsoft Dynamics 365 Field Service: Optimizes service operations and inventory management

  • Reduces downtime by enabling service organizations to rapidly dispatch technicians
  • Helps service teams ensure a first-time fix by selecting the right technicians and parts for each call
  • Increases service efficiency by optimizing service call assignments, routes, and scheduling
  • Increases customer satisfaction by ensuring technicians are aware of service preferences

2. Azure IoT Remote Monitoring: Gathers data from connected assets

  • Helps technicians identify and repair malfunctioning assets before damage occurs
  • Reduces the need for service calls by enabling technicians to remotely diagnose equipment issues
  • Arms technicians with the diagnostic information they need to ensure a first-time fix
  • Enables service organizations to analyze equipment failure patterns to improve maintenance strategies

3. Microsoft Azure IoT Predictive Maintenance: Transforms asset data into insights

  • Reduces downtime by enabling technicians to anticipate and preempt equipment failures
  • Limits unnecessary maintenance by aligning equipment service strategies to observed patterns
  • Increases efficiency by enabling teams to service assets when the right parts and people are available
  • Enables organizations to explore new business models using insights from service data

4. Microsoft Dynamics 365 Sales: Identifies upsell and cross-sell opportunities

  • Provides service technicians with upsell and cross-sell recommendations
  • Enables team members in non-sales roles to advance deals with step-by-step guidance
  • Enables sales teams and service technicians to access customer information and sales resources in non-office environments
  • Drives visibility into product and parts usage across the organization

Connected Field Service becomes a reality with Microsoft. Service leaders can better manage costs, enhance service delivery, and increase customer satisfaction (CSAT) by proactively resolving customer issues before the customer is aware. Take advantage of smart, internet-ready devices that can detect and diagnose issues, integrating with field service management (FSM) software like Dynamics 365 Field Service to automatically initiate troubleshooting and, when needed, create work orders to dispatch technicians for onsite service. Learn how you can use technology to schedule preventative maintenance based on consumption rather than rely on a regimented schedule. Best of all, enjoy the flexibility of implementing the solution in stages so your team can ramp up via a natural progression. Learn more about the latest Dynamics 365 Field Service features.

Engage with Microsoft at Field Service Palm Springs 2023

We invite you to join us, along with our partners, to discover how Connected Field Service using Dynamics 365 Field Service and IoT can help create a seamless service experience that enhances customer experiences, increases cost savings, and improves efficiency.

Register for Field Service Palm Springs and visit the Microsoft booth (101/103) where you can meet with Dynamics 365 Field Service experts to discuss how connected data enables better experiences across your organization.

About Field Service Palm Springs

Over the past 20 years, Field Service Palm Springs has become the must-attend conference for service executives. From early IoT concepts to AI, Field Service is where innovative ideas spread and future strategies are created. Today, Field Service is a global event, with major conferences in Palm Springs, Amelia Island, San Diego, Amsterdam, and Singapore.

Since 2003, the top service and support minds have gathered in Palm Springs in April for the flagship Field Service conference. With forward-looking content and unique session formats that ensure you learn and network most effectively, Field Service is designed to help you achieve service excellence and drive profitability.



Sources

1 Forbes, 50 Stats Showing The Power of Personalization, 2020

2 FieldCircle, How To Utilize IoT in The Field Service Industry?


Manage attribute-based omnichannel sales pricing 


Pricing is one of the fundamental tools for boosting supply chain profits by better matching supply and demand. With the growth of e-commerce and constantly changing business environments, many businesses have begun reforming their pricing strategies in recent years to improve pricing transparency, supply chain agility, and margin optimization.

We are launching the public preview of Pricing management within Dynamics 365 Supply Chain Management, starting with version 10.0.33, to support sales managers in managing and executing attribute-based omnichannel sales pricing.

Why attribute-based omnichannel pricing?

  • Transition to omnichannel pricing:

Traditional business-to-business (B2B) organizations are increasingly considering switching to omnichannel sales and selling directly to end customers in order to have greater control over price and margins. The omnichannel transformation results in significant modifications to pricing models and rules.

By offering an omnichannel price engine, a central place to manage pricing rules and automated omnichannel pricing execution, Dynamics 365 Supply Chain Management aids B2B business in the transition to omnichannel pricing.

  • Transition to attribute-based pricing:

Working with marketing and product managers to understand a product’s differentiating features, target customer segments, and other pricing sensitivity elements is one of the important responsibilities of sales managers. Package type, delivery mode, and expected receipt date can all be pricing differentiators. By giving businesses the ability to convert data about customers, products, and orders into price attributes and to build pricing on different pricing structures, Dynamics 365 Supply Chain Management helps businesses adopt the attribute-based pricing model.

What is Pricing management?

Pricing management in Dynamics 365 Supply Chain Management leverages the Commerce Scale Unit (CSU) to help traditional B2B companies embrace omnichannel pricing. Pricing management enables attribute-based pricing for the price components across the sales pricing structure, including product base price, sales trade agreement price, discounts, charges, and rebate management.

How Pricing management supports business flows:

  1. DESIGN your pricing component types using price attributes.
  2. CONSTRUCT your pricing structure with pricing components, such as margin elements.
  3. MANAGE price markup based on product standard cost (for manufactured products) or vendor price catalog (for trading products).
  4. SIMULATE pricing rules and impacts.
  5. EXECUTE pricing calculation across channels.
  6. MONITOR promotion fund consumption with control.

  • Flexible data model for building price attributes. Price attributes can be based on categorized product pricing differentiators, customer groups, and order types.
  • Central place to offer, manage, and calculate pricing. Boost pricing transparency across channels, which is essential for aligning pricing strategies across multiple channels.
  • Manage complex pricing structures with price component breakdowns. When you place an order, the pricing details reflect the pricing structure, so you can understand the pricing calculation sequence and price breakdowns for future in-depth analysis.
  • Establish sophisticated pricing with a pricing simulator to evaluate the impact. When converting from B2B pricing to combined B2B and B2C pricing, consider discount concurrency, bundle sales, mandatory sales items, and bonus free item pricing rules.
  • Fund control to prevent margin leakage from promotion fund consumption.
  • Real-time cross-channel pricing execution with the pricing engine to quickly determine pricing while considering a variety of commercial aspects, such as the item’s general base price, sales trade agreement prices, long-term discount agreements, short-term promotion discounts, and retrospective rebate calculations for each sales order.
  • External applications can retrieve calculated pricing by leveraging the Commerce Scale Unit (CSU)-based Pricing APIs.

Next steps:

If your organization is on the journey to attribute-based omnichannel sales pricing, consider taking the next step with Pricing management within Dynamics 365 Supply Chain Management.

Get an overview of Pricing management by reading the document.

If you are a potential customer or partner and want to learn more, contact the product team directly by email (see the price attributes overview at https://learn.microsoft.com/en-us/dynamics365/supply-chain/pricing-management/price-attributes-overview).

Also check out the series of demo videos on the Pricing management Yammer group.

Not yet a Supply Chain Management customer? Take a guided tour



Drive brand loyalty with a customizable live chat widget in Dynamics 365 Customer Service 


Your brand is the face of your business. And often, the live chat widget on your website is the first point of contact for your customers. Having a strong brand for your customer service products can build trust and credibility, differentiate yourself from competitors, ensure consistency in communication, and create a positive emotional connection with customers.  

We are excited to announce our upgraded live chat widget that allows you to customize every detail of the widget to match your brand identity. From the font and color scheme to the iconography, you can now own every pixel of the widget and ensure that it represents your brand in the best possible way. 

Three customized chat widgets, each representing a different branding style

Style every component of the live chat widget to reflect your brand 

When you update your environment with the latest release, you can use our live chat script tag customization to edit the design of the live chat widget through CSS-represented styling. It is easier than ever to create a branded look for your chat widget. You can choose the font, color, style, and size of every component of the chat widget to reflect your brand. The image below shows examples of chat widget components and the different ways you can change them.

Editable elements in the default chat button and chat container

Learn more

Watch a quick video introduction.

To update your chat widget and customize every detail, check out our public documentation to learn more.

For more advanced customization options, try the custom chat widget, where you can customize functionality as well.


How to create a custom extension for Azure DevOps


In some cases, you need to create a custom extension for Azure DevOps, whether to add functionality that is not available natively or to modify an existing feature that does not meet your project’s needs. In this article, we will show how to create a custom extension for Azure DevOps and how to publish it to the Azure DevOps Marketplace.


Before you begin, make sure you:


  • Have an Azure DevOps account. If you don’t have one yet, you can create one by following the instructions available here.

  • Have a code editor installed, such as Visual Studio Code, which can be downloaded from code.visualstudio.com.

  • Have the LTS version of Node.js installed, available for download at nodejs.org, and the TypeScript compiler installed (version 4.0.2 or later recommended), which can be installed via npm (npmjs.com).

  • Have the TFX CLI installed (version 0.14.0 or later recommended). It can be installed globally via npm with the command npm i -g tfx-cli.


Preparing the development environment


  1. Create a folder for the extension, for example my-extension, and inside it create a subfolder, for example task.

  2. Open a terminal in the created folder and run npm init -y; the -y parameter accepts all the default options. You will notice that a file named package.json was created, containing the extension’s information.


    {
      "name": "my-extension",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "build": "tsc ./index.ts"
      },
      "keywords": [],
      "author": "",
      "license": "ISC"
    }



  3. Add azure-pipelines-task-lib as a dependency of the extension by running the command npm i azure-pipelines-task-lib --save-dev.




  4. Also add the TypeScript typings by running the commands npm i @types/node --save-dev and npm i @types/q --save-dev.




  5. Create a .gitignore file in the extension’s root folder and add the following content:


    node_modules



  6. Install the TypeScript compiler by running the command npm i typescript --save-dev.




  7. Create a tsconfig.json file in the extension’s root folder and add the following content:


    {
      "compilerOptions": {
        "target": "es6", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */
        "module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */
        "strict": true, /* Enable all strict type-checking options. */
        "esModuleInterop": true, /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
        "skipLibCheck": true, /* Skip type checking of declaration files. */
        "forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */
      }
    }



  8. Create a file named vss-extension.json in the root folder of the my-extension extension and add the following content:


    {
      "manifestVersion": 1,
      "id": "<>",
      "version": "1.0.0",
      "publisher": "<>",
      "name": "My Extension",
      "description": "My Extension",
      "public": false,
      "categories": [
        "Azure Pipelines"
      ],
      "targets": [
        {
          "id": "Microsoft.VisualStudio.Services"
        }
      ],
      "icons": {
        "default": "images/icon.png"
      },
      "files": [
        {
          "path": "task"
        }
      ],
      "contributions": [
        {
          "id": "my-extension",
          "description": "My Extension",
          "type": "ms.vss-distributed-task.task",
          "targets": [
            "ms.vss-distributed-task.tasks"
          ],
          "properties": {
            "name": "my-extension"
          }
        }
      ]
    }

    Replace the <> in id with a unique ID for your extension (you can generate an ID here). Replace the <> in publisher with the publisher ID created in step 1 of the publishing stage.




  9. In the root folder of your my-extension extension, create a folder named images and add an image named icon.png, 128×128 pixels in size. This image will be used as your extension’s icon in the Marketplace.




Creating the extension


After setting up the environment, you can create the extension.




  1. In the task folder, create a file named task.json and add the following content:


    {
      "$schema": "https://raw.githubusercontent.com/Microsoft/azure-pipelines-task-lib/master/tasks.schema.json",
      "id": "<>",
      "name": "My Extension",
      "friendlyName": "My Extension",
      "description": "My Extension",
      "helpMarkDown": "",
      "category": "Utility",
      "visibility": [
        "Build",
        "Release"
      ],
      "author": "Your Name",
      "version": {
        "Major": 1,
        "Minor": 0,
        "Patch": 0
      },
      "groups": [],
      "inputs": [],
      "execution": {
        "Node16": {
          "target": "index.js"
        }
      }
    }

    Replace the <> with the same GUID generated in step 8 of the environment setup stage.


    This file describes the task that will run in the pipeline. In this case, the task doesn’t do anything yet, but you can add inputs and logic to execute whatever you need.




  2. Next, create a file named index.ts and add the following content:


    const tl = require('azure-pipelines-task-lib/task');

    async function run() {
        try {
            tl.setResult(tl.TaskResult.Succeeded, 'My Extension Succeeded!');
        }
        catch (err) {
            if (err instanceof Error) {
                tl.setResult(tl.TaskResult.Failed, err.message);
            }
        }
    }

    run();


    This file is responsible for executing the task. In this case, it simply returns a success message, but you can add logic to execute whatever you need.




  3. In the task folder, add an image named icon.png, 32×32 pixels in size. This image will be used as your task’s icon in Azure Pipelines.




  4. In the terminal, run the tsc command to compile the TypeScript code to JavaScript. This command generates a file named index.js in the task folder.




  5. To run the task locally, run the command node index.js. You should see the message My Extension Succeeded!.


    C:\temp\my-extension\task> node index.js
    ##vso[task.debug]agent.TempDirectory=undefined
    ##vso[task.debug]agent.workFolder=undefined
    ##vso[task.debug]loading inputs and endpoints
    ##vso[task.debug]loading INPUT_CLEANTARGETFOLDER
    ##vso[task.debug]loading INPUT_CLIENTID
    ##vso[task.debug]loading INPUT_CLIENTSECRET
    ##vso[task.debug]loading INPUT_CONFLICTBEHAVIOUR
    ##vso[task.debug]loading INPUT_CONTENTS
    ##vso[task.debug]loading INPUT_DRIVEID
    ##vso[task.debug]loading INPUT_failOnEmptySource
    ##vso[task.debug]loading INPUT_FLATTENFOLDERS
    ##vso[task.debug]loading INPUT_SOURCEFOLDER
    ##vso[task.debug]loading INPUT_TARGETFOLDER
    ##vso[task.debug]loading INPUT_TENANTID
    ##vso[task.debug]loaded 11
    ##vso[task.debug]Agent.ProxyUrl=undefined
    ##vso[task.debug]Agent.CAInfo=undefined
    ##vso[task.debug]Agent.ClientCert=undefined
    ##vso[task.debug]Agent.SkipCertValidation=undefined
    ##vso[task.debug]task result: Succeeded
    ##vso[task.complete result=Succeeded;]My Extension Succeeded!
    C:\temp\my-extension\task>



Publishing the extension to the Marketplace


When your extension is ready, you can publish it to the Marketplace. To do this, you will need to create a publisher on the Marketplace.




  1. Go to the Marketplace and click Publish Extension. After signing in, you will be redirected to the publisher creation page. Fill in the fields and click Create.


    Creating a publisher




  2. In the terminal, from the my-extension folder, run the command tfx extension create --manifest-globs vss-extension.json. This command generates a .vsix file (named from your publisher ID and version, for example my-extension-1.0.0.vsix), which is the file that will be published to the Marketplace.


    CreateExtension




  3. Go to the extension publishing page on the Marketplace, click New extension, and then Azure DevOps. Select the my-extension-1.0.0.vsix file and click Upload.


    UploadExtension


    If everything goes well, you will see something like the image below.


    ExtensionPublished




  4. With the extension published, you need to share it with your organization. To do this, open the extension’s context menu and click Share/Unshare.


    ShareExtension


    Click + Organization.


    ShareExtension1


    Then type the name of your organization; when you click outside the text box, validation runs and the extension is shared.


    ShareExtension2




Installing the extension in your organization


After publishing the extension to the Marketplace, you can install it in your organization by following the steps below.




  1. Open the extension’s context menu and click View Extension.


    InstallExtension


    You will see something like the image below.


    InstallExtension1




  2. Click Get it free.




  3. Verify that your organization is selected and click Install.


    InstallExtension2


    If the installation goes well, you will see something like the image below.


    InstallExtension3


    After installation, the extension will appear in the list of installed extensions and can be used in your pipelines.




Conclusion


Custom extensions in Azure DevOps unlock functionality that is not available out of the box. In this article, you learned how to create a custom extension and how to publish it to the Marketplace. We hope you enjoyed it and can apply what you learned to your own projects.


References


  1. Create an organization

  2. Extension manifest reference

  3. Build/Release task examples

  4. Package and publish extensions

Extracting Table data from documents into an Excel Spreadsheet


Documents can contain table data. For example, earnings reports, purchase order forms, and technical and operational manuals contain critical data in tables. You may need to extract this table data into Excel for various scenarios.



  • Extract each table into a specific worksheet in Excel.

  • Extract the data from all the similar tables and aggregate that data into a single table.


Here, we present two ways to generate Excel from a document’s table data:



  1. Azure Function (HTTP Trigger based): This function takes a document and generates an Excel file with the table data in the document.

  2. Apache Spark in Azure Synapse Analytics (in case you need to process large volumes of documents).


The Azure function extracts table data from the document using Form Recognizer’s “General Document” model and generates an Excel file with all the extracted tables. The following is the expected behavior:



  • Each table on a page gets extracted and stored to a sheet in the Excel document. The sheet name corresponds to the page number in the document.

  • Sometimes, there are key-value pairs on the page that need to be captured in the table. If you need that feature, leverage the add_key_value_pairs flag in the function.

  • Form Recognizer extracts column and row spans, and we take advantage of this to present the data as it is represented in the actual table.
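The core of this behavior can be sketched in a few lines of Python, assuming the azure-ai-formrecognizer and openpyxl packages and illustrative endpoint, key, and file names; the deployed function layers the span handling and the add_key_value_pairs option on top of this:

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from openpyxl import Workbook

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"))                        # placeholder key

# Analyze the document with the General Document model
with open("earnings_report.pdf", "rb") as f:
    result = client.begin_analyze_document("prebuilt-document", document=f).result()

# Write each extracted table to its own worksheet
workbook = Workbook()
for index, table in enumerate(result.tables):
    page = table.bounding_regions[0].page_number
    sheet = workbook.create_sheet(title=f"page-{page}-table-{index}")
    for cell in table.cells:
        # Form Recognizer cells are 0-indexed; openpyxl is 1-indexed
        sheet.cell(row=cell.row_index + 1,
                   column=cell.column_index + 1,
                   value=cell.content)
workbook.save("tables.xlsx")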


 


Following are two sample extractions: one Excel output with the key-value pairs added to the table, and one without the key-value pairs.

The Excel shown above is the extraction of table data from an earnings report. The earnings report file had multiple pages with tables, and the fourth page had two tables.

Solution


The Azure Function and Synapse Spark notebook are available in this Git repository.



  • Deployment steps

  • Sample data: the repository has two sample documents to work with.

  • Notes on the Excel output:

    • If there is a page in the main document with no tables, no sheet will be created for that page.

    • The code has been updated to remove the extracted text from check boxes (“:selected:”, “:unselected:”) in the table.

    • If a cell does not have any alphanumeric text, it will be skipped. Update the code if you need different behavior.




 


How to leverage this Solution



  • Use this solution to generate an Excel file as mentioned above.

  • Integrate this with Power Automate so that end-users can use this seamlessly from O365 (email, SharePoint, or Teams).

  • Customize this to generate an aggregated table.


 


Contributors: Ben Ufuk Tezcan, Vinod Kurpad, Matt Nelson, Nicolas Uthurriague, Sreedhar Mallangi

Microsoft Purview in the Real World (April 21, 2023) – Sensitivity Labels and SharePoint Sites



 


Disclaimer


This document is not meant to replace any official documentation, including those found at docs.microsoft.com.  Those documents are continually updated and maintained by Microsoft Corporation.  If there is a discrepancy between this document and what you find in the Compliance User Interface (UI) or inside of a reference in docs.microsoft.com, you should always defer to that official documentation and contact your Microsoft Account team as needed.  Links to the docs.microsoft.com data will be referenced both in the document steps as well as in the appendix.


 


All the following steps should be done with test data, and where possible, testing should be performed in a test environment.  Testing should never be performed against production data.


 


Target Audience


Microsoft customers who want to better understand Microsoft Purview.


 


 


Document Scope


The purpose of this document (and series) is to provide insights into various use cases, announcements, customer-driven questions, etc.


 


Topics for this blog entry


Here are the topics covered in this issue of the blog:



  • Sensitivity Labels relating to SharePoint Lists

  • Sensitivity Label Encryption versus other types of Microsoft tenant encryption

  • How Sensitivity Labels conflicts are resolved

  • How to apply Sensitivity Labels to existing SharePoint Sites

  • Where can I find information on how Sensitivity Labels are applied to data within a SharePoint site (i.e. File label inheritance from the Site label)


 


Out-of-Scope


This blog series is only meant to provide information; for your specific use cases or needs, it is recommended that you contact your Microsoft Account Team to find other possible solutions.


 


Sensitivity labels and SharePoint Sites – Assorted topics


 


Sensitivity Label encryption versus other types of Microsoft tenant encryption


 


 


Question #1


How does the encryption used by Sensitivity Labels compare to the encryption leveraged in BitLocker?


 


Answer #1


The table in the following Microsoft article breaks this down in detail:


Encryption in Microsoft 365 – Microsoft Purview (compliance) | Microsoft Learn


 




 


Sensitivity Labels relating to SharePoint Lists


 


 


Question #2


Can you apply Sensitivity Labels to SharePoint Lists?


 


Answer #2


The simple answer is NO while in the list, but YES once the list is exported to a file format.


 


Data in a SharePoint List is stored within a SQL table in SharePoint. At the time of writing, you cannot apply a Sensitivity Label to SharePoint Online tables, including SharePoint Lists.


 


SharePoint Lists allow the data in the list to be exported to a file format (for example, Export to Excel or Export to CSV from the list toolbar), and an automatic sensitivity label policy can then apply a label to those exported files.


 




 


 


How to apply Sensitivity Labels to existing SharePoint Sites


 


Question #3


Can you apply Sensitivity Labels to existing SharePoint sites? If so, can this be automated (for example, with PowerShell)?


 


Answer #3


You can leverage PowerShell to apply sensitivity labels to multiple SharePoint sites. Here is the link that explains how to accomplish this.


Look for these two sections in the link below for details:



  • Use PowerShell to apply a sensitivity label to multiple sites

  • View and manage sensitivity labels in the SharePoint admin center


 


 


Use sensitivity labels with Microsoft Teams, Microsoft 365 Groups, and SharePoint sites – Microsoft Purview (compliance) | Microsoft Learn


 


How Sensitivity Labels conflicts are resolved


 


Question #4


If you have an existing file with a Sensitivity Label that is stricter than the Sensitivity Label being inherited from the SharePoint Site label, which Sensitivity Label is applied to the file?


 


Answer #4


Please refer to the linked article and its table for how Sensitivity Label conflicts are handled. Notice that a higher-priority label or a user-applied label will not be overridden by a site label or an automatic labeling policy.


 


Configure a default sensitivity label for a SharePoint document library – Microsoft Purview (compliance) | Microsoft Learn


 




 


File label inheritance from the Site label


 


Question #5


Where can you find the documentation on SharePoint Site labels and how label inheritance applies to files in that SharePoint site?


 


Answer #5


 


Here are two links that can help you with Sensitivity Labels and how they relate to SharePoint sites:


 



 



 


 


When it comes to default Sensitivity Labels for SharePoint sites and libraries (what I have called “label inheritance” above), this link is of use.


 



 


“When SharePoint is enabled for sensitivity labels, you can configure a default label for document libraries. Then, any new files uploaded to that library, or existing files edited in the library will have that label applied if they don’t already have a sensitivity label, or they have a sensitivity label but with lower priority.


 


For example, you configure the Confidential label as the default sensitivity label for a document library. A user who has General as their policy default label saves a new file in that library. SharePoint will label this file as Confidential because of that label’s higher priority.”


 


 


Appendix and Links

Lesson Learned #344: Managed Instance needs permissions to access Azure Active Directory.


Today, we worked on a service request where our customer got the following error message: Managed Instance needs permissions to access Azure Active Directory. You need to be a ‘Company Administrator’ or a ‘Global Administrator’ to grant ‘Read’ permissions to the Managed Instance.


 


Azure SQL Managed Instance needs permissions to read Azure AD to successfully accomplish tasks such as authentication of users through security group membership or creation of new users. For this to work, we need to grant the Azure SQL Managed Instance permission to read Azure AD.


 


We can do this using the Azure portal or PowerShell. This operation can only be executed by a Global Administrator or a Privileged Role Administrator in Azure AD.


 


You can assign the Directory Readers role to a group in Azure AD. The group owners can then add the managed instance identity as a member of this group, which would allow you to provision an Azure AD admin for the SQL Managed Instance. That means you need to have Global Administrator or Privileged Role Administrator access to provide the read permission to the SQL MI.


 


Directory Readers role


 


In order to assign the Directory Readers role to an identity, a user with Global Administrator or Privileged Role Administrator permissions is needed. Users who often manage or deploy SQL Database, SQL Managed Instance, or Azure Synapse may not have access to these highly privileged roles. This can often cause complications for users who create unplanned Azure SQL resources, or who need help from highly privileged role members that are often inaccessible in large organizations.

For SQL Managed Instance, the Directory Readers role must be assigned to the managed instance identity before you can set up an Azure AD admin for the managed instance.
 
Assigning the Directory Readers role to the server identity isn’t required for SQL Database or Azure Synapse when setting up an Azure AD admin for the logical server. However, to enable an Azure AD object creation in SQL Database or Azure Synapse on behalf of an Azure AD application, the Directory Readers role is required. If the role isn’t assigned to the SQL logical server identity, creating Azure AD users in Azure SQL will fail. For more information, see Azure Active Directory service principal with Azure SQL.
 
Supporting article: https://learn.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-directory-readers-role?view=azuresql#assigning-the-directory-readers-role

Field Service Wave 1 2023 Features 


We are excited to announce several new features that will help service organizations work more efficiently than ever before. The new not-to-exceed (NTE) feature ensures that work orders stay within budget, while the trade and trade coverage features help you stay organized and efficient when providing groups of services to customers. In addition, significant performance improvements to the asset and functional location trees let you manage larger facilities and more complex assets. These features will streamline your workflow and save time, especially when working with vendors or subcontractors to fulfill jobs. Read on to learn more about how they work and why you should try them out today!

Not-to-Exceed functionality

Phenomenal service starts with understanding customers’ needs and expectations. The new not-to-exceed feature allows you to capture your customer’s price expectations before work begins. When new information changes work estimates, your frontline workers will be warned about going over the customer’s expected cost. This provides an opportunity to seek approval for additional charges before any work is performed. 

Not-to-exceed limits can be captured manually on work orders or applied automatically when onboarding customers onto your service network. Customers often automatically approve work that is recurring, preventative, or low cost. By capturing these expectations when onboarding a customer, they will be applied to work orders automatically, cutting out back-and-forth communications about pricing and expediting job completion.

In addition, cost expectations can be set via not-to-exceed values or based on a margin of the price expectations. For example, suppose your organization contracts maintenance of a location to a vendor and expects to make a 30% margin on these maintenance work orders. The system can automatically apply the appropriate price not-to-exceed to the work order for the customer and apply a cost not-to-exceed 30% below the price, ensuring the vendor contacts you if costs run higher. This ensures your margin expectations are met by coordinating all stakeholders directly on the work order.

Trade and Trade Coverage 

Set up trades to organize the services you offer and expedite work order setup. Instead of combing through hundreds of incident types when creating a work order, start with the trade. By applying a trade, the work order gains context about the job, letting it filter to only relevant incidents, and apply more appropriate not-to-exceeds. In addition, you can describe the services each customer should and should not receive as trade coverages. For example, you might provide roofing, pest control, and appliance services to the owner of a property, but only appliance services to the tenant. If a customer requests work that they are not covered for, the system will provide a warning for a chance to contact the customer or apply special pricing before moving forward. By setting up trades and trade coverages, the work order experience is streamlined to be more relevant to the customer’s problem. 

Manage larger facilities and more complex assets 

Significant performance improvements to the asset and functional location trees allow you to manage locations and assets at scale. Customers with 5,000 assets will see load times drop from 15 seconds to 0.5 seconds. In addition, asset systems that span multiple locations are easier to manage with the updated user experience. Child assets at a different location than their parent will show both under their location and under the parent asset in the tree, with icons and tooltip explanations. For example, a CCTV security system at a campus spans two buildings: 83 and 86. The CCTV monitoring system at building 83 will show the child camera asset beneath it with an info icon. The CCTV camera will also show under building 86 with an info icon. Hovering over the icons shows a tooltip calling out the related asset, making it easy to pull up its details.

 

Track work order cost totals 

Organizations can now track work order costs with enhanced total cards. Work order totals include the sum of all products and services with taxes. No more digging through product and service line items or creating custom fields to calculate the total cost of products and services on work orders! 

Turn on new features 

Go to Field Service Settings > Work Order / Booking tab, turn on these new features, and try them out today! 

Learn More 

Not-to-exceed documentation 

Trade documentation 
