SharePoint community call – April 13th, 2021

This article is contributed. See the original author and article here.

The SharePoint community monthly call is our general monthly review of the latest SharePoint news (tools, extensions, features, capabilities, content, and training), engineering priorities, and community recognition for developers, IT pros, and makers. This monthly community call happens on the second Tuesday of each month. You can download the recurring invite from https://aka.ms/sp-call.

Call Summary:


Visit the new Microsoft 365 PnP Community hub at Microsoft Tech Community! Preview the new Microsoft 365 Extensibility look book gallery, or attend one of a growing list of Sharing is Caring events. The Microsoft 365 Update – Community (PnP) | April 2021 is now available. SPFx v1.12.1, with Node.js v14 and Gulp 4 support, was released to public beta today, with GA slated for the end of April. In this call, we also quickly reviewed developer and non-developer entries in UserVoice.

A huge thank you to the record number of contributors and organizations actively participating in this PnP community. You continue to amaze. The host of this call was Vesa Juvonen (Microsoft), @vesajuvonen. Q&A took place in the chat throughout the call.


Getting started with Microsoft Viva Connections Desktop – an employee-centric app in Teams with one-stop access to internal resources, global search, contextual actions, and a company-branded experience. A SharePoint home site powered by Microsoft Teams and backed by Microsoft security, privacy, and compliance. No additional licensing is required. A familiar, extensible platform that will include mobile this summer. Create the Viva Connections app package in PowerShell, then upload the package to the Teams admin center.


 


Actions: 


 



  • Register for the livestream and for a regional watch party:


  • Try the public beta of SPFx v1.12.1, available through npm.

  • Complete the Developer Success Survey – https://aka.ms/developersuccess

  • Join the M365 customer success platform panel – https://aka.ms/SuccessPanel

  • Register for Sharing is Caring Events:

    • First Time Contributor Session – April 27th   (EMEA, APAC & US friendly times available)

    • Community Docs Session – April

    • PnP – SPFx Developer Workstation Setup – April 29th

    • PnP SPFx Samples – Solving SPFx version differences using Node Version Manager – April 15th

    • First Time Presenter – April 21st

    • More than Code with VSCode – April 14th & 28th

    • Maturity Model Practitioners – April 20th

    • PnP Office Hours – 1:1 session – Register



  • Download the recurring invite for this call – https://aka.ms/sp-call.


 


You can check the latest updates in the monthly summary and at aka.ms/spdev-blog.


This call was delivered on Tuesday, April 13, 2021. The call agenda is reflected below with direct links to specific sections. You can jump directly to a specific topic by clicking the topic’s timestamp, which will open the recording at that point on the Microsoft 365 Community YouTube channel.


 


Call Agenda:


 



  • SharePoint community update with latest news and roadmap – 2:47

  • UserVoice status for non-dev focused SharePoint entries – 8:36

  • UserVoice status for dev focused SharePoint Framework entries – 9:45 

  • Community contributors and companies which have been involved in the past month – 12:50 

  • Topic: Getting started with Microsoft Viva Connections Desktop – Tejas Mehta (Microsoft) | @tpmehta and Prateek Dudeja (Microsoft) | @PrateekDudeja4 – 16:16


 


The full recording of this session is available on the Microsoft 365 & SharePoint Community YouTube channel – http://aka.ms/m365pnp-videos.


 



  • Presentation slides used in this community call are found at OneDrive.


 


Resources: 


Additional resources on covered topics and discussions.


 



 


Additional Resources: 


 



 


Upcoming calls | Recurring invites:


 



 


“Too many links, can’t remember” – not a problem… just one URL is enough for all Microsoft 365 community topics – http://aka.ms/m365pnp.


 


“Sharing is caring”




SharePoint Team, Microsoft – 14th of April 2021

Manual migration from classic Cloud Service to Cloud Service Extended Support with ARM template

Cloud Service Extended Support is a new service type similar to the classic Cloud Service. The biggest difference between them is that Cloud Service Extended Support is an ARM (Azure Resource Manager) based resource, so it can be used with ARM features such as tags, policies, RBAC, and ARM templates.


 


For migrating from the classic Cloud Service to Cloud Service Extended Support, Azure officially provides an approach called in-place migration. Detailed information can be found at: https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/in-place-migration-portal.


 


In this blog, we will show how to manually create a new Cloud Service Extended Support service and deploy the same project into it. The classic Cloud Service project has the following features, and after migration all of them will be kept:



  1. Remote Desktop

  2. SSL certificate for HTTPS endpoints

  3. Using the same IP address before and after migration


The main advantages of manual migration


Before showing how to do this manual migration, let us highlight its main advantages:



  • You can choose the name of the new Cloud Service Extended Support yourself, for example a user-friendly name such as CSEStest.

  • Both manual and in-place migration require modifying the project code. The manual migration process already includes this modification; with in-place migration, it may be more difficult for you to modify the code.

  • The manual migration process uses an ARM template to deploy the new resources, so you can make changes of your own, such as enabling the RDP extension even if it was not enabled in the classic Cloud Service. In-place migration does not allow this; it keeps the same configuration.


Before you begin


There are some additional points to check before we start the migration. Please check the following points carefully, since unexpected issues may occur if they are not met:



  1. Follow the “Before you begin” part of the document to check that you are an administrator/co-administrator of the subscription.

  2. On the subscription page of the Azure portal, check that the resource providers Microsoft.Compute, Microsoft.Network, and Microsoft.Storage are registered.


Example of resource provider registration


 



  3. We should have a running classic Cloud Service and its complete project code. If it uses a certificate for any purpose (an HTTPS endpoint in this blog), that certificate in .pfx format and its password are also needed for the deployment.


With the above three conditions met, there should not be any other permission issues during this manual migration. The process also requires a container in a storage account. If you do not have one yet, please follow this document to create a storage account, then follow the next two screenshots to create a new container.


https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal#create-a-storage-account-1


Create container 1


 


Create container 2


 


Then, let us move on to the main process.


Reserve the IP address of the classic Cloud Service and upgrade it to be used for Cloud Service Extended Support


In this example, my classic Cloud Service is testcstocses in resource group cstocses, in East US region.



  1. Use a PowerShell command to keep the current IP address as a classic Reserved IP named ReservedIPCSES. The location must be the same as the location of your classic Cloud Service.


 

New-AzureReservedIP -ReservedIPName ReservedIPCSES -ServiceName testcstocses -Location "East US"

 


Keep the IP as a classic reserved IP


 



  2. Follow the document to upgrade the generated classic Reserved IP to a Basic SKU public IP. (Note: there is a bug in the script in the official document.)


https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-public-ip-address-upgrade?tabs=option-upgrade-powershell%2Coption-migrate-powershell#upgrade-migrate-a-classic-reserved-ip-to-a-static-public-ip


a. Verify whether the classic Reserved IP is still associated with the classic Cloud Service and, if so, whether the association can be removed. (By design, the newly generated classic Reserved IP should still be associated with the classic Cloud Service.)


 

## Variables for the command ##
$name = 'ReservedIPCSES'

## This section is only needed if the Reserved IP is not already disassociated from any Cloud Services ##
$service = 'testcstocses'
Remove-AzureReservedIPAssociation -ReservedIPName $name -ServiceName $service

$validate = Move-AzureReservedIP -ReservedIPName $name -Validate
$validate.ValidationMessages

 


PowerShell commands to verify association between classic Cloud Service and generated reserved IP


b. If the validation result in the above screenshot is Succeeded, run the following commands to remove the link.


 

Move-AzureReservedIP -ReservedIPName $name -Prepare
Move-AzureReservedIP -ReservedIPName $name -Commit

 


Upgrade classic Reserved IP to basic tier


 


The newly generated Basic SKU public IP will be in a new resource group called {publicipname}-Migrated.


Migrated basic tier reserved IP


 



  3. Set a DNS name on this public IP. (Optional but recommended, since the new Cloud Service Extended Support will not offer a DNS name the way the classic Cloud Service did.)


Configure DNS name on public IP


 



  4. Move the public IP into the original resource group.


Move public IP to specific resource group 1


 


Move public IP to specific resource group 2


 



  5. (Optional) If your original classic Cloud Service uses any certificate, create a Key Vault in the same region (East US in this example) and upload the .pfx certificate.


Create Key Vault 1


 


Create Key Vault 2


 


Do not forget to tick the “Azure Virtual Machines for deployment” checkbox on the Access policy page.


Create Key Vault 3


 


After creating the Key Vault, import the certificate.


Upload certificate into Key Vault 1


 


Upload certificate into Key Vault 2


 


Upload certificate into Key Vault result


 



  6. Follow the official documents to modify the classic Cloud Service code so it meets the Cloud Service Extended Support requirements.


https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-prerequisite


https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-template


The certificate-related sections concern the usage of the certificate; they are not necessary for every Cloud Service project.


Take note of the key values in these files (role name, VM size, instance count, virtual network, subnet, and reserved IP names); this is important information that we will use in the following steps.


 


.csdef file


<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureCloudService2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
    <WebRole name="WebRole1" vmsize="Standard_D1_V2">
        <Sites>
            <Site name="Web">
                <Bindings>
                    <Binding name="Endpoint1" endpointName="Endpoint1" />
                    <Binding name="HttpsIn" endpointName="HttpsIn" />
                </Bindings>
            </Site>
        </Sites>
        <Endpoints>
            <InputEndpoint name="Endpoint1" protocol="http" port="80" />
            <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="Certificate1" />
        </Endpoints>
        <Certificates>
            <Certificate name="Certificate1" storeLocation="LocalMachine" storeName="My" permissionLevel="limitedOrElevated"/>
        </Certificates>
    </WebRole>
</ServiceDefinition>


 


.cscfg file


<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="AzureCloudService2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="6" osVersion="*" schemaVersion="2015-04.2.6">
    <Role name="WebRole1">
        <Instances count="1" />
        <Certificates>
            <Certificate name="Certificate1" thumbprint="909011xxxxxxxxxx712303838613" thumbprintAlgorithm="sha1" />
        </Certificates>
    </Role>
    <NetworkConfiguration>
        <VirtualNetworkSite name="cstocsesvnet" />
        <AddressAssignments>
            <InstanceAddress roleName="WebRole1">
                <Subnets>
                    <Subnet name="WebRole1_subnet" />
                </Subnets>
            </InstanceAddress>
            <ReservedIPs>
                <ReservedIP name="ReservedIPCSES" />
            </ReservedIPs>
        </AddressAssignments>
    </NetworkConfiguration>
</ServiceConfiguration>


 


The thumbprint of the certificate can be found in the Key Vault, on the Certificates page.


Thumbprint of the certificate in Key Vault
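As a side note, the SHA-1 thumbprint shown in Key Vault can also be reproduced locally: it is simply the SHA-1 hash of the certificate’s DER encoding, written as uppercase hex. A minimal Python sketch (the input bytes here are dummy placeholders; in practice you would hash the DER-encoded certificate exported from your .pfx, e.g. via OpenSSL):

```python
import hashlib

def cert_thumbprint(der_bytes: bytes) -> str:
    # The SHA-1 thumbprint shown by Key Vault and used in the .cscfg is the
    # SHA-1 hash of the certificate's DER encoding, as uppercase hex.
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Dummy bytes just to show the output format; a real certificate would be
# the DER-encoded bytes exported from the .pfx file.
print(cert_thumbprint(b"dummy certificate bytes"))  # 40 hex characters
```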


 



  7. Package the project as we do for a classic Cloud Service project, then copy out the .cspkg and .cscfg files.


Package in Visual Studio


 


Package result for a classic Cloud Service project (certificate isn’t from the package process)


 



  8. Upload the .cscfg file and .cspkg file into a container of the storage account.


Upload .cspkg and .cscfg to Storage container


 



  9. After uploading, generate the SAS URL of these two files one by one: click on the file, switch to the Generate SAS tab, click Generate SAS token and URL, and find the needed SAS URL at the end of the page.


Generate SAS token of .cscfg and .cspkg


 


The generated SAS URLs should look like:


https://storageforcses.blob.core.windows.net/test/AzureCloudService2.cspkg?sp=r&st=2021-04-02T10:47:04Z&se=2021-04-02T18:47:04Z&spr=https&sv=2020-02-10&sr=b&sig=osktC5FtJpI1uX28D2UMtJaZVi8FmhW6kpIHH%2FuFTUU%3D


         


https://storageforcses.blob.core.windows.net/test/ServiceConfiguration.Cloud.cscfg?sp=r&st=2021-04-02T10:48:12Z&se=2021-04-02T18:48:12Z&spr=https&sv=2020-02-10&sr=b&sig=8BmMScBU%2Bm6hRkKtUoiRNs%2F2NHYiHay8qxJq5TM%2BkGU%3D
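An expired or read-only-missing SAS token is a common cause of deployment failures in the later steps, so it can be worth sanity-checking the generated URLs before continuing. A small illustrative Python sketch (standard library only; the sig value is truncated here) that inspects the sp (permissions) and se (expiry) query parameters of a SAS URL like the ones above:

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

def check_sas_url(sas_url, now=None):
    """Return (grants_read, still_valid) for a blob SAS URL."""
    query = parse_qs(urlparse(sas_url).query)
    grants_read = "r" in query.get("sp", [""])[0]   # sp = granted permissions
    expiry = datetime.strptime(query["se"][0], "%Y-%m-%dT%H:%M:%SZ")
    expiry = expiry.replace(tzinfo=timezone.utc)    # se = expiry time (UTC)
    now = now or datetime.now(timezone.utc)
    return grants_read, now < expiry

url = ("https://storageforcses.blob.core.windows.net/test/AzureCloudService2.cspkg"
       "?sp=r&st=2021-04-02T10:47:04Z&se=2021-04-02T18:47:04Z"
       "&spr=https&sv=2020-02-10&sr=b&sig=xxx")
print(check_sas_url(url))  # the 2021 token above has long since expired
```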



  10. (Optional) If you use Key Vault to store the certificate, please visit the Certificates page and click on the uploaded certificate, then on its current version. You’ll find a URL at the end of the page with the format:


https://{keyvaultname}.vault.azure.net/secrets/{certificatename}/{id}


Find secret URL of certificate 1


 


Find secret URL of certificate 2


 


Make a note of this URL; it will be used in the next step. The following is my example: https://cstocses.vault.azure.net/secrets/csescert/e2f6ab1744374de38ae831ba8896edb9


         


Also, please make a note of the subscription ID, the name of the resource group where the Key Vault is deployed, and the Key Vault service name. These will also be used in the next step.
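These three values are combined into the sourceVault resource URI used in the ARM parameter file. A tiny illustrative helper makes the expected format explicit:

```python
def key_vault_resource_id(subscription_id, resource_group, vault_name):
    # Format of the sourceVault "id" in the ARM parameters:
    # /subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.KeyVault/vaults/{vault}
    return (f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.KeyVault/vaults/{vault_name}")

# Using the example names from this blog post
print(key_vault_resource_id("4f27bec7-26bd-40f7-af24-5962a53d921e", "cstocses", "cstocses"))
```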



  11. Modify the following ARM template and parameters, then save them as JSON files. In my test, I saved them as template.json and parameter.json.


 


Tips: The certificate-related parts are optional; if you do not use any certificate, you can remove them from both the template and parameter files. The values taken from the .csdef and .cscfg files (role names, sizes, virtual network, subnets, reserved IP) must be the same and correct.


 


ARM template: (apart from the optional certificate parts mentioned above, the ARM template file does not need to be modified)


{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "cloudServiceName": {
      "type": "string",
      "metadata": { "description": "Name of the cloud service" }
    },
    "location": {
      "type": "string",
      "metadata": { "description": "Location of the cloud service" }
    },
    "deploymentLabel": {
      "type": "string",
      "metadata": { "description": "Label of the deployment" }
    },
    "packageSasUri": {
      "type": "securestring",
      "metadata": { "description": "SAS Uri of the CSPKG file to deploy" }
    },
    "configurationSasUri": {
      "type": "securestring",
      "metadata": { "description": "SAS Uri of the service configuration (.cscfg)" }
    },
    "roles": {
      "type": "array",
      "metadata": { "description": "Roles created in the cloud service application" }
    },
    "vnetName": {
      "type": "string",
      "defaultValue": "csesVNet",
      "metadata": { "description": "Name of virtual network" }
    },
    "subnetSetting": {
      "type": "array",
      "metadata": { "description": "Setting of subnets" }
    },
    "publicIPName": {
      "type": "string",
      "defaultValue": "contosocsIP",
      "metadata": { "description": "Name of public IP address" }
    },
    "upgradeMode": {
      "type": "string",
      "defaultValue": "Auto",
      "metadata": { "UpgradeMode": "UpgradeMode of the CloudService" }
    },
    "secrets": {
      "type": "array",
      "metadata": { "description": "The key vault id and certificates referenced in the .cscfg file" }
    },
    "rdpPublicConfig": {
      "type": "string",
      "metadata": { "description": "Public config of remote desktop extension" }
    },
    "rdpPrivateConfig": {
      "type": "securestring",
      "metadata": { "description": "Private config of remote desktop extension" }
    }
  },
  "variables": {
    "cloudServiceName": "[parameters('cloudServiceName')]",
    "subscriptionID": "[subscription().subscriptionId]",
    "lbName": "[concat(variables('cloudServiceName'), 'LB')]",
    "lbFEName": "[concat(variables('cloudServiceName'), 'LBFE')]",
    "resourcePrefix": "[concat('/subscriptions/', variables('subscriptionID'), '/resourceGroups/', resourceGroup().name, '/providers/')]"
  },
  "resources": [
    {
      "apiVersion": "2019-08-01",
      "type": "Microsoft.Network/virtualNetworks",
      "name": "[parameters('vnetName')]",
      "location": "[parameters('location')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [ "10.0.0.0/16" ]
        },
        "subnets": "[parameters('subnetSetting')]"
      }
    },
    {
      "apiVersion": "2020-10-01-preview",
      "type": "Microsoft.Compute/cloudServices",
      "name": "[variables('cloudServiceName')]",
      "location": "[parameters('location')]",
      "tags": {
        "DeploymentLabel": "[parameters('deploymentLabel')]",
        "DeployFromVisualStudio": "true"
      },
      "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
      ],
      "properties": {
        "packageUrl": "[parameters('packageSasUri')]",
        "configurationUrl": "[parameters('configurationSasUri')]",
        "upgradeMode": "[parameters('upgradeMode')]",
        "roleProfile": {
          "roles": "[parameters('roles')]"
        },
        "networkProfile": {
          "loadBalancerConfigurations": [
            {
              "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/loadBalancers/', variables('lbName'))]",
              "name": "[variables('lbName')]",
              "properties": {
                "frontendIPConfigurations": [
                  {
                    "name": "[variables('lbFEName')]",
                    "properties": {
                      "publicIPAddress": {
                        "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]"
                      }
                    }
                  }
                ]
              }
            }
          ]
        },
        "osProfile": {
          "secrets": "[parameters('secrets')]"
        },
        "extensionProfile": {
          "extensions": [
            {
              "name": "RDPExtension",
              "properties": {
                "autoUpgradeMinorVersion": true,
                "publisher": "Microsoft.Windows.Azure.Extensions",
                "type": "RDP",
                "typeHandlerVersion": "1.2.1",
                "settings": "[parameters('rdpPublicConfig')]",
                "protectedSettings": "[parameters('rdpPrivateConfig')]"
              }
            }
          ]
        }
      }
    }
  ]
}


Parameters:


 


Tips:


  • The roles and subnetSetting parameters must contain one entry per role defined in the .csdef and .cscfg files. For example:


"roles": {
    "value": [
        {
            "name": "WebRole1",
            "sku": {
                "name": "Standard_D1_v2",
                "tier": "Standard",
                "capacity": "1"
            }
        },
        {
            "name": "WorkerRole1",
            "sku": {
                "name": "Standard_D1_v2",
                "tier": "Standard",
                "capacity": "2"
            }
        }
    ]
},
…
"subnetSetting": {
    "value": [
        {
            "name": "WebRole1_subnet",
            "properties": {
                "addressPrefix": "10.0.0.0/24"
            }
        },
        {
            "name": "WorkerRole1_subnet",
            "properties": {
                "addressPrefix": "10.0.1.0/24"
            }
        }
    ]
},



  • In the secrets part, sourceVault is the resource URI of your Key Vault. It is constructed as /subscriptions/{subscription-id}/resourceGroups/{resourcegroup-name}/providers/Microsoft.KeyVault/vaults/{keyvault-name}. The certificateUrl is the one we noted in step 10.



  • In rdpPublicConfig and rdpPrivateConfig, we only need to change the username and password we want to use for RDP. For example, here I use “admin” as the username and “Password” as the password.


{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "cloudServiceName": {
            "value": "cstocses"
        },
        "location": {
            "value": "eastus"
        },
        "deploymentLabel": {
            "value": "deployment label of cstocses by ARM template"
        },
        "packageSasUri": {
            "value": "https://storageforcses.blob.core.windows.net/test/AzureCloudService2.cspkg?sp=r&st=2021-04-02T10:47:04Z&se=2021-04-02T18:47:04Z&spr=https&sv=2020-02-10&sr=b&sig=osktC5FtJpI1uX28D2UMtJaZVi8FmhW6kpIHH%2FuFTUU%3D"
        },
        "configurationSasUri": {
            "value": "https://storageforcses.blob.core.windows.net/test/ServiceConfiguration.Cloud.cscfg?sp=r&st=2021-04-02T10:48:12Z&se=2021-04-02T18:48:12Z&spr=https&sv=2020-02-10&sr=b&sig=8BmMScBU%2Bm6hRkKtUoiRNs%2F2NHYiHay8qxJq5TM%2BkGU%3D"
        },
        "roles": {
            "value": [
                {
                    "name": "WebRole1",
                    "sku": {
                        "name": "Standard_D1_v2",
                        "tier": "Standard",
                        "capacity": "1"
                    }
                }
            ]
        },
        "vnetName": {
            "value": "cstocsesVNet"
        },
        "subnetSetting": {
            "value": [
                {
                    "name": "WebRole1_subnet",
                    "properties": {
                        "addressPrefix": "10.0.0.0/24"
                    }
                }
            ]
        },
        "publicIPName": {
            "value": "ReservedIPCSES"
        },
        "upgradeMode": {
            "value": "Auto"
        },
        "secrets": {
            "value": [
                {
                    "sourceVault": {
                        "id": "/subscriptions/4f27bec7-26bd-40f7-af24-5962a53d921e/resourceGroups/cstocses/providers/Microsoft.KeyVault/vaults/cstocses"
                    },
                    "vaultCertificates": [
                        {
                            "certificateUrl": "https://cstocses.vault.azure.net/secrets/csescert/e2f6ab1744374de38ae831ba8896edb9"
                        }
                    ]
                }
            ]
        },
        "rdpPublicConfig": {
            "value": "<PublicConfig>\r\n  <UserName>admin</UserName>\r\n  <Expiration>4/2/2022 12:00:00 AM</Expiration>\r\n</PublicConfig>"
        },
        "rdpPrivateConfig": {
            "value": "<PrivateConfig>\r\n  <Password>Password</Password>\r\n</PrivateConfig>"
        }
    }
}
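One detail that is easy to get wrong in the rdpPublicConfig and rdpPrivateConfig values is that the line breaks inside the XML are literal CRLF characters, which must appear as \r\n escapes in the JSON string. An illustrative Python sketch that builds both values (using the same example username and password as above):

```python
import json

def rdp_public_config(username, expiration):
    # PublicConfig for the RDP extension; the \r\n are real CRLF characters
    # that end up escaped as \r\n in the JSON parameter file.
    return ("<PublicConfig>\r\n"
            f"  <UserName>{username}</UserName>\r\n"
            f"  <Expiration>{expiration}</Expiration>\r\n"
            "</PublicConfig>")

def rdp_private_config(password):
    return ("<PrivateConfig>\r\n"
            f"  <Password>{password}</Password>\r\n"
            "</PrivateConfig>")

# json.dumps shows exactly what belongs in the parameter file's "value" field
print(json.dumps(rdp_public_config("admin", "4/2/2022 12:00:00 AM")))
```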



  12. Deploy the ARM template with a PowerShell command. (PowerShell is not required; you can also use the Azure portal or the Azure CLI to deploy the template.)


https://docs.microsoft.com/en-us/powershell/module/az.resources/new-azresourcegroupdeployment?view=azps-5.7.0


 


Please remember to replace the resource group name and the paths of the template and parameter JSON files in the command before running it.

New-AzResourceGroupDeployment -ResourceGroupName "cstocses" -TemplateFile "C:\Users\jerryz\Desktop\CSES test\demo\template.json" -TemplateParameterFile "C:\Users\jerryz\Desktop\CSES test\demo\parameter.json"
ARM template deployment result


 


 


Result: (the classic Cloud Service is deleted)


All created resources in this process


 


 


                  

New ways to train custom language models – effortlessly!



Haim Sabo, Senior Software Engineer at Video Indexer, AEDPLS

*This article was originally published on July 18, 2019, on Microsoft Azure blogs.

 



Video Indexer (VI), the AI service for Azure Media Services, enables the customization of language models by allowing customers to upload examples of sentences or words belonging to the vocabulary of their specific use case. Since speech recognition can sometimes be tricky, VI lets you train and adapt the models for your specific domain. Harnessing this capability allows organizations to improve the accuracy of the Video Indexer generated transcriptions in their accounts.


Over the past few months, we have worked on a series of enhancements to make this customization process even more effective and easy to accomplish. Enhancements include automatically capturing any transcript edits done manually or via API as well as allowing customers to add closed caption files to further train their custom language models.


The idea behind these additions is to create a feedback loop: organizations begin with a base out-of-the-box language model and gradually improve its accuracy through manual edits and other resources over a period of time, resulting in a model that is fine-tuned to their needs with minimal effort.


Each account’s custom language models, and all the enhancements this blog describes, are private and are not shared between accounts.


In the following sections, I will drill down on the different ways that this can be done.


Improving your custom language model using transcript updates


Once a video is indexed in VI, customers can use the Video Indexer portal to introduce manual edits and fixes to the automatic transcription of the video. This can be done by clicking the Edit button at the top-right corner of the video’s Timeline pane to move to edit mode, and then simply updating the text, as seen in the image below.


 


HaimSabo_0-1615982726876.png

 


The changes are reflected in the transcript, captured in a text file named From transcript edits, and automatically inserted into the language model that was used to index the video. If you were not already using a custom language model, the updates will be added to a new Account Adaptations language model created in the account.


You can manage the language models in your account and see the From transcript edits files by going to the Language tab in the content model customization page of the VI website.


Once one of the From transcript edits files is opened, you can review the old and new sentences created by the manual updates, and the differences between them as shown below.


HaimSabo_1-1615982726984.png

 


All that is left to do is click Train to update the language model with the latest changes. From that point on, these changes will be reflected in all future videos indexed using that model. Of course, you do not have to use the portal to train the model; the same can be done via the Video Indexer train language model API. Using the API opens new possibilities, such as automating a recurring training process to leverage ongoing updates.


HaimSabo_2-1615982727230.png

 


There is also an update video transcript API that allows customers to update the entire transcript of a video in their account by uploading a VTT file that includes the updates. As part of the new enhancements, when a customer uses this API, Video Indexer also automatically adds the uploaded transcript to the relevant custom model in order to leverage the content as training material. For example, calling update video transcript for a video titled “Godfather” will result in a new transcript file named “Godfather” in the custom language model that was used to index that video.


Improving your custom language model using closed caption files


Another quick and effective way to train your custom language model is to leverage existing closed caption files as training material. This can be done manually, by uploading a new closed caption file to an existing model in the portal, as shown in the image below, or by using the create language model and update language model APIs to upload VTT, SRT, or TTML files (similar to what was done until now with TXT files).


 


HaimSabo_3-1615982727119.png

 


Once uploaded, VI cleans up all the metadata in the file and strips it down to the text itself. You can see the before and after results below.


Type: VTT

Before:
NOTE Confidence: 0.891635
00:00:02.620 --> 00:00:05.080
but you don’t like meetings before 10 AM.

After:
but you don’t like meetings before 10 AM.

Type: SRT

Before:
2
00:00:02,620 --> 00:00:05,080
but you don’t like meetings before 10 AM.

After:
but you don’t like meetings before 10 AM.

Type: TTML

Before:
<!-- Confidence: 0.891635 -->
<p begin="00:00:02.620" end="00:00:05.080">but you don’t like meetings before 10 AM.</p>

After:
but you don’t like meetings before 10 AM.

From that point on, all that is left to do is review the additions to the model and click Train or use the train language model API to update the model.
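To make the clean-up concrete, here is a simplified approximation of how cue metadata gets stripped from a VTT or SRT file, leaving only the spoken text. This is my own illustrative version, not Video Indexer’s actual implementation.

```python
import re

def vtt_to_text(caption_file):
    """Strip WEBVTT headers, NOTE lines, SRT cue numbers, and cue timings,
    keeping only the spoken text lines."""
    kept = []
    for line in caption_file.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT":
            continue
        if line.startswith("NOTE"):          # VTT comment/confidence lines
            continue
        if re.match(r"^\d+$", line):         # SRT-style cue numbers
            continue
        if "-->" in line:                    # cue timing lines
            continue
        kept.append(line)
    return "\n".join(kept)

sample = """WEBVTT

NOTE Confidence: 0.891635
00:00:02.620 --> 00:00:05.080
but you don't like meetings before 10 AM."""

print(vtt_to_text(sample))  # -> but you don't like meetings before 10 AM.
```

A TTML file would additionally need its XML tags removed, but the idea is the same: only plain text reaches the training material.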


Next Steps


The new additions to the custom language model training flow make it easy for you and your organization to get more accurate transcription results. Now it is up to you to add data to your custom language models, using any of the methods we have just discussed, to get more accurate results for your specific content the next time you index your videos.


Have questions or feedback? We would love to hear from you! Use our UserVoice page to help us prioritize features, or use Video Indexer’s Stack Overflow page for any questions you have about Video Indexer.




Getting started with Video Indexer v2 API


This article is contributed. See the original author and article here.



Itai Norman, Program Manager, AEDPLS, Video Indexer.


 


ItaiNorman_3-1616320970596.png


Video Indexer is an Azure service that enables you to easily extract business insights from your media files using around 30 AI models. Using this service, you can enrich your videos with metadata, transcription, and translation into more than 50 languages, and search videos for faces, people, topics, spoken words, and much more.


Video Indexer can be leveraged in three ways: via a REST API, via a full-fledged portal experience, and via embeddable, customizable widgets that enable customers to bring the portal experience into their own applications. In this blog we will focus on how to get started with the Video Indexer REST API easily, using our brand-new developer portal! Via this portal, you can explore and try out all the different APIs, find code samples, and easily navigate between all of Video Indexer’s community and documentation resources.


In this blog, I will walk you through a few basic steps:



First things first – get a Video Indexer account
Before starting to use Video Indexer’s REST API, you need a Video Indexer account. You can start with a free trial account; just sign in with an AAD account, a personal Microsoft account, or a Google account and you are good to go!
Alternatively, you can connect your own Azure subscription. If someone invited you to their Video Indexer account, you can use that account as well.


ItaiNorman_4-1616320970598.png


 


 


 


Register to Video Indexer’s API


Now we are ready to start using the Video Indexer developer portal. You will be welcomed by the developer portal landing page; just press the ‘Sign in’ button in the top-right corner or the ‘Get Started’ button to authenticate. When the authentication process is finished you will be connected to the developer portal as a signed-in user, and you will see the following tabs in your top-right menu: ‘Home’, ’APIs’, ’Developer Support’, ‘Profile’, ’Sign out’.


 


Getting familiar with the developer portal


Before moving to an example of using Video Indexer’s API, let’s briefly explore what’s available for you in the portal.


The first tab is the APIs tab. It lists all the API calls in VI’s environment; choosing an API call shows a sample of what the request looks like, a documentation table, and an option to try the API call through the portal.


On the ‘Try it’ blade you can enter the relevant parameters and get a glance at what the API call and response will look like.


TryitBlog.png


In the Home tab, you can find links to our documentation around main functionalities such as model customization and how to embed widgets, a three-step guide on how to use Video Indexer’s API, and a short code sample.


 



The Developer Support page includes everything you need to learn about Video Indexer and get support: the GitHub repository, the Stack Overflow page, a ‘CodePen’ link with widget code samples, and relevant blog posts from Video Indexer’s tech community. Also available on the Developer Support page are Video Indexer’s FAQ page, a ‘UserVoice’ where you can suggest Video Indexer capabilities and features that may be helpful for your business needs, and a general documentation link.


 


DevSupportBlog.PNG


 


 


 


Using the API


 


This is really what we are here for, right? So, let’s see how to start using the VI API…


Getting the account ID and access token


 


In this section, I’ll walk you through the basic flow that lets you get media insights from Video Indexer with only three calls: getting an account ID and access token, uploading a file, and retrieving the insights JSON. I’ll use the developer portal to do so.


 


 


 


First, open the developer portal; under the ‘APIs’ tab, go to ‘Get Accounts Authorization’.


ItaiNorman_8-1616320970614.png


 


Enter the query parameters:



  • Location – ‘trial’ (or for non-trial account enter the region for your account)

  • Generate Access Token: true.


At the bottom of the ‘Try it’ blade hit ‘Send’.


 


ItaiNorman_9-1616320970616.png


 


You should receive a 200 OK response with the appropriate data.


ItaiNorman_10-1616320970621.png


 


 


 


 


Open Notepad or a similar app to keep the different values of the response; you will need them in a minute. From the response, save the ‘id’ (this is the account ID) and the access token.
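For reference, the same ‘Get Accounts Authorization’ call can be scripted. The sketch below only builds the request; the `/Auth` route, the `generateAccessToken` parameter, and the `Ocp-Apim-Subscription-Key` header (copied from your developer portal profile) are my understanding of the v2 API and should be verified in the portal.

```python
def get_accounts_request(location="trial", generate_access_token=True):
    """Build URL, query parameters, and headers for the (assumed)
    'Get Accounts Authorization' call of the Video Indexer v2 API."""
    url = f"https://api.videoindexer.ai/Auth/{location}/Accounts"
    params = {"generateAccessToken": str(generate_access_token).lower()}
    headers = {"Ocp-Apim-Subscription-Key": "<your-subscription-key>"}
    return url, params, headers

url, params, headers = get_accounts_request()
# A GET to this URL returns a list of accounts; each has an "id"
# (the account ID) and, when requested, an "accessToken".
print(url)
```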


 


Upload video


ItaiNorman_11-1616320970622.png


 


 


Now we have everything we need to upload the first video. Go to ‘Upload video’ and enter the query parameters:



  • Location: ‘trial’ (for a non-trial account, enter the region, e.g. “eastus”).

  • AccountId: the GUID you received in the ‘Get Accounts Authorization’ response.

  • Name: give the video a name.

  • VideoUrl: the full path of the video you want to upload. Note that Video Indexer supports a wide range of video and audio file formats, but this must be a direct URL to the media file (not, for example, a link to an HTML page).

  • Access Token: the access token we got in the previous step.


Then hit ‘Send’.


From the ‘Upload video’ response, fetch the ‘id’ – this is the Video Indexer ID for the file you just uploaded. We will need it in the next step.
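The same upload step, sketched as code. Path and parameter names mirror what the ‘Try it’ blade shows; treat them as assumptions and confirm in the developer portal.

```python
def upload_video_request(location, account_id, access_token, name, video_url):
    """Build URL and query parameters for the (assumed) 'Upload video'
    POST of the Video Indexer v2 API."""
    url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"
    params = {
        "accessToken": access_token,
        "name": name,
        "videoUrl": video_url,  # must be a direct URL to the media file
    }
    return url, params

url, params = upload_video_request(
    "trial", "<account-id>", "<access-token>",
    "my-first-video", "https://example.com/videos/demo.mp4")
# POSTing to this URL returns JSON whose "id" field identifies the new video.
print(url)
```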


 


Getting Insights


Once indexing completes (which might take a few minutes), we can use the third call to see what Video Indexer was able to extract. Go to ‘Get Video Index’.


 


ItaiNorman_12-1616320970623.png


 


 


Enter the query parameters:



  • Location: ‘trial’ (for a non-trial account, enter the region, e.g. “eastus”).

  • AccountId: the GUID you received in the ‘Get Accounts Authorization’ response.

  • VideoId: the ‘id’ we received from the last API call.

  • Access Token: the access token we got in the first step.


Then hit ‘Send’.


And voilà – all the insights are returned in JSON format.


 


ItaiNorman_13-1616320970631.png


 


 


Congratulations! You just got your first set of insights in a readable JSON format.
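To give a feel for that JSON, here is a sketch that pulls the transcript out of a hand-made, heavily simplified index response. The real payload has this general shape (a `state`, a `videos` array with per-video `insights`) but many more fields; the sample data here is illustrative, not an actual API response.

```python
# Simplified stand-in for a 'Get Video Index' response.
sample_index = {
    "state": "Processed",
    "videos": [{
        "insights": {
            "transcript": [
                {"text": "Welcome to the demo.",
                 "instances": [{"start": "0:00:01.2", "end": "0:00:03.5"}]},
                {"text": "Let's get started.",
                 "instances": [{"start": "0:00:03.5", "end": "0:00:05.0"}]},
            ]
        }
    }]
}

def transcript_text(index):
    """Return the transcript lines from an index JSON, or an empty list
    while indexing is still in progress."""
    if index.get("state") != "Processed":
        return []
    lines = []
    for video in index.get("videos", []):
        for entry in video.get("insights", {}).get("transcript", []):
            lines.append(entry["text"])
    return lines

print(transcript_text(sample_index))
```

Checking `state` first is the usual pattern when polling: keep calling ‘Get Video Index’ until it reports “Processed”, then read the insights.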


So, what’s next?
Use these relevant articles to learn more on how to use Video Indexer:



Use these links to read more technical articles around Video Indexer:



 


 


 


 


 


 

Combine the Power of Video Indexer and Computer Vision


This article is contributed. See the original author and article here.

* This blog was first published in December 2019 by Anika Zaman on the Azure blog


  


We are pleased to introduce the ability to export high-resolution keyframes from Azure Media Services’ Video Indexer. Whereas keyframes were previously exported at a reduced resolution compared to the source video, high-resolution keyframe extraction gives you original-quality images and lets you use the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video. This unlocks a wealth of pre-trained and custom model capabilities. You can use the keyframes extracted from Video Indexer, for example, to identify logos for monetization and brand-safety needs, to add scene descriptions for accessibility needs, or to accurately identify very specific objects relevant to your organization, like a type of car or a place.


Let’s look at some of the use cases we can enable with this new introduction.


 


Using keyframes to get image description automatically


You can automate the process of “captioning” different visual shots of your video through the image description model within Computer Vision, in order to make the content more accessible to people with visual impairments. This model provides multiple description suggestions along with confidence values for an image. You can take the descriptions of each high-resolution keyframe and stitch them together to create an audio description track for your video.
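The stitching step can be sketched as follows: given (timestamp, caption) pairs, where each caption stands in for the image description returned by Computer Vision for one keyframe, emit a WebVTT description track. The three-second cue length is an arbitrary illustrative choice.

```python
def seconds_to_vtt(ts):
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    h, rem = divmod(int(ts), 3600)
    m, s = divmod(rem, 60)
    ms = int(round((ts - int(ts)) * 1000))
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_description_track(captions, cue_length=3.0):
    """captions: list of (start_seconds, caption_text) per keyframe.
    Returns a WebVTT description track as a string."""
    cues = ["WEBVTT", ""]
    for start, text in captions:
        cues.append(f"{seconds_to_vtt(start)} --> {seconds_to_vtt(start + cue_length)}")
        cues.append(text)
        cues.append("")
    return "\n".join(cues)

track = build_description_track([(2.5, "A person standing at a whiteboard.")])
print(track)
```

Feeding the resulting track to a text-to-speech service would produce the audio description itself.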


1.jpg


Pic 1: An example of using the image description model to create a description of a high resolution key frame.


 


Using Keyframes to get logo detection


While Video Indexer detects brands in speech and visual text, it does not yet support brand detection from logos. Instead, you can run your keyframes through Computer Vision’s logo-based brand detection model to detect instances of logos in your content.


This can also help you with brand safety as you now know and can control the brands showing up in your content. For example, you might not want to showcase the logo of a company directly competing with yours. Also, you can now monetize on the brands showing up in your content through sponsorship agreements or contextual ads.


Furthermore, you can cross-reference the results of this model for your keyframes with the timestamps of those keyframes to determine exactly when a logo is shown in your video and for how long. For example, if you have a sponsorship agreement with a content creator to show your logo for a certain period of time in their video, this can help determine whether the terms of the agreement have been upheld.
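The cross-referencing idea can be sketched with a small helper: given the sorted timestamps of keyframes in which the logo was detected, merge nearby hits into continuous on-screen intervals. The input data shape and the five-second merge gap are illustrative choices, not part of any API.

```python
def logo_intervals(hits, gap=5.0):
    """hits: sorted keyframe timestamps (seconds) where the logo was detected.
    Merge hits that are at most `gap` seconds apart into (start, end) intervals."""
    intervals = []
    for t in hits:
        if intervals and t - intervals[-1][1] <= gap:
            intervals[-1][1] = t          # extend the current interval
        else:
            intervals.append([t, t])      # start a new interval
    return [tuple(i) for i in intervals]

detections = [10.0, 12.0, 14.0, 40.0, 42.0]
print(logo_intervals(detections))  # -> [(10.0, 14.0), (40.0, 42.0)]
```

Summing the interval lengths then gives an estimate of total logo screen time to compare against the sponsorship agreement.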


Computer Vision’s logo detection model can detect and recognize thousands of different brands out of the box. However, if you are working with logos that are specific to your use case or otherwise might not be a part of the out of the box logos database, you can also use Custom Vision to build a custom object detector and essentially train your own database of logos by uploading and correctly labeling instances of the logos relevant to you.


2.png


Pic 2: An example of logo detection using Computer Vision’s logo-based brand detection


 


Using keyframes with other Computer Vision and Custom Vision offerings


The Computer Vision APIs provide different insights in addition to image description and logo detection, such as object detection, image categorization, and more. The possibilities are endless when you use high-resolution keyframes in conjunction with these offerings.


For example, the object detection model in Computer Vision gives bounding boxes for common out of the box objects that are already detected as part of Video Indexer today. You can use these bounding boxes to blur out certain objects that don’t meet your standards.


3.jpg


Pic 3: An example of object detection model in Computer Vision applied on a high resolution key frame


 


High-resolution keyframes in conjunction with Custom Vision can be leveraged to achieve many different custom use cases. For example, you can train a model to determine what type of car (or even what breed of cat) is showing in a shot. Maybe you want to identify the location or the set where a scene was filmed for editing purposes. If you have objects of interest that may be unique to your use case, use Custom Vision to build a custom classifier to tag visuals or a custom object detector to tag and provide bounding boxes for visual objects.


Try it for yourself


These are just a few of the new opportunities enabled by the availability of high-resolution keyframes in Video Indexer. Now it is up to you to get additional insights from your videos by taking the keyframes from Video Indexer and running additional image processing with any of the Vision models we have just discussed. To start: first, upload your video to Video Indexer and take the high-resolution keyframes after the indexing job is complete; second, create an account and get started with the Computer Vision API and Custom Vision.


Have questions or feedback? We would love to hear from you. Use our UserVoice page to help us prioritize features, leave a comment below, or email VISupport@Microsoft.com with any questions.