Threat Actors Targeting Cybersecurity Researchers

This article is contributed. See the original author and article here.

Google and Microsoft recently published reports on advanced persistent threat (APT) actors targeting cybersecurity researchers. The APT actors are using fake social media profiles and legitimate-looking websites to lure security researchers to malicious websites in order to steal information, including exploits and zero-day vulnerabilities. APT groups often use elaborate social engineering and spear phishing schemes to trick victims into running malicious code via malicious links and websites.

CISA recommends that cybersecurity practitioners guard against this specific APT activity and review the following reports for more information:

Additionally, CISA strongly encourages cybersecurity practitioners to use sandbox environments that are isolated from trusted systems or networks when examining untrusted code or websites.

Global AI Student Conference – Online, April 24th



shwars_0-1618404920105.png


Global AI Student Conference is an online streaming event organized by the Global AI Community and Microsoft Learn Student Ambassadors. The conference is being held for the second time; last time it was attended by more than 2,500 people from all around the globe. You can check out sessions from the previous conference to get inspired for the upcoming event.


The main audience for this conference is students: both those who are just taking their first steps in AI and more experienced ones. In fact, we call it “a conference organized for students, by students” because it is largely organized by the Microsoft Learn Student Ambassadors community.


All conference sessions are 30 minutes each, to accommodate comfortable online viewing. They can be grouped into 3 categories:



  • 9 introductory sessions on AI, ranging from different ML algorithms to Low Code/No Code ways to train a neural network model using visual tools. This section is coordinated by Kunal Kushwaha, founder of the “Code for Cause” YouTube channel, where you can find a lot of introductory materials on AI and ML.

  • 5 research sessions, in which students will describe their own projects in the area of AI and ML. You can check out the complete schedule.

  • 2 roundtables:

    • In the “How students can start with research, and why it is important” session, we will discuss the best ways for students to start their research careers. We will hear stories from students doing research internships at large IT companies, as well as from university professors.

    • The session “Grow your skills and empower others as a Microsoft Learn Student Ambassador” will focus on the Student Ambassador program. You will hear from the global program director, Pablo Veramendi, as well as from student ambassadors themselves.




We believe this conference is a good way for students both to take their first steps into the world of AI and to get inspired by what other students are doing. We therefore encourage our readers to share this news with students, and we welcome them to join the conference as well. Registration is recommended, but you can also join the live stream on the conference site on Saturday, April 24th!

FAQs in a Document Card



I recently needed a web part for a frequently asked questions list that was a bit more interactive than what I could find on the internet. So I decided to build one, and once it was finished I thought it was a great resource to share.


 


Here’s what the web part looks like, and an idea of how it functions:


updateFAQgif.gif


 


TL;DR: For a tutorial on how to build the whole thing from scratch, check out this video: https://www.youtube.com/watch?v=oIr-rgGvUUk


 


A main driver for creating this web part was wanting something that didn’t look like SharePoint or an intranet. I spent some time looking for examples and inspiration from code samples, the Look Book, and intranet examples but I felt like I kept landing on the same accordion look and feel. Don’t get me wrong, I appreciate the accordion, I’ve even added it into my web part when you’re viewing all questions under a specific category – but was this really my only option? So I started looking at external sites and finally found something I thought was cool enough to build and the react-docCard-faq idea was born!


 


Now that I had a general idea of my data (questions and answers) and had picked a layout, I needed to break the questions into subgroups. That’s where categories come in: questions are grouped into categories. But I wasn’t finished there: what if you don’t want to show every category? Open enrollment questions may only need to be displayed around that time, or you might be launching a new product and only want to highlight it for a short period! So instead of always displaying every category in the FAQ list, you can select which categories to show from the property pane when you add the web part to the page.
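The grouping step can be sketched as follows (Python for illustration rather than the web part’s actual SPFx/TypeScript code; the field names here are hypothetical):

```python
from collections import defaultdict

def group_by_category(items):
    """Group FAQ list items into {category: [items]} buckets."""
    groups = defaultdict(list)
    for item in items:
        groups[item["category"]].append(item)
    return dict(groups)

faqs = [
    {"question": "How do I enroll?", "category": "Open Enrollment"},
    {"question": "When does enrollment close?", "category": "Open Enrollment"},
    {"question": "Where is the handbook?", "category": "HR"},
]
grouped = group_by_category(faqs)
# Each selected category can then be rendered as its own card section.
```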


Use cases: 



  • A site can host a single FAQ list and only display certain categories on specific pages.

  • Season changes, holidays, enrollment time – all reasons you might want to change which categories (or how many) you’re showing.


 

Once I realized you might want to change which categories are showing rather than always display the same ones, the featured toggle came in. Adding an additional Boolean column lets us not only select which questions show in the main display, but also makes it incredibly easy for anyone managing the list (not the site) to add or change which questions appear in the document card.
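As a sketch of that selection logic (again in Python rather than the web part’s SPFx/TypeScript; field names are hypothetical):

```python
def visible_questions(items, selected_categories, featured_only=True):
    """Return items whose category was selected in the property pane,
    optionally restricted to those with the Featured toggle set."""
    return [
        item for item in items
        if item["category"] in selected_categories
        and (not featured_only or item.get("featured", False))
    ]

faqs = [
    {"question": "How do I enroll?", "category": "Enrollment", "featured": True},
    {"question": "Old policy?", "category": "Enrollment", "featured": False},
    {"question": "Holiday hours?", "category": "Holidays", "featured": True},
]
# Only featured Enrollment questions appear in the document card.
shown = visible_questions(faqs, {"Enrollment"})
```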


 


Why in the world did I do a document card for an FAQ? A few reasons for the specific design that includes so much white space.


First, I wanted something that doesn’t look like a standard intranet/internal site. The layout gives a modern, professional feel that could be on any external facing site. 


 


Second, not every department/group/team knows what to share on their site or page, but everyone needs FAQs! Because you control how many categories and questions are on the page, it can take up more or less space depending on how you lay it out.


Third, it’s a great way to drive adoption of your questions. Tired of answering the same 3-6 questions over and over? Direct questions to your SharePoint site where all the answers are! Added bonus: no one has to wait for you to get back from vacation for a one-sentence answer.


 

Lastly, it drives whoever’s maintaining the list to keep it up to date. It’s not a set of boring questions and answers left to go stale. With category options, a featured toggle, and the ability to write beautiful answers using rich text, you’re empowered to take your FAQ to the next level!


 


I hope I’ve inspired you to think outside of the box and get creative!


Code: https://github.com/pnp/sp-dev-fx-webparts/tree/master/samples/react-doccard-faq



Extract business insights with Live Video Analytics and Intel OpenVINO using Intel NUC devices.



In this technical blog post we’ll talk about the powerful combination of Azure Live Video Analytics (LVA) 2.0 and the Intel OpenVINO DL Streamer – Edge AI Extension. In our sample setup we use an Intel NUC as the edge device. You can read through this post to understand the setup, but repeating the steps requires some technical skill, so we rely on existing tutorials and samples as much as possible. We will show the seamless integration of this combination: LVA creates and manages the media pipeline on the edge device and extracts metadata through the Intel OpenVINO DL Streamer – Edge AI Extension module, all managed through a single deployment manifest via Azure IoT Edge. For this post we use a 10th generation Intel NUC, but the setup can run on any Intel device. We will look at the specifications and performance of this little low-power device and how well it performs as an edge device for LVA and AI inferencing with Intel DL Streamer. The device will receive a simulated camera stream, and we will use gRPC to feed images from the camera feed to the inference service at the actual framerate (30 fps).


 


The Intel OpenVINO Model Server (OVMS) and Intel Video Analytics Serving (VA Serving) can utilize the iGPU of the Intel NUC device. The Intel DL Streamer – Edge AI Extension we are using here is based on Intel’s VA Serving with native support for Live Video Analytics. We will show you how easy it is to enable iGPU for the AI inferencing thanks to this native support from Intel.



Live Video Analytics (LVA) is a platform for building AI-based video solutions and applications. You can generate real-time business insights from video streams, processing data near the source and applying the AI of your choice. Record videos of interest on the edge or in the cloud and combine them with other data to power your business decisions.


lva-overview.png


LVA was designed to be a flexible platform where you can plug in AI services of your choice. These can come from Microsoft, the open source community, or your own work. To further extend this flexibility, we designed the service to allow integration with existing AI models and frameworks. One of these integrations is the OpenVINO DL Streamer Edge AI Extension module.


The Intel OpenVINO™ DL Streamer – Edge AI Extension module is based on Intel’s Video Analytics Serving (VA Serving), which serves video analytics pipelines built with OpenVINO™ DL Streamer. Developers can send decoded video frames to the AI extension module, which performs detection, classification, or tracking and returns the results. The AI extension module exposes gRPC APIs.


 


vas-overview.png


 


Setting up the environment and pipeline


We will walk through the steps to set up LVA 2.0 with the Intel DL Streamer Edge AI Extension module on an Intel NUC device, using the three different pipelines offered by the module: Object Detection, Classification, and Tracking.


lva-architecture.png


Once you’ve deployed the Intel OpenVINO DL Streamer Edge AI Extension module, you can switch between pipelines by setting the environment variables PIPELINE_NAME and PIPELINE_VERSION in the deployment manifest. The supported pipelines are:

PIPELINE_NAME              PIPELINE_VERSION
object_detection           person_vehicle_bike_detection
object_classification      vehicle_attributes_recognition
object_tracking            person_vehicle_bike_tracking
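For reference, the supported name/version pairs can be captured in a small lookup that produces the Env entries for the manifest; a Python sketch (the helper is illustrative, not part of the module):

```python
# Supported PIPELINE_NAME -> PIPELINE_VERSION pairs.
PIPELINES = {
    "object_detection": "person_vehicle_bike_detection",
    "object_classification": "vehicle_attributes_recognition",
    "object_tracking": "person_vehicle_bike_tracking",
}

def env_for(pipeline_name):
    """Return the Env entries to set in the deployment manifest."""
    if pipeline_name not in PIPELINES:
        raise ValueError(f"unsupported pipeline: {pipeline_name}")
    return [
        f"PIPELINE_NAME={pipeline_name}",
        f"PIPELINE_VERSION={PIPELINES[pipeline_name]}",
    ]
```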

 


The hardware used for the demo


For this test I purchased an Intel NUC Gen10 for around $1,200 USD. The Intel NUC is a small form factor device with a good performance-to-power ratio. It puts full-size PC power in the palm of your hand, which makes it convenient as a powerful edge device for LVA. It comes in different configurations, so you can trade off performance against cost, and it is available ready to run, as a Performance Kit, or as just the NUC board for custom applications. I went for the most powerful i7 Performance Kit and ordered the maximum allowed memory separately. The full specs are:



  • Intel NUC10i7FNH – 6 cores at 4.7GHz

  • 200GB M.2 SSD

  • 64GB DDR4 memory

  • Intel® UHD Graphics for 10th Gen Intel® Processors


nuc-01.png


nuc-02.png


 


Let’s set everything up


These steps expect that you have already set up your LVA environment by using one of our quickstart tutorials. This includes:



  • Visual Studio Code with all extensions mentioned in the quickstart tutorials

  • Azure Account

  • Azure IoT Edge Hub

  • Azure Media Services Account


In addition to the prerequisites for the LVA tutorials, we also need an Intel device where we will run LVA and extend it with the Intel OpenVINO DL Streamer Edge AI Extension Module.



  1. Connect your Intel device and install Ubuntu. In my case I will be using Ubuntu 20.10

  2. Once we have the OS installed follow these instructions to set up IoT Edge Runtime

  3. (Optional) Install Intel GPU Tools: sudo apt-get install intel-gpu-tools

  4. Now install LVA. Assuming you already have an LVA environment set up, you can start with this step.


When you’re done with these steps your Intel device should be visible in your IoT Extension in VS Code.


vsc-iot-extension.png


Now I’m going to follow this tutorial to set up the Intel OpenVINO DL Streamer Edge AI Extension module: https://aka.ms/lva-intel-openvino-dl-streamer-tutorial


Once you’ve completed these steps you should have:



  1. Intel edge device with IoT Edge Runtime connected to IoT Hub

  2. Intel edge device with LVA deployed

  3. Intel OpenVINO DL Streamer Edge AI Extension module deployed


 


The use cases


Now that we have our setup up and running, let’s go through some of the use cases where it can help you. We’ll use the sample videos available to us and observe the results we get from the module.
Since the Intel NUC has a very small form factor, it can easily be deployed in close proximity to a video source such as an IP camera. It is also very quiet and does not generate much heat: you can mount it above a ceiling, behind a door, on top of or inside a bookshelf, or underneath a desk, to name a few examples. I will be using sample videos such as a recording of a parking lot and a cafeteria. You can imagine having this NUC located at these venues to analyze the camera feed.


 


Highway Vehicle Classification and Event Based Recording


Let’s imagine a use case where I’m interested in a specific vehicle type and color using a particular stretch of highway, and I want to know about, and see, the video frames where these vehicles appear. We can use LVA together with the Intel DL Streamer – Edge AI Extension module to analyze the highway and trigger on a specific combination of vehicle type, color, and confidence level: for instance, a white van with a confidence above 0.8. Within LVA we can deploy a custom module like the objectsEventFilter module, which sends a trigger to the Signal Gate node when all three directives are met. This creates an Azure Media Services asset which we can play back from the cloud. The diagram looks like this:


full-topology.png


When we run the pipeline, the RTSP source is split: the Signal Gate node holds a buffer of the video, and the stream is also sent to the gRPC Extension node. The gRPC Extension creates images from the video frames and feeds them into the Intel DL Streamer – Edge AI Extension module. When using the classification pipeline, the module returns inference results containing type attributes. These are forwarded as IoT messages and feed into the objectsEventFilter module, where we can filter on specific attributes and send an IoT message that triggers the Signal Gate node, producing an Azure Media Services asset as a result.
In the inference results you will see a message like this:


 

{
  "type": "entity",
  "entity": {
    "tag": {
      "value": "vehicle",
      "confidence": 0.8907926
    },
    "attributes": [
      {
        "name": "color",
        "value": "white",
        "confidence": 0.8907926
      },
      {
        "name": "type",
        "value": "van",
        "confidence": 0.8907926
      }
    ],
    "box": {
      "l": 0.63165444,
      "t": 0.80648696,
      "w": 0.1736759,
      "h": 0.22395049
    }
  }
}

 


This meets our objectsEventFilter module thresholds, which produces the following IoT message:


 

[IoTHubMonitor] [2:05:28 PM] Message received from [nuclva20/objectsEventFilter]:
{
  "confidence": 0.8907926,
  "color": "white",
  "type": "van"
}
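The check the objectsEventFilter performs boils down to comparing the entity tag’s confidence and its attributes against the configured directives. A Python sketch of that logic (illustrative only; the actual module code is available from the tutorial linked below):

```python
def matches_filter(message, want_type="van", want_color="white", min_confidence=0.8):
    """Check an inference entity message against the trigger conditions:
    object type, color, and a minimum confidence level."""
    entity = message.get("entity", {})
    if entity.get("tag", {}).get("confidence", 0.0) < min_confidence:
        return False
    attrs = {a["name"]: a["value"] for a in entity.get("attributes", [])}
    return attrs.get("type") == want_type and attrs.get("color") == want_color

msg = {
    "type": "entity",
    "entity": {
        "tag": {"value": "vehicle", "confidence": 0.8907926},
        "attributes": [
            {"name": "color", "value": "white", "confidence": 0.8907926},
            {"name": "type", "value": "van", "confidence": 0.8907926},
        ],
    },
}
# A white van above 0.8 confidence triggers the Signal Gate.
triggered = matches_filter(msg)
```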

 


 This will trigger the Signal Gate to open and forward the video feed to the Asset Sink node. 


 

[IoTHubMonitor] [2:05:29 PM] Message received from [nuclva20/lvaEdge]:
{
  "outputType": "assetName",
  "outputLocation": "sampleAssetFromEVR-LVAEdge-20210325T130528Z"
}

 


The Asset Sink Node will store a recording on Azure Media Services for cloud playback.


highway-playback.png


 


Deploying objectsEventFilter module


You can follow this tutorial to deploy a custom module for Event Based Recording. Only this time we will use the objectsEventFilter module instead of the objectCounter. You can copy the module code from here. The steps are the same to build and push the image to your container registry as with the objectCounter tutorial.


I will be using video samples that I upload to my device at the following location: /home/lvaadmin/samples/input/
They are then available through the RTSP simulator module by calling rtsp://rtspsim:554/media/(unknown)
Next we deploy a manifest to the device with the environment settings that specify the type of model. In this case I want to detect and classify vehicles that show up in the image.


 

"Env":[
                    "PIPELINE_NAME=object_classification",
                    "PIPELINE_VERSION=vehicle_attributes_recognition",

 


The next step is to change the “operations.json” file of the c2d-console-app to reference the RTSP file. For instance, to use “co-final.mkv” I set the operations.json file to:


 

{
  "name": "rtspUrl",
  "value": "rtsp://rtspsim:554/media/co-final.mkv"
}

 


Now that I have deployed the module to my device, I can invoke the media graph by executing the c2d-console-app (i.e., press F5 in VS Code).
Note: Remember to listen for event messages by clicking “Start Monitoring Built-in Event Endpoint” in the VS Code IoT Hub extension.


In the output window of VS Code we will see messages flowing in a JSON structure. For co-final.mkv, using object tracking for vehicles, persons, and bikes, the output contains:

  • Timestamp: the timestamp of the media; we maintain it end to end so you can always relate messages across the media timespan.
  • Entity tag: which type of object was detected (vehicle, person, or bike).
  • Entity attributes: the color of the entity (white) and the type of the entity (van).
  • Box: the size and location of the bounding box in the picture where the entity was detected.
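The box values are normalized to the frame dimensions (0-1). To overlay them on the video you can scale by the frame size; a small Python sketch, assuming a hypothetical 1920×1080 frame:

```python
def box_to_pixels(box, frame_w, frame_h):
    """Convert a normalized (l, t, w, h) box to pixel coordinates."""
    return {
        "left": round(box["l"] * frame_w),
        "top": round(box["t"] * frame_h),
        "width": round(box["w"] * frame_w),
        "height": round(box["h"] * frame_h),
    }

# The box from the inference result shown earlier.
box = {"l": 0.63165444, "t": 0.80648696, "w": 0.1736759, "h": 0.22395049}
pixels = box_to_pixels(box, 1920, 1080)
```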


Let’s have a look at the CPU load of the device. When we SSH into the device we can type the command “sudo htop”. This will show details of the device load like CPU/Memory.


cpu.png


We see a load of ~32% for this model on the Intel NUC while extracting and analyzing at 30 fps, so we can safely say we can run multiple camera feeds on this small device, as we have plenty of headroom. We could also trade off fps to allow even higher camera feed density per device.
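As a rough sanity check on that headroom claim (assuming, simplistically, that load scales linearly with the number of feeds, and budgeting a hypothetical 90% of CPU):

```python
def estimated_max_feeds(load_per_feed_pct, cpu_budget_pct=90):
    """Rough estimate of how many camera feeds fit in a CPU budget,
    assuming load scales linearly with the number of feeds."""
    return int(cpu_budget_pct // load_per_feed_pct)

# One 30 fps feed costs roughly 32% CPU on this NUC.
feeds_at_30fps = estimated_max_feeds(32)
# Halving the frame rate roughly halves the per-feed load.
feeds_at_15fps = estimated_max_feeds(16)
```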


 


iGPU offload support



  1. Right-click on this template and “generate a deployment manifest”. The deployment manifest is now available in the “edge/config/” folder

  2. Right-click the deployment manifest, choose deploy to single device, and select your Intel device

  3. Now execute the same c2d-console-app again (press F5 in VS Code). After about 30 seconds you will see the same data again in your output window.


gpu.pnggpu2.png


Here you can see the iGPU showing a load of around ~44% to run the AI tracking model. At the same time, we see a 50% decrease in CPU usage compared to the first run, which used only the CPU. We still observe some CPU activity because the LVA media graph still uses the CPU.


 


To summarize


In this blog post and during the tutorial we have walked through the steps to:



  1. Deploy the IoT Edge Runtime on an Intel NUC.

  2. Connect the device to our IoT Hub so we can control and manage it using IoT Hub together with the VS Code IoT Hub extension.

  3. Use the LVA sample to deploy LVA onto the Intel device.

  4. Take the Intel OpenVINO – Edge AI Extension module and deploy it onto the Intel device using IoT Hub.


This enables us to use the combination of LVA and Intel OpenVINO DL Streamer Edge AI Extension module to extract metadata from the video feed using the Intel pre-trained models. The Intel OpenVINO DL Streamer Edge AI Extension module allows us to change the pipeline by simply changing variables in the deployment manifest. It also enables us to make full use of the iGPU capabilities of the device to increase throughput, inference density (multiple camera feeds) and use more sophisticated models. With this setup you can bring powerful AI inferencing close to the camera source. The Intel NUC packs enough power to run the model for multiple camera feeds with low power consumption, low noise and in a small form factor. The inference data can be used for your business logic.


 


Call to action


Stop typing PowerShell credentials in demos using PowerShell SecretManagement



We all sometimes create presentations with PowerShell demos, and we often need credentials to log in to systems when delivering them. This can lead to us not using very strong passwords, because we don’t want to type them during a presentation. You see the problem? So, here is how you can use the PowerShell SecretManagement and SecretStore modules to store your demo credentials on your machine.


 


Doing this is pretty simple:


 


Install the SecretManagement and SecretStore PowerShell modules.


 


 

Install-Module Microsoft.PowerShell.SecretManagement, Microsoft.PowerShell.SecretStore

 


 


Register a SecretStore to store your passwords and credentials. In this example we are using a local store. Later in this blog post, we will also look at how you can use Azure Key Vault to store your secrets, which is handy if you are working on multiple machines.


 


 

Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault

 


 


Now we can store our credentials in the SecretStore. In this example, I am going to store the password and add some non-sensitive data as metadata to provide additional description.


 


 

Set-Secret -name DemoAdmin01 -Secret "demoAdmin01PassWord" -Metadata @{demo = "My Hyper-V demo for Windows Server and Linux Remoting"}

 


 


 


Store Secret in PowerShell SecretStore


Now you can start using this secret in the way you need it. In my case, it is the password of one of my admin users.


 


 

$DemoAmdin01secret = Get-Secret -Vault SecretStore -Name DemoAdmin01
$DemoAmdin01Cred = New-Object -TypeName PSCredential -ArgumentList "DemoAdmin01", $DemoAmdin01secret

 


 


I could also store these two lines in the PowerShell profile I use for demos, or in my demo startup script. In this case, the credential object is available for you to use.


 


Use SecretStore credentials


If you are using multiple machines and you want to keep your passwords in sync, you can use the Azure Key Vault extension.


 


 

Install-Module Az.KeyVault
Register-SecretVault -Module Az.KeyVault -Name AzKV -VaultParameters @{ AZKVaultName = $vaultName; SubscriptionId = $subID}

 


 


Now you can store and get secrets from the Azure Key Vault and you can simply use the -Vault AzKV parameter instead of -Vault SecretStore. 


I hope this blog provides you with a short overview of how you can leverage PowerShell SecretManagement and SecretStore, to store your passwords securely. If you want to learn more about SecretManagement check out Microsoft Docs.


 


I also highly recommend that you read @Pierre Roman’s blog post on leveraging PowerShell SecretManagement to generalize a demo environment.

Experiencing Data Access Issue in Azure portal for Log Analytics – 04/14 – Investigating

This article is contributed. See the original author and article here.

Initial Update: Wednesday, 14 April 2021 07:55 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data access issues and delayed or missed Log Search Alerts in North Europe region.
  • Work Around: None
  • Next Update: Before 04/14 10:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Deepika

SharePoint community call – April 13th, 2021



The SharePoint community monthly call is our general monthly review of the latest SharePoint news (tools, extensions, features, capabilities, content, and training), engineering priorities, and community recognition for developers, IT pros, and makers. This monthly community call happens on the second Tuesday of each month. You can download the recurrent invite from https://aka.ms/sp-call.


 


 


Call Summary:


Visit the new Microsoft 365 PnP Community hub at Microsoft Tech Communities! Preview the new Microsoft 365 Extensibility look book gallery, and attend one of a growing list of Sharing is Caring events. The Microsoft 365 Update – Community (PnP) | April 2021 is now available. SPFx v1.12.1 with Node v14 and Gulp 4 support was released to public beta today, with GA slated for the end of April. In this call, we also quickly addressed developer and non-developer entries in UserVoice.


 


A huge thank you to the record number of contributors and organizations actively participating in this PnP Community. You continue to amaze.  The host of this call was Vesa Juvonen (Microsoft) @vesajuvonen.  Q&A took place in the chat throughout the call. 


 


15th-april-together-mode.gif


 


 


Getting started with Microsoft Viva Connections Desktop: an employee-centric app in Teams with one-stop access to intranet resources, global search, contextual actions, and a company-branded experience. A SharePoint home site powered by Microsoft Teams, backed by Microsoft security, privacy, and compliance. No additional licensing. A familiar, extensible platform that will include mobile this summer. Create the Viva Connections app package in PowerShell, then upload the package to the Teams Admin Center.


 


Actions: 


 



  • Register for livestream and for a regional watch party:


  • Try public beta of SPFx v1.12.1, access through npm.

  • Complete the Developer Success Survey – https://aka.ms/developersuccess

  • Join the M365 customer success platform panel – https://aka.ms/SuccessPanel

  • Register for Sharing is Caring Events:

    • First Time Contributor Session – April 27th   (EMEA, APAC & US friendly times available)

    • Community Docs Session – April

    • PnP – SPFx Developer Workstation Setup – April 29th

    • PnP SPFx Samples – Solving SPFx version differences using Node Version Manager – April 15th

    • First Time Presenter – April 21st

    • More than Code with VSCode – April 14th & 28th

    • Maturity Model Practitioners – April 20th

    • PnP Office Hours – 1:1 session – Register



  • Download the recurrent invite for this call – https://aka.ms/sp-call.


 


You can check the latest updates in the monthly summary and at aka.ms/spdev-blog.


This call was delivered on Tuesday, April 13, 2021. The call agenda is reflected below with direct links to specific sections.  You can jump directly to a specific topic by clicking on the topic’s timestamp which will redirect your browser to that topic in the recording published on the Microsoft 365 Community YouTube Channel.


 


Call Agenda:


 



  • SharePoint community update with latest news and roadmap – 2:47

  • UserVoice status for non-dev focused SharePoint entries – 8:36

  • UserVoice status for dev focused SharePoint Framework entries – 9:45 

  • Community contributors and companies which have been involved in the past month – 12:50 

  • Topic: Getting started with Microsoft Viva Connections Desktop – Tejas Mehta (Microsoft) | @tpmehta and Prateek Dudeja (Microsoft) | @PrateekDudeja4 – 16:16


 


The full recording of this session is available from Microsoft 365 & SharePoint Community YouTube channel – http://aka.ms/m365pnp-videos.


 



  • Presentation slides used in this community call are found at OneDrive.


 


Resources: 


Additional resources on covered topics and discussions.


 



 


Additional Resources: 


 



 


Upcoming calls | Recurrent invites:


 



 


“Too many links, can’t remember” – not a problem… just one URL is enough for all Microsoft 365 community topics – http://aka.ms/m365pnp.


 


“Sharing is caring”




SharePoint Team, Microsoft – 14th of April 2021

Manual migration from classic Cloud Service to Cloud Service Extended Support with ARM template



Cloud Service Extended Support is a new service type similar to the classic Cloud Service. The biggest difference between them is that Cloud Service Extended Support is an ARM (Azure Resource Manager) based resource and can be used with ARM features such as tags, policy, RBAC, and ARM templates.


 


For migration from the classic Cloud Service to Cloud Service Extended Support, Azure officially provides a way called in-place migration. Detailed information can be found at: https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/in-place-migration-portal.


 


In this blog, we will show how to manually create a new Cloud Service Extended Support and deploy the same project into this new service. The classic Cloud Service project has the following features, and after migration all of them will be kept:



  1. Remote Desktop

  2. SSL certificate for HTTPS endpoints

  3. Using the same IP address before and after migration


The main advantages of manual migration


Before showing how to do this manual migration, let us highlight its main advantages:



  • You can choose the name of the new Cloud Service Extended Support yourself, using a user-friendly name such as CSEStest.

  • Both manual and in-place migration require modifying the project code. In the manual migration process, this modification is already included; with the in-place migration process, it may be more difficult for you to modify the code.

  • The manual migration process uses an ARM template to deploy the new resources, so you can make changes of your own, such as enabling the RDP extension, which is not enabled in the classic Cloud Service. The in-place migration does not allow this; it keeps the same configuration.
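As a small illustration of that flexibility, resources the in-place migration would configure for you can be declared explicitly in the template. A hedged fragment (not taken from the official template; the name, location, and API version are placeholders) declaring the kind of Basic SKU static public IP the later steps produce:

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "2020-11-01",
  "name": "CSEStestIP",
  "location": "eastus",
  "sku": { "name": "Basic" },
  "properties": {
    "publicIPAllocationMethod": "Static"
  }
}
```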


Before you begin


There are some additional things to do before we start the migration. Please check the following points carefully, since unexpected issues may occur if they are not met:



  1. Follow the “Before you begin” part of the document to check whether you are an administrator/co-administrator of the subscription.

  2. In the subscription page of the Azure portal, check that the resource providers Microsoft.Compute, Microsoft.Network, and Microsoft.Storage are already registered.


Example of resource provider registration


 



  3. We should have a running classic Cloud Service and its whole project code. If it uses a certificate for any purpose (for the HTTPS endpoint in this blog), that certificate in .pfx format and its password are also needed for the deployment.


With the above three conditions met, there should not be any other permission issues for this manual migration process. A container in a storage account is also required; if you do not have one yet, please follow this document to create a storage account and follow the next two screenshots to create a new container.


https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal#create-a-storage-account-1


Create container 1


 


Create container 2


 


Then, let us move on to the main process.


Reserve the IP address of the classic Cloud Service and upgrade it to be used for Cloud Service Extended Support


In this example, my classic Cloud Service is testcstocses in resource group cstocses, in East US region.



  1. Use the following PowerShell command to keep the current IP address as a classic Reserved IP named ReservedIPCSES. The location must be the same as your classic Cloud Service location.


 

New-AzureReservedIP -ReservedIPName ReservedIPCSES -ServiceName testcstocses -Location "East US"

 


Keep the IP as a classic reserved IP


 



  2. Follow the document below to upgrade the generated classic Reserved IP to a Basic SKU Public IP. (Note that the script in the official document contains a bug; use the corrected steps below.)


https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-public-ip-address-upgrade?tabs=option-upgrade-powershell%2Coption-migrate-powershell#upgrade-migrate-a-classic-reserved-ip-to-a-static-public-ip


a. Verify whether the classic Reserved IP is still associated with the classic Cloud Service and, if so, remove the association. (By design, the newly generated classic Reserved IP is still associated with the classic Cloud Service.)


 

## Variables for the command ##
$name = 'ReservedIPCSES'

## This section is only needed if the Reserved IP is not already disassociated from any Cloud Services ##
$service = 'testcstocses'
Remove-AzureReservedIPAssociation -ReservedIPName $name -ServiceName $service

$validate = Move-AzureReservedIP -ReservedIPName $name -Validate
$validate.ValidationMessages

 


PowerShell commands to verify association between classic Cloud Service and generated reserved IP


b. If the validation result in the screenshot above is Succeeded, run the following commands to complete the migration of the Reserved IP.


 

Move-AzureReservedIP -ReservedIPName $name -Prepare
Move-AzureReservedIP -ReservedIPName $name -Commit

 


Upgrade classic Reserved IP to basic tier


 


The newly generated Basic SKU Public IP will be placed in a new resource group named {publicipname}-Migrated.


Migrated basic tier reserved IP


 



  3. Set a DNS name on this Public IP. (Optional but recommended, since Cloud Service Extended Support does not provide a DNS name the way classic Cloud Service does.)


Configure DNS name on public IP


 



  4. Move the Public IP into the original resource group.


Move public IP to specific resource group 1


 


Move public IP to specific resource group 2


 



  5. (Optional) If your original classic Cloud Service uses a certificate, create a Key Vault in the same region (East US in this example) and upload the certificate in .pfx format.


Create Key Vault 1


 


Create Key Vault 2


 


Do not forget to select the “Azure Virtual Machines for deployment” checkbox on the Access policy page.


Create Key Vault 3


 


After the Key Vault is created, import the certificate.


Upload certificate into Key Vault 1


 


Upload certificate into Key Vault 2


 


Upload certificate into Key Vault result


 



  6. Follow the official documents below to modify the classic Cloud Service code so that it meets the Cloud Service Extended Support requirements.


https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-prerequisite


https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-template


In the files below, the certificate-related elements are only needed if your Cloud Service project uses a certificate; they are not necessary for every project.


The service name, role name, VM size, instance count, schema version, virtual network, subnet and reserved IP names are the important pieces of information that we will use in the following steps. Please take note of them.


 


.csdef file


<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureCloudService2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
    <WebRole name="WebRole1" vmsize="Standard_D1_V2">
        <Sites>
            <Site name="Web">
                <Bindings>
                    <Binding name="Endpoint1" endpointName="Endpoint1" />
                    <Binding name="HttpsIn" endpointName="HttpsIn" />
                </Bindings>
            </Site>
        </Sites>
        <Endpoints>
            <InputEndpoint name="Endpoint1" protocol="http" port="80" />
            <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="Certificate1" />
        </Endpoints>
        <Certificates>
            <Certificate name="Certificate1" storeLocation="LocalMachine" storeName="My" permissionLevel="limitedOrElevated"/>
        </Certificates>
    </WebRole>
</ServiceDefinition>


 


.cscfg file


<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="AzureCloudService2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="6" osVersion="*" schemaVersion="2015-04.2.6">
    <Role name="WebRole1">
        <Instances count="1" />
        <Certificates>
            <Certificate name="Certificate1" thumbprint="909011xxxxxxxxxx712303838613" thumbprintAlgorithm="sha1" />
        </Certificates>
    </Role>
    <NetworkConfiguration>
        <VirtualNetworkSite name="cstocsesvnet" />
        <AddressAssignments>
            <InstanceAddress roleName="WebRole1">
                <Subnets>
                    <Subnet name="WebRole1_subnet" />
                </Subnets>
            </InstanceAddress>
            <ReservedIPs>
                <ReservedIP name="ReservedIPCSES" />
            </ReservedIPs>
        </AddressAssignments>
    </NetworkConfiguration>
</ServiceConfiguration>


 


The thumbprint of the certificate can be found on the Certificates page of the Key Vault.
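Because several values must stay in sync between the two files, it can help to run a quick local check before packaging. The following Python sketch (file paths and names are placeholders, not part of any official tooling) verifies that the role names and certificate names in the .csdef and .cscfg agree:

```python
# Illustrative sanity check: role names and certificate names declared in the
# .csdef must match those configured in the .cscfg before packaging.
import xml.etree.ElementTree as ET

CSDEF_NS = "{http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition}"
CSCFG_NS = "{http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration}"

def check_consistency(csdef_path, cscfg_path):
    csdef = ET.parse(csdef_path).getroot()
    cscfg = ET.parse(cscfg_path).getroot()

    # WebRole / WorkerRole elements in the .csdef vs Role elements in the .cscfg
    def_roles = {r.get("name") for r in csdef if r.tag.endswith("Role")}
    cfg_roles = {r.get("name") for r in cscfg.findall(f"{CSCFG_NS}Role")}

    # Certificate names declared in either file
    def_certs = {c.get("name") for c in csdef.iter(f"{CSDEF_NS}Certificate")}
    cfg_certs = {c.get("name") for c in cscfg.iter(f"{CSCFG_NS}Certificate")}

    problems = []
    if def_roles != cfg_roles:
        problems.append(f"role mismatch: {def_roles} vs {cfg_roles}")
    if def_certs != cfg_certs:
        problems.append(f"certificate mismatch: {def_certs} vs {cfg_certs}")
    return problems  # empty list means the two files agree
```

An empty result means the two files agree on roles and certificates; any mismatch is reported before it can fail the deployment.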


Thumbprint of the certificate in Key Vault


 



  7. Package the project as you would for a classic Cloud Service project, then copy out the .cspkg and .cscfg files.


Package in Visual Studio


 


Package result for a classic Cloud Service project (the certificate is not produced by the package process)


 



  8. Upload the .cscfg and .cspkg files into a container of the storage account.


Upload .cspkg and .cscfg to Storage container


 



  9. After uploading, generate the SAS URL of these two files one by one. Click the file, switch to the Generate SAS tab, click Generate SAS token and URL, and find the needed SAS URL at the bottom of the page.


Generate SAS token of .cscfg and .cspkg


 


The generated SAS URLs should look like:


https://storageforcses.blob.core.windows.net/test/AzureCloudService2.cspkg?sp=r&st=2021-04-02T10:47:04Z&se=2021-04-02T18:47:04Z&spr=https&sv=2020-02-10&sr=b&sig=osktC5FtJpI1uX28D2UMtJaZVi8FmhW6kpIHH%2FuFTUU%3D


         


https://storageforcses.blob.core.windows.net/test/ServiceConfiguration.Cloud.cscfg?sp=r&st=2021-04-02T10:48:12Z&se=2021-04-02T18:48:12Z&spr=https&sv=2020-02-10&sr=b&sig=8BmMScBU%2Bm6hRkKtUoiRNs%2F2NHYiHay8qxJq5TM%2BkGU%3D
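A SAS URL carries its permissions and validity window in the query string, so it can be sanity-checked locally before the deployment. An illustrative Python helper (the URL in the test is a placeholder shaped like the examples above):

```python
# Illustrative check that a generated SAS URL grants read access and has not
# expired. It only inspects the query string; it does not contact Azure.
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

def sas_info(sas_url):
    qs = parse_qs(urlparse(sas_url).query)
    # "se" is the expiry timestamp, "sp" the granted permissions ("r" = read)
    expiry = datetime.strptime(qs["se"][0], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return {
        "permissions": qs["sp"][0],
        "expires": expiry,
        "expired": expiry < datetime.now(timezone.utc),
    }
```

If the deployment fails with an authorization error, checking that `expired` is still False and that `permissions` contains `r` is a quick first diagnosis.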



  10. (Optional) If you use Key Vault to store the certificate, visit the Certificates page and click through the uploaded certificate and its current version. At the bottom of the page you will find a URL in the format:


https://{keyvaultname}.vault.azure.net/secrets/{certificatename}/{id}


Find secret URL of certificate 1


 


Find secret URL of certificate 2


 


Make a note of this URL; it will be used in the next step. The following is my example: https://cstocses.vault.azure.net/secrets/csescert/e2f6ab1744374de38ae831ba8896edb9


         


Also, please make a note of the subscription ID, the name of the resource group where the Key Vault is deployed, and the Key Vault name. These will also be used in the next step.
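These three values are later combined into a single Key Vault resource ID in the ARM parameter file. A tiny illustrative helper (names below are the sample values from this blog) shows the format:

```python
# Illustrative helper that assembles the Key Vault resource ID ("sourceVault")
# from the subscription ID, resource group name and vault name noted above.
def key_vault_resource_id(subscription_id, resource_group, vault_name):
    return ("/subscriptions/{}/resourceGroups/{}"
            "/providers/Microsoft.KeyVault/vaults/{}").format(
        subscription_id, resource_group, vault_name)
```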



  11. Modify the following ARM template and parameter file, then save them as JSON files. In my test, I saved them as template.json and parameter.json.


 


Tips: The certificate-related parts are optional. If you do not use any certificate, you can remove them from both the template and the parameter file. The remaining values come from the information noted from the .csdef and .cscfg files; please make sure they are the same and correct.


 


ARM template (apart from the optional certificate parts mentioned in the tips above, the template file does not need to be modified):


{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "cloudServiceName": {
      "type": "string",
      "metadata": {
        "description": "Name of the cloud service"
      }
    },
    "location": {
      "type": "string",
      "metadata": {
        "description": "Location of the cloud service"
      }
    },
    "deploymentLabel": {
      "type": "string",
      "metadata": {
        "description": "Label of the deployment"
      }
    },
    "packageSasUri": {
      "type": "securestring",
      "metadata": {
        "description": "SAS Uri of the CSPKG file to deploy"
      }
    },
    "configurationSasUri": {
      "type": "securestring",
      "metadata": {
        "description": "SAS Uri of the service configuration (.cscfg)"
      }
    },
    "roles": {
      "type": "array",
      "metadata": {
        "description": "Roles created in the cloud service application"
      }
    },
    "vnetName": {
      "type": "string",
      "defaultValue": "csesVNet",
      "metadata": {
        "description": "Name of virtual network"
      }
    },
    "subnetSetting": {
      "type": "array",
      "metadata": {
        "description": "Setting of subnets"
      }
    },
    "publicIPName": {
      "type": "string",
      "defaultValue": "contosocsIP",
      "metadata": {
        "description": "Name of public IP address"
      }
    },
    "upgradeMode": {
      "type": "string",
      "defaultValue": "Auto",
      "metadata": {
        "description": "UpgradeMode of the CloudService"
      }
    },
    "secrets": {
      "type": "array",
      "metadata": {
        "description": "The key vault id and certificates referenced in the .cscfg file"
      }
    },
    "rdpPublicConfig": {
      "type": "string",
      "metadata": {
        "description": "Public config of remote desktop extension"
      }
    },
    "rdpPrivateConfig": {
      "type": "securestring",
      "metadata": {
        "description": "Private config of remote desktop extension"
      }
    }
  },
  "variables": {
    "cloudServiceName": "[parameters('cloudServiceName')]",
    "subscriptionID": "[subscription().subscriptionId]",
    "lbName": "[concat(variables('cloudServiceName'), 'LB')]",
    "lbFEName": "[concat(variables('cloudServiceName'), 'LBFE')]",
    "resourcePrefix": "[concat('/subscriptions/', variables('subscriptionID'), '/resourceGroups/', resourceGroup().name, '/providers/')]"
  },
  "resources": [
    {
      "apiVersion": "2019-08-01",
      "type": "Microsoft.Network/virtualNetworks",
      "name": "[parameters('vnetName')]",
      "location": "[parameters('location')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "10.0.0.0/16"
          ]
        },
        "subnets": "[parameters('subnetSetting')]"
      }
    },
    {
      "apiVersion": "2020-10-01-preview",
      "type": "Microsoft.Compute/cloudServices",
      "name": "[variables('cloudServiceName')]",
      "location": "[parameters('location')]",
      "tags": {
        "DeploymentLabel": "[parameters('deploymentLabel')]",
        "DeployFromVisualStudio": "true"
      },
      "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
      ],
      "properties": {
        "packageUrl": "[parameters('packageSasUri')]",
        "configurationUrl": "[parameters('configurationSasUri')]",
        "upgradeMode": "[parameters('upgradeMode')]",
        "roleProfile": {
          "roles": "[parameters('roles')]"
        },
        "networkProfile": {
          "loadBalancerConfigurations": [
            {
              "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/loadBalancers/', variables('lbName'))]",
              "name": "[variables('lbName')]",
              "properties": {
                "frontendIPConfigurations": [
                  {
                    "name": "[variables('lbFEName')]",
                    "properties": {
                      "publicIPAddress": {
                        "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]"
                      }
                    }
                  }
                ]
              }
            }
          ]
        },
        "osProfile": {
          "secrets": "[parameters('secrets')]"
        },
        "extensionProfile": {
          "extensions": [
            {
              "name": "RDPExtension",
              "properties": {
                "autoUpgradeMinorVersion": true,
                "publisher": "Microsoft.Windows.Azure.Extensions",
                "type": "RDP",
                "typeHandlerVersion": "1.2.1",
                "settings": "[parameters('rdpPublicConfig')]",
                "protectedSettings": "[parameters('rdpPrivateConfig')]"
              }
            }
          ]
        }
      }
    }
  ]
}


Parameters:


 


Tips:


  • The roles and subnetSetting arrays must match the role names, sizes, instance counts and subnet names noted from the .csdef and .cscfg files. If your project contains several roles, list each of them. For example:


"roles": {
    "value": [
        {
            "name": "WebRole1",
            "sku": {
                "name": "Standard_D1_v2",
                "tier": "Standard",
                "capacity": "1"
            }
        },
        {
            "name": "WorkerRole1",
            "sku": {
                "name": "Standard_D1_v2",
                "tier": "Standard",
                "capacity": "2"
            }
        }
    ]
},
…
"subnetSetting": {
    "value": [
        {
            "name": "WebRole1_subnet",
            "properties": {
                "addressPrefix": "10.0.0.0/24"
            }
        },
        {
            "name": "WorkerRole1_subnet",
            "properties": {
                "addressPrefix": "10.0.1.0/24"
            }
        }
    ]
},



  • In the secrets part, sourceVault is the resource ID of your Key Vault, constructed as /subscriptions/{subscription-id}/resourceGroups/{resourcegroup-name}/providers/Microsoft.KeyVault/vaults/{keyvault-name}. The certificateUrl is the secret URL we noted in step 10.



  • In rdpPublicConfig and rdpPrivateConfig, only the username and password used to enable RDP need to be changed. For example, here I use “admin” as the username and “Password” as the password.


{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "cloudServiceName": {
            "value": "cstocses"
        },
        "location": {
            "value": "eastus"
        },
        "deploymentLabel": {
            "value": "deployment label of cstocses by ARM template"
        },
        "packageSasUri": {
            "value": "https://storageforcses.blob.core.windows.net/test/AzureCloudService2.cspkg?sp=r&st=2021-04-02T10:47:04Z&se=2021-04-02T18:47:04Z&spr=https&sv=2020-02-10&sr=b&sig=osktC5FtJpI1uX28D2UMtJaZVi8FmhW6kpIHH%2FuFTUU%3D"
        },
        "configurationSasUri": {
            "value": "https://storageforcses.blob.core.windows.net/test/ServiceConfiguration.Cloud.cscfg?sp=r&st=2021-04-02T10:48:12Z&se=2021-04-02T18:48:12Z&spr=https&sv=2020-02-10&sr=b&sig=8BmMScBU%2Bm6hRkKtUoiRNs%2F2NHYiHay8qxJq5TM%2BkGU%3D"
        },
        "roles": {
            "value": [
                {
                    "name": "WebRole1",
                    "sku": {
                        "name": "Standard_D1_v2",
                        "tier": "Standard",
                        "capacity": "1"
                    }
                }
            ]
        },
        "vnetName": {
            "value": "cstocsesVNet"
        },
        "subnetSetting": {
            "value": [
                {
                    "name": "WebRole1_subnet",
                    "properties": {
                        "addressPrefix": "10.0.0.0/24"
                    }
                }
            ]
        },
        "publicIPName": {
            "value": "ReservedIPCSES"
        },
        "upgradeMode": {
            "value": "Auto"
        },
        "secrets": {
            "value": [
                {
                    "sourceVault": {
                        "id": "/subscriptions/4f27bec7-26bd-40f7-af24-5962a53d921e/resourceGroups/cstocses/providers/Microsoft.KeyVault/vaults/cstocses"
                    },
                    "vaultCertificates": [
                        {
                            "certificateUrl": "https://cstocses.vault.azure.net/secrets/csescert/e2f6ab1744374de38ae831ba8896edb9"
                        }
                    ]
                }
            ]
        },
        "rdpPublicConfig": {
            "value": "<PublicConfig>\r\n  <UserName>admin</UserName>\r\n  <Expiration>4/2/2022 12:00:00 AM</Expiration>\r\n</PublicConfig>"
        },
        "rdpPrivateConfig": {
            "value": "<PrivateConfig>\r\n  <Password>Password</Password>\r\n</PrivateConfig>"
        }
    }
}
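The rdpPublicConfig and rdpPrivateConfig values are XML fragments joined with literal \r\n sequences, which are easy to get wrong by hand. A small illustrative Python helper (not part of any official tooling; username, password and expiration are sample values) builds them:

```python
# Illustrative builder for the RDP extension config strings used in the
# parameter file. The strings use literal \r\n separators and the
# M/D/YYYY hh:mm:ss AM/PM date format seen in the example above.
from datetime import datetime
from xml.sax.saxutils import escape

def rdp_configs(username, password, expiration):
    # Build the expiration without zero-padded month/day (portable across OSes)
    exp = f"{expiration.month}/{expiration.day}/{expiration.year} " + expiration.strftime("%I:%M:%S %p")
    public = (f"<PublicConfig>\r\n  <UserName>{escape(username)}</UserName>\r\n"
              f"  <Expiration>{exp}</Expiration>\r\n</PublicConfig>")
    private = f"<PrivateConfig>\r\n  <Password>{escape(password)}</Password>\r\n</PrivateConfig>"
    return public, private
```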



  12. Deploy the ARM template, for example with the PowerShell command below. (PowerShell is not required; you can also use the Azure Portal or the Azure CLI to deploy the template.)


https://docs.microsoft.com/en-us/powershell/module/az.resources/new-azresourcegroupdeployment?view=azps-5.7.0


 


Please remember to replace the resource group name and the paths of the template and parameter JSON files in the command before running it.
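Before deploying, a quick local check that every parameter the template declares is either defaulted or supplied in the parameter file can catch typos early. An illustrative Python sketch (file paths are placeholders):

```python
# Illustrative pre-deployment check: list template parameters that have no
# defaultValue and are missing from the parameter file.
import json

def missing_parameters(template_path, parameter_path):
    with open(template_path) as f:
        template = json.load(f)
    with open(parameter_path) as f:
        supplied = json.load(f).get("parameters", {})
    missing = []
    for name, spec in template.get("parameters", {}).items():
        if "defaultValue" not in spec and name not in supplied:
            missing.append(name)
    return missing  # empty list means the deployment will not fail on missing parameters
```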


            


 


 


 


 


 

New-AzResourceGroupDeployment -ResourceGroupName "cstocses" -TemplateFile "C:\Users\jerryz\Desktop\CSES test\demo\template.json" -TemplateParameterFile "C:\Users\jerryz\Desktop\CSES test\demo\parameter.json"

 


 


 


 


 


 


ARM template deployment result


 


 


Result: (The classic Cloud Service is deleted)


All created resources in this process


 


 


                  

Google Releases Security Updates for Chrome

This article is contributed. See the original author and article here.

Google has updated the stable channel for Chrome to 89.0.4389.128 for Windows, Mac, and Linux. This version addresses vulnerabilities that an attacker could exploit to take control of an affected system. 

CISA encourages users and administrators to review the Chrome release and apply the necessary changes.

New ways to train custom language models – effortlessly!

New ways to train custom language models – effortlessly!

This article is contributed. See the original author and article here.



Haim Sabo, Senior Software Engineer at Video Indexer, AEDPLS

*This article was originally published on July 18, 2019, on Microsoft Azure blogs.

 



Video Indexer (VI), the AI service for Azure Media Services, enables the customization of language models by allowing customers to upload examples of sentences or words belonging to the vocabulary of their specific use case. Since speech recognition can sometimes be tricky, VI enables you to train and adapt the models to your specific domain. Harnessing this capability allows organizations to improve the accuracy of the transcriptions Video Indexer generates in their accounts.


Over the past few months, we have worked on a series of enhancements to make this customization process even more effective and easy to accomplish. Enhancements include automatically capturing any transcript edits done manually or via API as well as allowing customers to add closed caption files to further train their custom language models.


The idea behind these additions is to create a feedback loop in which organizations begin with a base out-of-the-box language model and gradually improve its accuracy through manual edits and other resources over time, resulting in a model that is fine-tuned to their needs with minimal effort.


An account’s custom language models, and all the enhancements this blog describes, are private and are not shared between accounts.


In the following sections, I will drill down on the different ways that this can be done.


Improving your custom language model using transcript updates


Once a video is indexed in VI, customers can use the Video Indexer portal to introduce manual edits and fixes to the automatic transcription of the video. This can be done by clicking on the Edit button at the top right corner of the Timeline pane of a video to move to edit mode, and then simply update the text, as seen in the image below.


 


HaimSabo_0-1615982726876.png

 


The changes are reflected in the transcript, captured in a text file named From transcript edits, and automatically inserted into the language model used to index the video. If you were not already using a custom language model, the updates will be added to a new Account Adaptations language model created in the account.


You can manage the language models in your account and see the From transcript edits files by going to the Language tab in the content model customization page of the VI website.


Once one of the From transcript edits files is opened, you can review the old and new sentences created by the manual updates, and the differences between them as shown below.


HaimSabo_1-1615982726984.png

 


All that is left to do is click Train to update the language model with the latest changes. From that point on, these changes will be reflected in all future videos indexed using that model. Of course, you do not have to use the portal to train the model; the same can be done via the Video Indexer train language model API. Using the API opens new possibilities, such as automating a recurring training process to leverage ongoing updates.


HaimSabo_2-1615982727230.png

 


There is also an update video transcript API that allows customers to update the entire transcript of a video in their account by uploading a VTT file that includes the updates. As part of the new enhancements, when a customer uses this API, Video Indexer automatically adds the uploaded transcript to the relevant custom model in order to leverage the content as training material. For example, calling update video transcript for a video titled “Godfather” will result in a new transcript file named “Godfather” in the custom language model that was used to index that video.


Improving your custom language model using closed caption files


Another quick and effective way to train your custom language model is to leverage existing closed caption files as training material. This can be done manually, by uploading a new closed caption file to an existing model in the portal, as shown in the image below, or by using the create language model and update language model APIs to upload VTT, SRT or TTML files (similar to what was done until now with TXT files).


 


HaimSabo_3-1615982727119.png

 


Once uploaded, VI cleans up all the metadata in the file and strips it down to the text itself. You can see the before and after results in the following table.


 

Type: VTT

Before:
NOTE Confidence: 0.891635
00:00:02.620 --> 00:00:05.080
but you don’t like meetings before 10 AM.

After:
but you don’t like meetings before 10 AM.

Type: SRT

Before:
2
00:00:02,620 --> 00:00:05,080
but you don’t like meetings before 10 AM.

After:
but you don’t like meetings before 10 AM.

Type: TTML

Before:
<!-- Confidence: 0.891635 -->
<p begin="00:00:02.620" end="00:00:05.080">but you don’t like meetings before 10 AM.</p>

After:
but you don’t like meetings before 10 AM.
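The clean-up shown above can be approximated in a few lines of Python. This is an illustrative sketch of the idea, not Video Indexer's actual implementation:

```python
# Illustrative approximation of caption clean-up: drop cue numbers, timestamps,
# NOTE/WEBVTT header lines and markup, keeping only the spoken text.
import re

def strip_captions(text):
    lines = []
    for line in text.splitlines():
        line = re.sub(r"<[^>]+>", "", line).strip()  # remove TTML tags and XML comments
        if not line or line.isdigit():               # blank lines and SRT cue numbers
            continue
        if line.startswith(("WEBVTT", "NOTE")):      # VTT header and NOTE blocks
            continue
        if re.match(r"\d{2}:\d{2}:\d{2}[.,]\d{3} --> ", line):  # timestamp cues
            continue
        lines.append(line)
    return "\n".join(lines)
```

Each of the three “Before” samples in the table reduces to the same single line of spoken text.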

From that point on, all that is left to do is review the additions to the model and click Train, or use the train language model API to update the model.


Next Steps


The new additions to the custom language model training flow make it easy for you and your organization to get more accurate transcription results. Now it is up to you to add data to your custom language models, using any of the ways we have just discussed, to get more accurate results the next time you index your videos.


Have questions or feedback? We would love to hear from you! Use our UserVoice page to help us prioritize features or use Video Indexer’s Stackoverflow page for any questions you have around Video Indexer.