Zero Trust, The Essentials video series

This article is contributed. See the original author and article here.

This video series shows how you can adopt a Zero Trust approach to security and benefit from the core ways Microsoft can help. In the past, your defenses may have focused on protecting network access with on-premises firewalls and VPNs, assuming everything inside the network was safe. But as corporate data footprints have expanded beyond the corporate network, into the cloud or a hybrid of both, the Zero Trust security model has evolved to address a more holistic set of attack vectors.


 




 


Based on the principles of “verify explicitly,” “apply least-privilege access,” and “always assume breach,” Zero Trust establishes a comprehensive control plane across multiple layers of defense:



  • Identity

  • Endpoints

  • Applications

  • Network

  • Infrastructure

  • Data


Introduction to Zero Trust 


Identity:


Join our host, Jeremy Chapman, as he unpacks the foundational layer of the model: identity. As the primary control plane for Zero Trust, identity acts as the front door for people, service accounts, and devices as each requests access to resources. Identity is at the core of the Zero Trust concepts: never trust, always verify, and grant the appropriate level of access through the principle of least privilege.


 


Zero Trust | Identity


Endpoints & Applications:


See how you can apply Zero Trust principles and policies to your endpoints and apps, the conduits for users to access your data, network, and resources. For Zero Trust, endpoints refer to the devices people use every day, both corporate and personally owned computers and mobile devices. The prevalence of remote work means devices can connect from anywhere, and the controls you apply should correlate to the level of risk at those endpoints.


 


For corporate-managed endpoints that run within your firewall or VPN, you will still want to apply the principles of Zero Trust: verify explicitly, apply least-privilege access, and assume breach. Jeremy Chapman walks through your options, controls, and recent updates for implementing the Zero Trust security model.


 


Zero Trust | Endpoints & Applications

Microsoft Viva, The Essentials video series


This article is contributed. See the original author and article here.



 


This series of videos shows team leaders and admins the underlying tech and options for enabling and configuring the four core modules of Microsoft Viva. Viva is the new employee experience platform that connects learning, insights, resources, and communication. It has a unique set of curated and AI-enriched experiences built on top of, and integrated with, the foundational services of Microsoft 365.


 



Introduction to Microsoft Viva 


 


Microsoft Viva’s 4 core modules:



  • Viva Topics — builds a knowledge system for your organization

  • Viva Connections — boosts employee engagement

  • Viva Learning — creates a central hub to discover learning content and build new skills

  • Viva Insights — recommends actions to help improve productivity and wellbeing


 


Microsoft Viva Topics:


Viva Topics builds a system that transforms information into knowledge and actively delivers it to you in the context of your work. As many of us are working remotely or in more hybrid office environments, it can be harder to stay informed. With Topics, we connect you to the knowledge and the people closest to it. CJ Tan, Lead Program Manager, joins host Jeremy Chapman to cover the overall experience for users, knowledge managers, and admins.


 


Microsoft Viva Topics


 


Microsoft Viva Connections:


Viva Connections is specifically about boosting employee engagement. This spans everyone in your organization, from everyday users and specific groups in departments to frontline workers. It expands on your SharePoint home site and newsfeed and is designed to offer a destination that delivers personalized news, conversations, and commonly used resources. Adam Harmetz, lead engineer, joins host Jeremy Chapman to walk through the user experience, how to set it up, and options for personalizing information sharing by role.


 


Microsoft Viva Connections


 


Microsoft Viva Learning:


With Viva Learning, you have a center for personalized skill development that offers a unique social experience where learning content is available in the flow of work. It recommends and manages the progress of key trainings all from one place and is built on top of SharePoint, Microsoft Search, Microsoft Teams, Microsoft Graph, and Substrate. Swati Jhawar, Principal Program Manager for Microsoft Viva, joins Jeremy Chapman to share options for setup, learning content curation, and integration with your existing learning management system.


 


Microsoft Viva Learning


 


Microsoft Viva Insights:


With hybrid work at home and in the office as the new normal, Viva Insights gives individuals, managers, and leaders the insight to develop healthier work habits and a better work environment. It is an intelligent experience designed to leverage MyAnalytics, Workplace Analytics, and Exchange Online to deliver insights that recommend actions to help prioritize well-being and productivity. Engineering leader Kamal Janardhan joins Jeremy Chapman for a deep dive and a view of your options for configuration.


 


Microsoft Viva Insights

Deploy PyTorch models with TorchServe in Azure Machine Learning online endpoints

This article is contributed. See the original author and article here.

With our recent announcement of support for custom containers in Azure Machine Learning comes support for a wide variety of machine learning frameworks and servers including TensorFlow Serving, R, and ML.NET. In this blog post, we’ll show you how to deploy a PyTorch model using TorchServe.


The steps below reference our existing TorchServe sample here.


 


Export your model as a .mar file


To use TorchServe, you first need to export your model in the Model Archive (.mar) format. Follow the PyTorch quickstart to learn how to do this for your PyTorch model.


Save your .mar file in a directory called “torchserve.”
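
As a sketch, here is roughly what that export looks like for the densenet161 example used below, via the torch-model-archiver tool. The file names are illustrative assumptions; substitute the model file, weights, and handler for your own model as shown in the quickstart.

```shell
# Illustrative invocation; file names are assumptions, not the sample's exact files.
mkdir -p torchserve
torch-model-archiver \
  --model-name densenet161 \
  --version 1.0 \
  --model-file model.py \
  --serialized-file densenet161-8d451a50.pth \
  --handler image_classifier \
  --extra-files index_to_name.json \
  --export-path torchserve
```

This writes densenet161.mar into the "torchserve" directory, which is the model store the Dockerfile in the next step points at.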


 


Construct a Dockerfile


In the existing sample, we have a two-line Dockerfile:


 


 

FROM pytorch/torchserve:latest-cpu

CMD ["torchserve","--start","--model-store","$MODEL_BASE_PATH/torchserve","--models","densenet161.mar","--ts-config","$MODEL_BASE_PATH/torchserve/config.properties"]

 


 


Modify this Dockerfile to pass the name of your exported model from the previous step for the “–models” argument.
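
For example, if your exported archive were named my_model.mar (a hypothetical name), the CMD line would become:

```dockerfile
CMD ["torchserve","--start","--model-store","$MODEL_BASE_PATH/torchserve","--models","my_model.mar","--ts-config","$MODEL_BASE_PATH/torchserve/config.properties"]
```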


 


Build an image


Now, build a Docker image from the Dockerfile in the previous step, and store this image in the Azure Container Registry associated with your workspace:


 


 

WORKSPACE=$(az config get --query "defaults[?name == 'workspace'].value" -o tsv)
ACR_NAME=$(az ml workspace show -w $WORKSPACE --query container_registry -o tsv | cut -d'/' -f9-)

if [[ $ACR_NAME == "" ]]
then
    echo "ACR login failed, exiting"
    exit 1
fi

az acr login -n $ACR_NAME
IMAGE_TAG=${ACR_NAME}.azurecr.io/torchserve:8080
az acr build $BASE_PATH/ -f $BASE_PATH/torchserve.dockerfile -t $IMAGE_TAG -r $ACR_NAME
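
As an aside, the `cut -d'/' -f9-` above simply pulls the registry name (the ninth '/'-separated field) out of the full ARM resource ID that `az ml workspace show` returns. With a made-up resource ID, the extraction works like this:

```shell
# Hypothetical resource ID for illustration; real IDs share this shape.
RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.ContainerRegistry/registries/myregistry"
# Split on '/' and keep everything from the ninth field on: the registry name.
ACR_NAME=$(echo "$RESOURCE_ID" | cut -d'/' -f9-)
echo "$ACR_NAME"   # myregistry
```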

 


 


Test locally


Ensure that you can serve your model by doing a local test. You will need to have Docker installed for this to work. Below, we show you how to run the image, download some sample data, and send a test liveness and scoring request.


 


 

# Run image locally for testing
docker run --rm -d -p 8080:8080 --name torchserve-test \
    -e MODEL_BASE_PATH=$MODEL_BASE_PATH \
    -v $PWD/$BASE_PATH/torchserve:$MODEL_BASE_PATH/torchserve $IMAGE_TAG

# Check Torchserve health
echo "Checking Torchserve health..."
curl http://localhost:8080/ping

# Download test image
echo "Downloading test image..."
wget https://aka.ms/torchserve-test-image -O kitten_small.jpg

# Check scoring locally
echo "Sending test image for scoring..."
curl http://localhost:8080/predictions/densenet161 -T kitten_small.jpg

docker stop torchserve-test

 


 


Create endpoint YAML


Create a YAML file that specifies the properties of the managed online endpoint you would like to create. In the example below, we specify the location of the model we will use as well as the Azure Virtual Machine size to use when deploying.


 


 

$schema: https://azuremlsdk2.blob.core.windows.net/latest/managedOnlineEndpoint.schema.json
name: torchserve-endpoint
type: online
auth_mode: aml_token
traffic:
  torchserve: 100

deployments:
  - name: torchserve
    model:
      name: torchserve-densenet161
      version: 1
      local_path: ./torchserve
    environment_variables:
      MODEL_BASE_PATH: /var/azureml-app/azureml-models/torchserve-densenet161/1
    environment:
      name: torchserve
      version: 1
      docker:
        image: {{acr_name}}.azurecr.io/torchserve:8080
      inference_config:
        liveness_route:
          port: 8080
          path: /ping
        readiness_route:
          port: 8080
          path: /ping
        scoring_route:
          port: 8080
          path: /predictions/densenet161
    instance_type: Standard_F2s_v2
    scale_settings:
      scale_type: manual
      instance_count: 1
      min_instances: 1
      max_instances: 2

 


 


Create endpoint


Now that you have tested locally and you have a YAML file, you can create your endpoint:


 


 

az ml endpoint create -f $BASE_PATH/$ENDPOINT_NAME.yml -n $ENDPOINT_NAME

 


 


Send a scoring request


Once your endpoint finishes deploying, you can send it unlabeled data for scoring:


 


 

# Get accessToken
echo "Getting access token..."
TOKEN=$(az ml endpoint get-credentials -n $ENDPOINT_NAME --query accessToken -o tsv)

# Get scoring url
echo "Getting scoring url..."
SCORING_URL=$(az ml endpoint show -n $ENDPOINT_NAME --query scoring_uri -o tsv)
echo "Scoring url is $SCORING_URL"

# Check scoring
echo "Sending test image for scoring..."
curl -H "Authorization: Bearer $TOKEN" -T kitten_small.jpg $SCORING_URL

 


 


Delete resources


Now that you have successfully created and tested your TorchServe endpoint, you can delete it.


 


 

# Delete endpoint
echo "Deleting endpoint..."
az ml endpoint delete -n $ENDPOINT_NAME --yes

# Delete model
echo "Deleting model..."
az ml model delete -n $AML_MODEL_NAME --version 1

 


 


Next steps


Read our documentation to learn more and see our other samples.


 

Join in the Azure Sentinel Hackathon 2021!


This article is contributed. See the original author and article here.



 


Today, we are announcing the 2nd annual hackathon for Azure Sentinel! This hackathon challenges security experts around the globe to build end-to-end cybersecurity solutions for Azure Sentinel that deliver enterprise value by collecting data, managing security, and detecting, hunting, investigating, and responding to constantly evolving threats. We invite you to participate for a chance to solve this challenge and win a piece of the $19,000 cash prize pool*. This online hackathon runs from June 21st to October 4th, 2021, and is open to individuals, teams, and organizations globally.



Azure Sentinel provides a platform for security analysts and threat hunters of all levels not only to leverage existing content like workbooks (dashboards), playbooks (workflow orchestrations), analytics rules (detections), and hunting queries, but also to build custom content and solutions. Furthermore, Azure Sentinel provides APIs for integrating different types of applications with Azure Sentinel data and insights. Here are a few examples of end-to-end solutions that unlock the potential of Azure Sentinel and drive enterprise value.




You can discover more examples by reviewing the content and solutions in the Azure Sentinel GitHub repo and blogs. You can refer to last year’s Azure Sentinel Hackathon for ideas too!


 


Prizes


In addition to learning more about Azure Sentinel and delivering cybersecurity value to enterprises, this hackathon offers the following awesome prizes for top projects:



  • First Place (1) – $10,000 USD cash prize  

  • Second Place (1) – $4000 USD cash prize

  • Runners Up (2) – $1500 USD cash prize each 

  • Popular Choice (1) – $1000 USD cash prize

  • The first 10 eligible submissions also qualify to receive $100 each.


Note: Refer to the hackathon official rules for details on the project types that qualify for each prize category.


In addition, the four winning projects will be heavily promoted on Microsoft blogs and social media so that your creative projects become widely known. The judging criteria are quality of the idea, value to enterprises, and technical implementation. Refer to the Azure Sentinel Hackathon website for further details and to get started.


 


Judging Panel


Judging commences immediately after the hackathon submission window closes on October 4th, 2021. We’ll announce the winners on or before October 27th, 2021. Our judging panel currently includes the following influencers and experts in the cybersecurity community.



  • Ann Johnson – Corporate Vice President, Cybersecurity Solutions Group, Microsoft

  • Vasu Jakkal – Corporate Vice President, Microsoft Security, Compliance and Identity

  • John Lambert – Distinguished Engineer and General Manager, Microsoft Threat Intelligence Center

  • Nick Lippis – Co-Founder, Co-Chair ONUG

  • Andrii Bezverkhyi – CEO & founder of SOC Prime, inventor of Uncoder.IO


 


Next Steps



Let the #AzureSecurityHackathon begin!


 


*No purchase necessary. Open only to new and existing Devpost users who are the age of majority in their country. Game ends October 4th, 2021 at 9:00 AM Pacific Time. Refer to the official rules for details. 


 

Migrate & Modernize Linux VMs and Databases into Azure


This article is contributed. See the original author and article here.

Look at Azure as a platform for running your Linux virtual machine and open source database workloads. Check out options for how you can lift and shift existing VMs and databases to Azure and modernize them using cloud native approaches. Matt McSpirit, from the Azure engineering team, joins Jeremy Chapman to show how Azure supports open source platforms across operating systems, with different Linux distros as well as their publishers and open source databases.


 




 


Azure has been working with Red Hat, SUSE, Canonical, Flatcar, Elastic, and HashiCorp, and with open source databases like MySQL, Postgres, Cassandra, and MariaDB for years. More than 60% of our marketplace solutions run on Linux, and we support open source native PaaS services as well. Beyond the workload level, we contribute back to the upstream Linux and Kubernetes communities that many modern and cloud native architectures rely on.


 



 





QUICK LINKS:


01:09 — Run Linux VMs in Azure


03:01 — Move an open source app from on-prem into Azure


06:04 — How to migrate VMs


07:36 — How to move database into Azure


10:52 — Repackage your VM to run as a container


12:40 — Configure an app


13:31 — Other options


14:48 — Wrap up


 


Link References:


To find information related to Linux running on Azure, check out https://azure.com/Linux


Go to Azure migrate and test out a migration at https://aka.ms/azuremigrate


Find the tools to migrate your data stores at https://aka.ms/datamigration


Deploy Red Hat solutions on Azure at https://Azure.com/RedHat


Run SUSE Linux on Azure at https://Azure.com/SUSE


For more on Azure, go to https://Azure.com/AzureMigrate


 


Unfamiliar with Microsoft Mechanics?


We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


To keep getting this insider knowledge, join us on social:











– Up next, we’ll look at Azure as a platform for running your Linux virtual machine and open source database workloads, from your options for how you can lift and shift existing VMs and databases to Azure, and also modernize them using cloud-native approaches. So, today I’m joined by Matt McSpirit from the Azure engineering team, and also no stranger to Microsoft Mechanics. Welcome back.


 


– Hi, thanks for letting me back on.


 


– And thanks for joining us today. So, we wanted to do a show based on Azure support for open source platforms across operating systems, with different Linux distros and also their publishers and open source databases. You might be surprised that more than 50% of the workloads running in Azure are actually running on Linux VMs.


 


– Yeah, that’s right. And we’ve been working with Red Hat, SUSE, Canonical, Flatcar, Elastic and HashiCorp, and open source databases like MySQL, Postgres, Cassandra, MariaDB, and more for years. And it’s actually more than 60% of our marketplace solutions that run on Linux, and we support open source native PaaS services too. Then beyond the workload level, we also contribute back to the upstream Linux and Kubernetes communities that many of the modern and cloud native architectures rely on.


 


– Okay. So let’s unpack this a bit more starting with your options then that we have for running infrastructure in VMs. So, if you’re a Linux shop maybe running multiple platforms, what’s the best way to think about running Linux VMs in Azure?


 


– Well, nearly every business or organization out there is running on multiple platforms. So there isn’t really a concept of a pure Linux shop or a Microsoft shop these days. And both platforms have a lot of pros and cons, and we’ve done a ton of work for performance, reliability, manageability, and security to make Azure the best home for running any open source workload. Now, starting at the foundational level; as I mentioned, we’re working with the leading Linux distros to optimize the kernels for Azure, including tuning the kernel for a Hypervisor, providing updates and patches and performance enhancements for running open source databases. And we also work closely with Red Hat for managed services like Azure Red Hat OpenShift, and SUSE with SAP enhancements. So we want to make sure when you bring your workloads to Azure, there’s benefit in every step of the way, from onboarding to operation, and you gain more security than you may have had on premises, in your private cloud or in another cloud. And whether you’re starting green field or bringing what you’ve already got running through Azure, we’ve got you covered.


 


– Okay, so spinning up a couple of VMs, I think from Azure is pretty straightforward. But, what if you got dozens or hundreds of VMs that constitute your apps, how would I bring those into Azure?


 


– Well, Azure Migrate is, as we know, your one-stop shop in Azure for bringing in virtual machines, databases, complete applications, even entire VMware sites into Azure. And of course, you can rebuild or rehydrate everything using automation for the apps you install in VMs running in Azure. And those will work the same as you’d expect. But unless you’ve fully automated that process, you’ll likely save a ton of time using Azure Migrate.


 


– Great. So, let’s make this real though. Can you show us how you’d move an open source app then from an on-prem system into Azure?


 


– Sure. Now, first I’ll start by showing you our app called Airsonic. It’s an open source Java app that you can find on GitHub and it’s used to host podcasts, as you can see here. Now, it’s running in an on-prem VMware environment and consists of a frontend VM running on Apache Tomcat on Ubuntu, and a backend VM with MySQL also running on Ubuntu. And I want to migrate and modernize the app. So here’s what I’m going to do. We want to start by lifting and shifting the frontend VM into Azure. Now, as I mentioned, the backend database is running in MySQL on a Linux VM. And instead of lifting and shifting that, I’m going to migrate the data directly from the VM into Azure MySQL, a PaaS service, so that I don’t have to worry about managing that VM once the data’s in Azure. And finally, we’ll take the frontend VM from the first step and containerize it so that it runs as a container in the Azure Kubernetes Service. And this step is all about modernization and being able to take advantage of cloud native, scalable compute. And of course, that last step of containerizing the app, well, you can do that from anywhere. It doesn’t need to be currently residing in Azure. I just wanted to start by showing you a lift and shift VM-to-VM migration, because it’s probably where most people will start. So, I’m in Azure Migrate, I’ve already performed a VM discovery on my on-premises VMware environment. And you can see we’ve got hundreds of Linux virtual machines here.


 


– Right. And by the way, if you want to see how that process works for Azure Migrate, we’ve got a complete step-by-step guide. So check out our recent show on Azure virtual machine migration using Azure Migrate. Now the process is the same by the way, for both Linux as well as Windows.


 


– Absolutely. So in my case, since I’ve already run the discovery, I just need to search for the VMs I want to migrate. So in this case, I’m going to search for Woodgrove, and you’ll see that these two VMs that make up our app both are running Ubuntu with two cores and four gigs of RAM. And if I click into software inventory, for this one you’ll see everything running in each machine. I can also see dependencies, which is all the TCP/IP traffic connecting to our VM. This way I can ensure I migrate everything I need to. Now looking at Port 3306, you’ll see my SQL server. And if I switch tabs to my assessment and click into assess Azure VM, then edit our assessment properties, you’ll see all of the options for basic properties, VM size and pricing. Now I’m going to close that and next, I’ll create an assessment using those two VMs. I’ll give it a name, AirsonicPAYG, pay as you go. Now I’ll create a group with my two VMs and I’ll call it Airsonic. Then I’ll search for Woodgrove again, select my two VMs and finally hit create assessment. Now, if I click into the assessment, you’ll see the details for Azure readiness, with cost details for storage and compute. And when I click into Azure readiness, I can also see the recommended VM size, in my case both the standard F1. If I click into cost details, you’ll see the cost estimates broken down by compute and storage. Now I can tweak all of these values, but everything looks good to me and we can start migrating.


 


– So now that you have the assessment complete, how do you go about doing the actual migration?


 


– Well, with Azure Migrate, you could move both these VMs and the tools even scale to thousands of VMs if you’ve got a load of apps. But in my case, I’m only replicating the one VM for the front end. So, I’m still in Azure Migrate and below the assessment tools are the migration tools. So I’m going to click into replicate. I’m going to choose my virtualization platform, in this case VMware, select my appliance and hit next. And here I could import my assessment to migrate. But since I just want to migrate the one VM, I’ll specify the settings manually. I’m going to search again for Woodgrove and choose the Airsonic frontend VM and hit next. And here I need to enter standard options in Azure as the target settings, like location, subscription, VNet, etc. Now to save time everything’s filled in, and so I’m going to hit next. Now I can choose my VM size and I’ll just pick a standard D2a_v4 and hit next. I’ll keep all selected in the disks tab, and hit next. And now I’m ready to replicate. So let’s go ahead and do that. That’ll save the contents of the VM into my storage account. And back on the Azure Migrate tab, you’ll see our replication has started. If I click into it I can test from here, but to save a step, I’m just going to go straight into Migrate. Select my server and hit migrate, and in just a moment, it’s now a running clone of my original VM. And as you can see, it’s running now in Azure. Here, I could just continue using the on-premise database if I needed to keep it on-prem, but I’d just need to make sure this VM could reach it, and then just redirect the app’s DNS settings to this new IP.


 


– Okay. So in this case, your app is still running, but now you’ve got the front end as a VM in Azure, but the database is left running as VM in your on-prem environment. So, how would I move the database then into Azure?


 


– Well, we’ve got tools that can help with that as well. So, instead of replicating and migrating as a VM, I’m going to convert it to the PaaS service. That way in the future, I don’t need to worry about managing that underlying VM. So let’s do that. We’ll use the Azure Database Migration Service to migrate our database’s contents to Azure MySQL. Now DMS works for both MySQL and Postgres. The first thing I needed to do in this case was create a MySQL instance in Azure, which I have done in advance. Now in the MySQL Workbench, I set up a tab for my source VM’s database on-prem and one for my target in Azure. Knowing the source, I can see all of my tables, but in the target you’ll see there aren’t any tables. Now using the CLI I’ll run a MySQL dump against my source database. Enter the source database password. And with this command, I can see it’s already created a dump, and this is a pretty small database. Now I just need to import the dump I just created into our target database in Azure with this command containing the address. I’ll enter the target database’s password, and this is just copying the schema and table structure over to the target instance, but not the data yet. And once my dump has moved into the target, I can go back to the MySQL Workbench, and in the target database tab I’ll refresh and I can now see all of my tables are there. But they’re still empty, so let’s fill them. So next I’ll head over to the Database Migration Service in Azure. I’ve already created a DMS instance in advance, but now I’ll create a migration project. I’ll give it a name, Airsonic test migration. I’ll choose the target server type and you’ll see additional options here, including Postgres and Mongo, but we’re using MySQL. And now I’ll hit create and run activity. It’s going to ask me for source details, server name or IP. I’ll enter the IP. I’m going to keep Port 3306, enter my username, root, and my password, and then I’ll move to the target settings.
So I’m going to paste in the server name, its address, my username, Airsonic admin and password. And now I can choose the databases I want to migrate. I only need to move the Airsonic one. So next, I’ll configure the migration settings and you’ll see DMS found our 36 tables. I’ll take a look at everything and keep all the tables selected and move on to the summary. And now I just need to give the activity a name, Airsonic migration 1, and hit start migration. Now I can monitor everything here, but since our database is only a little bigger than two MB, if I hit refresh, you’ll see it’s complete. Now, if I click into it, you’ll see my tables are all complete as well. So now, it’s up and running in our Azure instance. And if you have a large database, we’ve got super fast, parallelized, Azure DMS migration for MySQL, where we’ve been able to burst up to 188 GiB per hour, which is great to minimize the servicing window. And if you need to, you can even migrate deltas between the first and final migration. Now, finally, we would normally update the connection strings so our app knows about the new location of our database. But we’re going to wait, because I want to modernize the front end to run in containers, and that’s going to take just a few minutes. But this will also open up better scalability, and I won’t need to maintain the Ubuntu VM.
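
The schema dump-and-import step described above amounts to something like the following. The host names, user names, and database name here are placeholders for illustration, not the exact values from the demo:

```shell
# Placeholders throughout; substitute your own source VM IP, Azure MySQL
# server name, user names, and database name.

# Export only the schema and table structure from the source VM:
mysqldump --no-data -h 10.0.0.4 -u root -p airsonic > airsonic_schema.sql

# Import that schema into the Azure Database for MySQL target:
mysql -h contoso-mysql.mysql.database.azure.com -u airsonicadmin -p airsonic < airsonic_schema.sql
```

The data itself is then moved by the Database Migration Service activity described above, which is why the dump uses --no-data.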


 


– So how do you repackage or convert then your front end VM and everything inside of it so it can run as a container in Azure without having to rewrite the app?


 


– So for that, I can use another tool from Azure Migrate called app containerization to containerize the front end VM without needing to rewrite the application. So under explore more in the Web apps to Container section, I’ll click into app containerization. And you’ll see this works with ASP.NET apps running in AKS and Java web apps running on Apache Tomcat. Now from there, I’ll download the tool and with it running, I’ll need to specify the app type, in our case, Java. I’m going to walk through the pre-recs and make sure SSH is enabled, then continue. And now I need to log in with my Azure credentials, select my tenant and subscription and continue. And now I can enter the IP address or fully qualified domain name of my server. And I’ll use the IP here and enter the username and password. Now, once I validate connectivity, I can continue and that will discover the apps running on my server, and you’ll see it found our Airsonic app. So I just need to select it and enter the target container, Airsonic V1. And here, I’ll take a look at the parameters. I’ll keep all of these settings and check all the boxes for username, password, and URL and click apply. Now I need to make a slight edit to the app folder location to map everything correctly. So I’ll add a new folder and enter the path. Then I’ll change the target storage to a persistent volume and save my changes. And now we can move on to the build. Now I just need to choose my Azure container registry and select my container. I’ll hit build. And after a moment, it succeeds. So now I can move on to deployment specifications, where I just need to select my AKS cluster called Contoso-AKS, and then continue. Now I’ll specify my file share with subscription, storage account, and the file share. Now to save time, I’ve configured these in advance. Now in the deployment configuration, I need to configure the app. 
So here’s where I can check the prefix string, ports, replicas, load balancer type, and I’ll keep what’s there, and enter my username, password, and the URL. I’ll keep the storage config as it is, and I’ll hit apply. And now we’re ready to deploy. So I’ll do that, and once it’s finished, I can click into the link here and see the public IP and the resources. And just to test this out, I’m going to go into kubectl and run a get service command, and you’ll see our Airsonic containers running along with the external IP and port that we just saw. So now let’s try this in the browser. I’m going to paste in the address and there’s our app, and I’ll make sure everything works and it’s looking good. So now we’ve migrated our app into Azure and even modernized it to use MySQL PaaS and cloud native containers. So it’s going to be way more scalable and easier to manage.


 


– Okay. So now that your app is running in Azure, what are some of the other things that you can do?


 


– Well, there’s a lot. But to look at just a few highlights, firstly, there are a ton of options to take advantage of for high availability. Those start with VM options for availability sets and availability zones for redundancy, all the way to disaster recovery solutions to ensure your services are as resilient as you need them to be. Now moving up the stack into the management layer, we’ve also integrated Linux and open source databases. So, all the scaling and elasticity in Azure works for Linux, such as virtual machine scale sets, and diagnostics, monitoring, and software update management are all built in. You can use the Azure portal to manage all of the Linux-based services. And you can take advantage of proactive security with Azure Defender and the Azure Security Center to keep your infrastructure and your data protected. Plus, there are AI and ML capabilities that can be applied to your Linux stack, enhancing your applications and workloads with cognitive services or machine learning services. And if your organization uses managed Linux services, we’ve worked closely with Red Hat and SUSE to offer unique, integrated support experiences where you can raise tickets and our support team will work with Red Hat or SUSE support teams to triage cases together. And in fact, Azure is the only cloud service doing that today.


 


– Right. And these are just a few examples of how Microsoft is working with the open source community. So Matt, if anyone is watching and they want to get started, what do you recommend?


 


– To find just about everything related to Linux running on Azure, check out azure.com/linux. And once you’re ready to test out a migration, you can get to Azure migrate at aka.ms/azuremigrate. And to find the tools to migrate your data stores, check out aka.ms/datamigration. We’ve also got a ton of learning content available on Microsoft Learn.


 


– Thanks Matt, for the comprehensive overview and look at what Microsoft’s doing with the open source community and also how you’d bring your open source apps into Azure. Of course, keep watching Microsoft Mechanics for the news and deep dives in the latest tech. And be sure to subscribe if you haven’t already, and thanks so much for watching.