Extracting SAP data using OData – Part 2 – All About Parameters

This article is contributed. See the original author and article here.







Before implementing data extraction from SAP systems please always verify your licensing agreement.

 


OData services have become one of the most powerful interfaces in SAP systems. In the last episode, we built a simple pipeline that extracts business information from an OData service to a data lake and makes it available for further processing and analytics. We created all the required resources, including linked services and datasets, and used them to define the Copy Data activity. The extraction process ran without any issues, and we were able to display data from the lake.


 


But imagine you’d like to change the data source. Instead of Sales Orders, you’d like to get information about Business Partners. To make such a change, you’d have to go through all the resources and modify them. You’d have to alter the URL of the OData service, the target location and the entity. Quite a few changes! Alternatively, you could create a new set of objects, including the Copy Data activity. Neither solution is ideal. As your project grows, maintaining a large set of resources can become a tremendous job. Not to mention the likelihood of making a mistake!


 


Fortunately, there is a solution! Synapse pipelines are highly customizable, and we can use dynamic parameters. Instead of hardcoding the URL of the OData service, we can use a parameter and provide the value before the pipeline starts. You can also use the same approach to customize the target directory or entity name. Pretty much everything can be parameterized, and it’s up to you how flexible the pipeline will be.


 


Today I’ll show you how to use parameters to customize some of the resources. It is the first step towards making the pipeline metadata-driven. In the next episode, we’ll expand the solution even further and describe how to read parameters from an external service. This way you’ll be able to add or modify the OData service without making any changes to the pipeline.


 


DEFINING PARAMETERS


 









There is a GitHub repository with source code for each episode. Learn more:


https://github.com/BJarkowski/synapse-pipelines-sap-odata-public



 


Parameters are external values that you use to replace hardcoded text. You can define them for every resource – a pipeline, a dataset or a linked service all accept external values. To assign parameters at runtime, you use expressions, which I find very similar to Excel formulas. We will use this feature quite often in this blog series.


 


Let’s start by defining the initial set of parameters at the pipeline level. We will use them to set the URL, name and entity of the OData service. Open the pipeline we built last time. At the bottom of the screen, you’ll notice four tabs. Click the one named Parameters.


 


image001.png


 


When you click the Parameters tab, the window enlarges, revealing the “New” button to define parameters. Add three entries:


Name          Type
------------  ------
URL           String
ODataService  String
Entity        String

 


image003.png
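For reference, in the pipeline’s JSON definition these three parameters would appear roughly as follows. This is only a sketch showing the parameters section; the surrounding pipeline properties are omitted, and the layout mirrors the linked service definition shown later in this post:

```json
"parameters": {
    "URL": {
        "type": "String"
    },
    "ODataService": {
        "type": "String"
    },
    "Entity": {
        "type": "String"
    }
}
```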


Values passed to the pipeline as parameters can then be used in datasets and linked services. I want all files with extracted data to be saved in a directory named after the OData service. The directory structure should look as follows:


 


 


 

/odata/<OData_Service>/<Entity>

 


 


 


For example:


 


 


 

/odata/API_SALES_ORDER_SRV/A_SalesOrder

 


 


 


That’s quite easy. You define the target file location in the data lake dataset – therefore the first step is to modify it to accept external parameters. Open the resource definition and go to the Parameters tab. Create a new entry:

Name  Type
----  ------
Path  String

image005.png


 


Now we need to define the target location using the parameter. Open the Connection tab. Replace the current value in the directory field with the following expression to reference the Path parameter. The value that we pass to this parameter will be used as the directory name.


 


 


 

@dataset().Path

 


 


 


image007.png


 


The dataset now knows what to do with the value passed to the Path parameter, so the next step is to supply that value. Open the Copy Data activity and go to the Sink tab. You’ll notice an additional field under Dataset properties that wasn’t there before. The parameter defined at the dataset level is now waiting for a value.


 


As my directory hierarchy should be <ODataService>/<Entity>, I use the concat expression to combine parameters defined at the pipeline level:


 


 


 

@concat(pipeline().parameters.ODataService, '/', pipeline().parameters.Entity)

 


 


 


image009.png


 


The target dataset now accepts values passed from the pipeline. We can switch to the source dataset that points to the SAP system. As before, open the OData dataset and define two parameters. They will tell the pipeline which OData service and entity to extract.


 


Name      Type
--------  ------
ODataURL  String
Entity    String

 


image011.png


 


The dataset can use the Entity parameter directly to replace the value in the Path field, but the ODataURL parameter has to be passed down to the underlying Linked Service. Provide the following expression in the Path field on the Connection tab:


 


 


 

@dataset().Entity

 


 


 


 013b.png


 


 


Adding parameters to the Linked Service is slightly more difficult, as it requires modifying the JSON definition of the resource. So far, we’ve only used the user interface. Choose Manage from the left menu and then select Linked Services. To edit the source code of the Linked Service, click the {} icon next to its name:


 


image017.png


 


There are two changes to make. The first one is to define the parameter. Enter the following piece of code just under “annotations”:


 


 


 

"parameters": {
    "ODataURL": {
        "type": "String"
    }
},

 


 


 


The second change tells the Linked Service to substitute the URL of the OData service with the value of the ODataURL parameter. Change the value of “url” property with the following expression:


 


 


 

"@{linkedService().ODataURL}"

 


 


 


For reference here is the full definition of the Linked Service:


 


 


 

{
    "name": "ls_odata_sap",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "annotations": [],
        "parameters": {
            "ODataURL": {
                "type": "String"
            }
        },
        "type": "OData",
        "typeProperties": {
            "url": "@{linkedService().ODataURL}",
            "authenticationType": "Basic",
            "userName": "bjarkowski",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "s4hana"
            }
        },
        "connectVia": {
            "referenceName": "SH-IR",
            "type": "IntegrationRuntimeReference"
        }
    }
}

 


 


 


image019.png


Click Apply to save the settings. When you open the OData dataset you’ll notice the ODataURL parameter waiting for a value. Reference the dataset parameter of the same name:


 


 


 

@dataset().ODataURL

 


 


 


image021.png


The only thing left to do is to pass values to both of the dataset parameters. Open the Copy Data activity and go to the Source tab. There are two new fields that we can use to pass values to the dataset. To pass the address of the OData service, I concatenate the URL and ODataService parameters defined at the pipeline level. The Entity doesn’t require any transformation.


 


 


 

ODataURL: @concat(pipeline().parameters.URL, pipeline().parameters.ODataService)
Entity: @pipeline().parameters.Entity

 


 


 


image015.png


 


Publish your changes. We’re ready for the test run! We’ve replaced three hardcoded values, and now we don’t have to modify the pipeline, or any of the resources, whenever we want to extract data from another OData service. It is a great improvement as it makes the process more generic and easy to scale.


 


EXECUTION AND MONITORING


 


To verify the changes, run the pipeline twice, extracting data from two OData services. Previously, that would have required changes inside the pipeline. Now, whenever we start the extraction process, Synapse Studio asks us to provide the URL, OData service name and entity. We don’t change anything for the first run, and as before, we extract sales orders. For the second execution, we use the API_BUSINESS_PARTNER OData service to get the full list of customers.
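If you prefer triggering the run from the command line instead of Synapse Studio, the same parameter values can be passed with the Azure CLI. This is only a sketch: the workspace and pipeline names, host and port are placeholders, and the entity name is illustrative:

```shell
# Sketch: run the parameterized pipeline from the Azure CLI.
# Workspace name, pipeline name, host and port are placeholders.
az synapse pipeline create-run \
    --workspace-name <workspace-name> \
    --name <pipeline-name> \
    --parameters '{"URL": "https://<sap-host>:<port>/sap/opu/odata/sap/", "ODataService": "API_BUSINESS_PARTNER", "Entity": "A_BusinessPartner"}'
```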


 


image023.png


A moment of uncertainty. Have we made all the required changes?
No surprise this time, everything works as expected. We were able to extract data from both OData services. The target directory structure looks correct, and as planned, it consists of the OData service and entity names.


 


image025.png


 


The final test is to display extracted data.


 


image027.png


 


Today you’ve learnt how to use parameters to avoid hardcoding values in the pipeline. We’ve used three parameters that allow us to customize the URL, OData service name and entity. We will build on top of this next week to make the pipeline even more agile by creating a metadata database that stores all information about the OData services to fetch.

Azure App Service Automatic Scaling

This article is contributed. See the original author and article here.

Azure App Service currently provides two workflows for scaling: scale up and scale out.



  • Scale up: Get more CPU, memory, disk space, and extra features. You scale up by changing the pricing tier of the App Service plan that your app belongs to.

  • Scale out: Increase the number of VM instances that run your app. You can scale out to as many as 30 instances, depending on your pricing tier. App Service Environments in Isolated tier further increases your scale-out count to 100 instances. You can scale manually or automatically based on predefined rules and schedules.


 


These existing scaling workflows work well, but you may instead want the App Service platform to automatically scale your web app without the hassle of defining auto-scaling rules and schedules.


 


We are introducing a new platform-managed automatic scaling feature in Azure App Service. Below is a list of key capabilities provided by App Service’s built-in automatic scaling feature:


 



  • The App Service platform will automatically scale out the number of running instances of your application to keep up with the flow of incoming HTTP requests, and automatically scale in your application by reducing the number of running instances when incoming request traffic slows down.

  • Developers can define per web app scaling and control the minimum number of running instances per web app.

  • Developers can control the maximum number of instances that an underlying app service plan can scale out to. This ensures that connected resources like databases do not become a bottleneck once automatic scaling is triggered.

  • Enable or disable automatic scaling for existing app service plans, as well as apps within these plans.

  • Address cold start issues for your web apps with pre-warmed instances. These instances act as a buffer when scaling out your web apps.

  • Automatic scaling works with the existing Premium Pv2 and Pv3 SKUs.

  • Automatic scaling is billed on a per-second basis and uses the existing Pv2 and Pv3 billing meters.


  • Pre-warmed instances are also charged on a per-second basis, using the existing Pv2 and Pv3 billing meters, once they are allocated for use by your web app [for additional details about pre-warmed instances, refer to the Azure CLI section below]




  • Use Azure CLI or ARM templates to enable automatic scaling.


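If you deploy with ARM templates rather than the CLI, the same settings map onto the serverfarm resource. The fragment below is a sketch only: the plan name, SKU and API version are illustrative, and the properties mirror the ElasticScaleEnabled and maximumElasticWorkerCount settings used in the CLI steps later in this post:

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2021-02-01",
  "name": "sampleAppServicePlan",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "P1v3",
    "tier": "PremiumV3"
  },
  "properties": {
    "elasticScaleEnabled": true,
    "maximumElasticWorkerCount": 10
  }
}
```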
Suggested scenarios for automatic scaling:



  • You want your web app to scale automatically without setting up an auto-scale schedule or set of auto-scale rules based on various resource metrics.

  • You want your web apps within the same app service plan to scale differently and independently of each other.

  • A web app is connected to backend data sources like databases or legacy systems which may not be able to scale as fast as the web app. Automatic scaling allows you to set the maximum number of instances your app service plan can scale to. This helps avoid scenarios where a backend is a bottleneck to scaling and is overwhelmed by the web app.


Enable Automatic scaling using Azure CLI:


 


Step 1:


 


This step enables automatic scaling for your existing app service plan and web apps within this plan


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.ElasticScaleEnabled=1 --resource-type Microsoft.Web/serverfarms


 


az resource update -g sampleResourceGroup -n sampleAppServicePlan --set properties.ElasticScaleEnabled=1 --resource-type Microsoft.Web/serverfarms [This enables automatic scaling for the app service plan named “sampleAppServicePlan”]
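To confirm the setting was applied, you can read the property back with az resource show. A sketch using the same sample names as above:

```shell
# Sketch: verify that automatic scaling is enabled on the plan.
az resource show -g sampleResourceGroup -n sampleAppServicePlan \
    --resource-type Microsoft.Web/serverfarms \
    --query properties.elasticScaleEnabled
```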


 


*** In some scenarios, while setting ElasticScaleEnabled=1 for an existing App Service on Linux plan, you may receive an error message (“Operation returned an invalid status ‘Bad Request’”). In such scenarios, follow the steps below:


 



  • Execute the step above using the --debug flag to return details about the error


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.ElasticScaleEnabled=1 --resource-type Microsoft.Web/serverfarms --debug (You can now view a detailed error message, which should be similar to “Message”:”Requested feature is not available in resource group <<Your Resource Group Name>>. Please try using a different resource group or create a new one.”)


 



  • You should now create a new resource group and an app service plan (it is recommended to use the Pv3 SKU for the new app service plan) and then set ElasticScaleEnabled=1


Step 2:


 


This step defines the maximum number of instances that your app service plan can scale out to


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.maximumElasticWorkerCount=** --resource-type Microsoft.Web/serverfarms


 


az resource update -g sampleResourceGroup -n sampleAppServicePlan --set properties.maximumElasticWorkerCount=10 --resource-type Microsoft.Web/serverfarms [This sets the max scale-out limit of the app service plan named “sampleAppServicePlan” to 10 instances]


 


*** The value of maximumElasticWorkerCount should be less than or equal to 30 (the maximum number of instances that a premium SKU app service plan can scale out to)


*** The value of maximumElasticWorkerCount should be greater than or equal to the current instance count (NumberOfWorkers) for your app service plan


 


Step 3:


 


This step sets the minimum number of instances that your web app will always run on (per-app scaling)


 


az resource update -g <<resource group name>> -n <<web app name>>/config/web --set properties.minimumElasticInstanceCount=** --resource-type Microsoft.Web/sites


 


az resource update -g sampleResourceGroup -n sampleWebApp/config/web --set properties.minimumElasticInstanceCount=5 --resource-type Microsoft.Web/sites [This sets the minimum number of instances for the web app named “sampleWebApp” to 5. In this example, the web app named “sampleWebApp” is deployed to the app service plan named “sampleAppServicePlan”]


 


Step 4:


 


This step sets the number of pre-warmed instances readily available for your web app to scale into (buffer instances).


 


*** The default value of “preWarmedInstanceCount” is 1, and for most scenarios this value should remain 1


 


az resource update -g <<resource group name>> -n <<web app name>>/config/web --set properties.preWarmedInstanceCount=** --resource-type Microsoft.Web/sites


 


az resource update -g sampleResourceGroup -n sampleWebApp/config/web --set properties.preWarmedInstanceCount=2 --resource-type Microsoft.Web/sites [This sets the number of buffer instances available for automatic scaling for the web app named “sampleWebApp” to 2]


 


*** Assume that your web app has five always-ready instances (minimumElasticInstanceCount=5) and the default of one pre-warmed instance. When your web app is idle and no HTTP requests are received, the app is provisioned and running with five instances. At this time, you aren’t billed for a pre-warmed instance, as the always-ready instances aren’t used and no pre-warmed instance is allocated. Once your web app starts receiving HTTP requests, the five always-ready instances become active, a pre-warmed instance is allocated, and billing for it starts. If the rate of HTTP requests received by your web app continues to increase, the five active instances are eventually fully used, and when App Service decides to scale beyond five instances, it scales into the pre-warmed instance. When that happens, there are six active instances, and a seventh instance is instantly provisioned to fill the pre-warmed buffer. This sequence of scaling and pre-warming continues until the maximum instance count for the app is reached. No instances are pre-warmed or activated beyond the maximum.


 


Step 5:


 


This step disables automatic scaling for your existing app service plan and web apps within this plan


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.ElasticScaleEnabled=0 --resource-type Microsoft.Web/serverfarms


 


az resource update -g sampleResourceGroup -n sampleAppServicePlan --set properties.ElasticScaleEnabled=0 --resource-type Microsoft.Web/serverfarms [This disables automatic scaling for the app service plan named “sampleAppServicePlan”]


 


FAQ:



  • The App Service automatic scaling feature is currently in early preview.

  • Automatic scaling is currently supported for Azure App Service for Windows and Linux. (App Service for Windows containers and App Service Environments do not support automatic scaling.)

  • Automatic scaling can be configured via Azure CLI and ARM templates only. Azure Portal (UX) support for this feature will be enabled in a future release.

  • Automatic scaling is available only for Azure App Services Premium Pv2 and Pv3 SKUs



  • App Service’s automatic scaling feature is different from Azure Autoscale. Automatic scaling is a new built-in feature of the App Service platform that automatically handles web app scaling decisions for you. Azure Autoscale is a pre-existing Azure feature for defining schedule-based and resource-based scaling rules for your app service plans.

  • Once automatic scaling is configured, existing Azure Autoscale rules and schedules (if any) will not be honored. Applications can use either automatic scaling or Azure Autoscale, but not both. If you disable automatic scaling for your app service plan by setting ElasticScaleEnabled=0, existing Autoscale rules (if any) will apply once again

  • Health check should not be enabled on web apps with this automatic scaling feature turned on. Due to the rapid scaling provided by this feature, the health check requests can cause unnecessary fluctuations in HTTP traffic. Automatic scaling has its own internal health probes that are used to make informed scaling decisions.

  • You can only have Azure App Service web apps in the app service plan where you wish to enable automatic scaling. If you have existing Azure Functions apps in the same app service plan, or if you create new Azure Functions apps, then automatic scaling will be disabled. For Functions it is advised to use the Azure Functions Premium plan instead.

Increase app availability with auto-scaling | Azure Virtual Machine Scale Sets

This article is contributed. See the original author and article here.

Screen Shot 2021-11-18 at 12.11.24 PM.png


 


 


Azure Virtual Machine Scale Sets lets you create and manage a group of virtual machines to run your app or workload, and provides sophisticated load balancing, management, and automation. This is a critical service for creating and dynamically managing thousands of VMs in your environment. If you’re new to the service, this show will get you up to speed; if you haven’t looked at VM Scale Sets in a while, we’ll show you how the service has significantly evolved to help you efficiently architect your apps for centralized configuration, high availability, auto-scaling and performance, cost optimization, security, and more.



 

 


 

 



QUICK LINKS:


00:32 — What is a virtual machine scale set?


00:47 — Centralized configuration options


02:30 — How do scale sets increase availability?


03:54 — How does autoscaling work?


04:58 — Keeping costs down with VM scale sets


05:47 — Building security into your scale set configurations


06:28 — Where you can learn more about VM scale sets


 


Link References:


To learn more, check out https://aka.ms/VMSSOverview


Watch our episode about Azure Spot VMs at https://aka.ms/EssentialsSpotVMs


 


Unfamiliar with Microsoft Mechanics?


We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


Keep getting this insider knowledge, join us on social:






Video Transcript:


-Welcome to Azure Essentials. I’m Matt McSpirit, and in the next few minutes, I’ll give you an overview of Azure virtual machine scale sets, a critical service for creating and dynamically managing thousands of VMs in your environment. Now if you are new to the service this will get you up to speed, or if you haven’t looked at VM scale sets in a while we’ll show you how it has significantly evolved to help you efficiently architect your apps for centralized configuration, high availability, auto-scaling and performance, cost optimization, security, and more.


 


-So, let’s start by addressing what is a Virtual Machine Scale Set in Azure? Well as the name implies, this Azure service lets you create and manage a group of virtual machines to run your app or workload and provides sophisticated load-balancing, management, and automation. VM Scale Sets lays the foundation for centralized and consistent configuration of VMs in your environment. One of the primary functions is to specify a VM template with the characteristics that you need for your apps and workloads to run reliably. This includes: the VM image, with support for Windows and Linux platform images as well as your own custom images, the VM size, your networking parameters, the number of VM instances in the group, and with virtual machine extensions you can also add post-deployment configuration like monitoring, anti-malware and automation.


 


-As you set them up, there are two management modes to deploy your scale sets: Uniform Orchestration, which is optimized for large stateless workloads where your VM instances are identical. Or the newer Flexible orchestration mode, which adds more options: from running workloads with different VM types; or changing your VM sizes without redeploying your scale set; to architecting your scale sets for high availability. And the good news is, they are all easy to set up. You can define your Virtual Machine Scale Set in the Azure Portal as you just saw or with an Azure Resource Manager Template. Of course, if you prefer you can use scripting tools like Azure CLI, PowerShell, and even infrastructure as code tools like Terraform.


 


-Once set up, any new VM added to the scale set will inherit the configurations that you have defined. And it’s easy to make changes across your scale set. For example, with image-based upgrades, when a new version of a custom or marketplace image is made available, Virtual Machine Scale Sets will detect that and start upgrading the VM instances in batches, and you can use protection policies to exclude VMs that you don’t want to upgrade. Or another example of what you can do is to upgrade your existing VMs in one-go to take advantage of the latest and greatest VMs in Azure.


 


-That said, beyond consistent configurations, scale sets are used to distribute your business-critical application across multiple instances to provide high availability. And this is achieved in a number of ways. For example, you can automatically distribute up to 1,000 VM instances between availability zones in minutes. This gives you utmost availability, up to 99.99%, and helps you to mitigate any possible datacenter wide issues. Availability zones are offered in most geographies and represent physically separate locations in an Azure region composed of one or more datacenters with independent power, cooling, and networking. VMs can be automatically spread across fault domains in a region, or you can specify a fault domain as part of your VM deployment, which makes it easier to replace VMs. Now this is especially relevant for open-source databases like Cassandra or other quorum-based applications.


 


-Of course, you also have the option to replicate your VM instances to another Azure region for failover compute. And for storage redundancy, you can also back up data disks using Azure Backup. Beyond hardware failure resilience measures, to get ahead of issues before they impact your operations, you can install the application health extension on each VM instance, so that your app or workload can report application-specific health metrics to Azure. And once you enable automatic instance repair, Azure will automatically remove and replace instances in an unhealthy state, to maintain high availability.


 


-As you architect for availability with Azure VM Scale Sets you can of course also scale your applications on demand while increasing performance. Scale sets integrate with Azure load balancer for basic layer-four traffic distribution and Azure Application Gateway for more advanced layer-seven traffic distribution. This helps you to easily spread your incoming network traffic across the VMs in your scale sets. Which in turn helps you build scalable solutions while maintaining high levels of performance.


 


-You can also configure your VM scale set to auto-scale. For example, if you’re running an e-commerce site you may need to scale your front end in response to some event, like a holiday sales spike. Azure will automatically add and subtract VM instances in response to demand so that there is no decline in your app or workload experience. Under scaling, you can use metric-based auto-scaling rules and define thresholds that trigger an increase in VM instances to scale out. And likewise, you can set similar thresholds for when to scale in, taking into account a specified cool down period which allows for a buffer of time before the scale in action is triggered.
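For readers who want to try the metric-based rules just described, the following Azure CLI sketch creates an autoscale profile and a pair of scale-out/scale-in rules for a scale set. The resource names are placeholders, and the thresholds and cooldown are illustrative examples, not recommendations:

```shell
# Sketch: autoscale profile for a scale set, 2 to 10 instances, default 2.
az monitor autoscale create \
    --resource-group myResourceGroup \
    --resource myScaleSet \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name myAutoscaleProfile \
    --min-count 2 --max-count 10 --count 2

# Scale out by one instance when average CPU exceeds 75% over 5 minutes.
az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name myAutoscaleProfile \
    --condition "Percentage CPU > 75 avg 5m" \
    --scale out 1

# Scale back in when average CPU drops below 25%, with a 10-minute cooldown.
az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name myAutoscaleProfile \
    --condition "Percentage CPU < 25 avg 5m" \
    --scale in 1 --cooldown 10
```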


 


-And of course, you can manually scale out and in as you need to. The ability to dynamically scale your VM pool also brings numerous efficiencies as you run your workloads on Azure, because instead of pre-provisioning VMs you’re only paying for the compute resources your application needs. And for even more savings, for your interruptible workloads, you also have the flexibility of using Azure Spot VMs that take advantage of spare compute capacity in Azure as and when it’s available.


 


-You can also mix and match Azure Spot VMs with regular on-demand VMs. And if you’re worried about Spot VM evictions, the try-to-restore feature in Azure Virtual Machine Scale Sets will automatically try to restore an evicted Spot VM and maintain the target VM instance count in your scale set. In fact, we covered Spot VMs as part of your cost optimization strategy in our last Essentials overview, which you can watch at aka.ms/EssentialsSpotVMs.


 


-Next, Virtual Machine Scale Sets help you improve the security posture of your applications by keeping them up-to-date. Upgrades can be performed automatically, in random order, manually, or using rolling upgrades in defined batches. In addition to image upgrades, you can also do automatic VM guest patching for critical and security updates, and this helps to ease management by safely and automatically patching virtual machines to maintain security compliance. Patch orchestration is managed by Azure and updates are rolled out sequentially across VMs in the scale set to avoid application downtime. You can also force updates on-demand. And with Automatic Extension Upgrades, critical updates are applied as they become available from publishers.


 


-So that was a quick overview of Azure Virtual Machine Scale Sets and how they can help you to create and deploy thousands of VMs in minutes. The metrics and template-based approach helps you to consistently architect your apps and workloads for auto-scaling, availability, and performance, giving you the control that you need. This lets you focus on your app instead of the complexities of managing your infrastructure. And to learn more visit aka.ms/VMSSOverview and keep watching Microsoft Mechanics for more in the series, bye for now!




Azure SQL and Azure Purview work better together

This article is contributed. See the original author and article here.

Azure Purview lets you govern Azure SQL Databases at scale, and with ease. The following details how to register and scan your Azure SQL Database, along with how to extract lineage to view and analyze how data is being transformed. It also describes how to discover assets easily by grouping Azure SQL Database schemas and tables into Purview collections.


Register and scan
Navigate to your Purview account and click on the Data Map section to the left. You can view your data estate map and choose to view your sources in table format as well.


VishalAnil_0-1637167931506.png


 


Purview now supports more than 20 source types, ranging from Azure SQL Database to AWS S3 to Oracle Database. Sources can be registered in two ways: by clicking the Register button on the top left, or by navigating to the collection that you’d like to register the source to and clicking the Register quick action icon. Then click the Azure SQL Database source tile and fill in the required details.


VishalAnil_1-1637167931524.png


 


As part of the required details, register your source to a collection of interest. In our example, we register the source to the Finance collection.


VishalAnil_2-1637167931534.png


 


Once your source is registered, the next step is to set up a scan. While setting up your scan, fill in the details for the integration runtime, database name, and credential. You can also associate your scan with a collection; in our example, it’s the Audit collection under Finance. You can then scope your scan to only the Audit tables, to ensure all assets are scanned into the catalog with the right collection associated for discovery and access control.


VishalAnil_3-1637167931545.png


 


See results of the scan by clicking on View details for your source.


VishalAnil_4-1637167931561.png


 


Lineage extraction (preview)
While setting up your scan, you can now extract lineage from stored procedures and other artifacts in your Azure SQL Database source.


Learn more on how to get onboarded to the Preview program here.


VishalAnil_5-1637167931564.png


 


 


Discover—search and browse for your Azure SQL Database tables
Once a scan completes, you can discover assets either via search or browse. To search, enter keywords in the search bar on the top of the Purview studio and narrow down results by the facet filters Purview provides.


To browse, click on the browse assets tile on the catalog home page, navigate to the By collection tab and navigate to the collection that you scanned assets into. In our example, it would be Audit. If you have access to this collection, click on it to browse for your assets.


VishalAnil_6-1637167931572.png


 


 


Add business metadata to your Azure SQL database assets
You can also navigate to one of your Azure SQL tables and view details. To aid in discoverability and compliance, add descriptions and business glossary terms by clicking on the Edit button.


VishalAnil_7-1637167931580.png


 


 


Insights (preview)
Finally, view all your Azure SQL Database-related insights around assets, scans, glossary, classification, and labels by navigating to the Insights section of Purview.


VishalAnil_9-1637168039872.png


 


 


Get started today!



  • Quickly and easily create an Azure Purview account to try the generally available features.

  • Read documentation on how to register and scan an Azure SQL Database in Azure Purview.


 


 


 

Announcing the public preview of Microsoft Defender for Endpoint Mobile – Tamper protection

This article is contributed. See the original author and article here.

Mark a device non-compliant after 7 days of inactivity in the Microsoft Defender for Endpoint mobile app.


To be protected, customers must be confident that their end users’ devices are compliant with security policies. Today, end users are often able to bypass protections that are set by their organization. For example, users uninstall, disable settings/permissions, and force stop or clear storage of their Defender for Endpoint mobile app. Removing or disabling the Defender for Endpoint app can leave a mobile device more vulnerable to an attack.


We are excited to announce the public preview of tamper protection for mobile devices. This new feature helps ensure the retention of the Defender for Endpoint mobile app on users’ devices and helps protect devices persistently. It detects devices that have been out of protection for over 7 days due to tampering with the Defender for Endpoint mobile app. These devices are marked non-compliant in Microsoft Intune (part of Microsoft Endpoint Manager).


 


Organizations can also set up Conditional Access policies to enforce the activation and use of the Defender for Endpoint mobile app. With these Conditional Access policies in place, users can access corporate resources only if their devices are in a compliant state. Blocked users can regain access only after the Defender for Endpoint mobile app is set up with all required permissions and the app is actively sending signals to Defender for Endpoint.


 


For this initial release, detection is scoped to devices that have been out of protection for 7 days. In upcoming releases, we plan to make this duration configurable by your security admin or your tenant admin.


 


How to get and configure this feature



  1. Share your Organization Tenant name and Tenant ID with Microsoft at atpm@microsoft.com, to be added to the public preview of this feature.

  2. Set up a Device compliance policy that requires Defender for Endpoint to be at or under the following machine risk score: Low (Your risk score can be set per your organization’s requirements)

  3. Set up a Conditional Access policy to block access to corporate resources on devices that are non-compliant with your device compliance policy.


Try out tamper protection for mobile devices and let us know how it goes! We’re excited to share these new updates with you and to continue building on security capabilities across platforms.


 


We look forward to hearing your feedback!