by Contributed | Sep 23, 2020 | Uncategorized
This article is contributed. See the original author and article here.
Java has a rich history in enterprise applications and has evolved over the past two decades into a rich ecosystem of open source projects, open standards, and innovation from individuals, start-ups, and companies of all sizes. With the maturity of cloud computing, advancements such as automated operations for hybrid and multi-cloud are needed now more than ever. Kubernetes is top of mind for many teams migrating traditional on-premises deployments to the cloud. There are tradeoffs, however, around new platforms, methodology, best practices, budgetary considerations, and modernization options, to name a few. Great, so what's next?
The J4K conference was conceived to help folks navigate their way through a mix of options. J4K is running four tracks to help developers enrich their experiences and skillsets.
2020 session tracks:
- Applications on cloud, containers and K8s (real-life examples)
- Frameworks and architectures on cloud, containers and K8s
- Developer tools for cloud, containers and K8s
- Practices and other technologies
There will be 50 exciting sessions, with keynotes by Mark Little, Red Hat VP of Engineering, and Brendan Burns, Microsoft CVP of Azure Engineering. To welcome the community and make you feel at home, we have Heather VanCura, Chairperson and Director of the Java Community Process (JCP), and Josh Long, Spring Developer Advocate at VMware. Just because it's 2020 and in-person conferences are a thing of the past in this strange new world, you shouldn't miss a beat in personal growth and community fun. Join J4K with Microsoft to help us celebrate and move Java on Kubernetes forward for all.
Check out Microsoft’s keynote and sessions:
- Building portable cloud native applications with Java, Kubernetes and dapr.io
- Keynote by Brendan Burns | Wed Oct 14, 2020 | 9:00 AM – 9:45 AM
- Which as-a-Service is right for your Java apps?
- Session by Christina Compy & Theresa Nguyen | Wed Oct 14, 2020 | 3:00 PM – 3:45 PM
- Memory Efficient Java
- Session by Kirk Pepperdine | Tue Oct 13, 2020 | 4:00 PM – 4:45 PM
- The Diabolical Developer’s Guide to Picking Your Java
- Session by Martijn Verburg | Wed Oct 14, 2020 | 1:00 PM – 1:45 PM
- Run Spring Microservices at Enterprise Scale – Azure Spring Cloud
- Session by Asir Vedamuthu Selvasingh | Tue Oct 13, 2020 | 4:00 PM – 4:45 PM
J4K.io is a free event for our communities to learn about Java on Kubernetes, network with community leaders and developers, collaborate on open-source projects and gain insight into industry innovations. We’ll have a Community Leaderboard, Photo and Poll Questions contests during the event with prizes in addition to other exciting opportunities to win. Join us today by registering at https://aka.ms/join-j4k and feed your brain with Java and Kubernetes goodness. See you next month!
by Contributed | Sep 23, 2020 | Azure, Technology, Uncategorized
Today, we are proud to announce the public preview of the Azure Schema Registry in Azure Event Hubs.
In many event streaming and messaging scenarios, the event or message payload contains structured data that is serialized and deserialized using a schema-driven format like Apache Avro, or whose integrity both communicating parties want to validate against a schema document, as with JSON Schema.
For schema-driven formats, making the schema available to the message consumer is a prerequisite for the consumer being able to deserialize the data at all.
The Azure Schema Registry that we are offering in public preview as a feature of Event Hubs provides a central repository for schema documents for event-driven and messaging-centric applications, and enables applications to adhere to a contract and flexibly interact with each other. The Schema Registry also provides a simple governance framework for reusable schemas and defines the relationship between schemas through a grouping construct.
With schema-driven serialization frameworks like Apache Avro, externalizing serialization metadata into shared schemas can also dramatically reduce the per-message overhead of the type information and field names that tagged formats such as JSON include with every data set. Having schemas stored alongside the events and inside the eventing infrastructure ensures that the metadata required for serialization and deserialization is always within reach and that schemas cannot be misplaced.
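To make this concrete, here is a minimal Avro schema (a hypothetical sketch) for an `Employee` record like the one used in the code samples later in this article. With a tagged format such as JSON, every message repeats the field names; with Avro, the payload carries only the encoded values, and this shared schema supplies the names and types:

```json
{
  "type": "record",
  "name": "Employee",
  "namespace": "com.example",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "age",  "type": "int" }
  ]
}
```

A serialized `Employee` message is then just a few bytes of encoded values; the consumer retrieves this schema from the registry to reconstruct the record.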
What do I need to know about the Azure Schema Registry support in Event Hubs?
Event Hubs adds schema registry support as a feature of every Event Hubs Standard namespace and of Event Hubs Dedicated.
Inside the namespace, you can now manage “schema groups” alongside the Event Hub instances (“topics” in Apache Kafka® API parlance). Each of these schema groups is a separately securable repository for a set of schemas. Schema groups can be aligned with a particular application or an organizational unit.
The security boundary imposed by the grouping mechanism helps ensure that trade secrets do not inadvertently leak through metadata in situations where the namespace is shared among multiple partners. It also allows application owners to manage schemas independently of other applications that share the same namespace.
Each schema group can hold multiple schemas, each with a version history. Compatibility enforcement, which can help ensure that newer schema versions are backwards compatible, can be set at the group level as a policy for all schemas in the group.
For this public preview, an Event Hubs Standard namespace can host 1 schema group and 25 schemas. Event Hubs Dedicated can already host 1,000 schema groups and 10,000 schemas.
Although it is hosted inside Azure Event Hubs, the schema registry can be used universally with all Azure messaging services and any other message or event broker.
For the public preview, we provide SDKs for Java, .NET, Python, and Node.js that can serialize and deserialize payloads using the Apache Avro format from and to byte streams or arrays, and that can be combined with any messaging and eventing service client. We will add further serialization formats with the general availability of the service.
The Java client’s Apache Kafka client serializer for the Azure Schema Registry can be used in any Apache Kafka® scenario and with any Apache Kafka® based deployment or cloud service.
The figure below shows the information flow of the schema registry with Event Hubs using the Apache Kafka® serializer as an example:
Azure Schema Registry
How does the Azure Schema Registry integrate with my application?
First, you need to create an Event Hubs namespace, create a schema group, and seed the group with a schema. This can either be done through the Azure portal or through a command line client, but the producer client can also dynamically submit a schema. These steps are explained in detail in our documentation overview.
If you use Apache Kafka® with the Java client today and are already using a schema-registry-backed serializer, you will find it trivial to switch to the Azure Schema Registry's Avro serializer just by modifying the client configuration:
```java
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
    com.microsoft.azure.schemaregistry.kafka.avro.KafkaAvroSerializer.class);
props.put("schema.registry.url", azureSchemaRegistryEndpoint);
props.put("schema.registry.credential", azureIdentityTokenCredential);
```
You can then use the familiar, strongly typed serialization experience for sending records, and schemas will be registered and made available to consumers as you expect:
```java
KafkaProducer<String, Employee> producer =
    new KafkaProducer<String, Employee>(props);
producer.send(new ProducerRecord<>(topic, key, new Employee(name)));
```
The Avro serializers for all Azure SDK languages implement the same base class or pattern that is used for serialization throughout the SDK and can therefore be used with all Azure SDK clients that already have or will offer object serialization extensibility.
The serializer interface turns structured data into a byte stream (or byte array) and back. It can therefore be used anywhere and with any framework or service. Here is a snippet that uses the serializer with automatic schema registration in C#:
```csharp
var employee = new Employee { Age = 42, Name = "John Doe" };
string groupName = "<schema_group_name>";
using var memoryStream = new MemoryStream();
var serializer = new SchemaRegistryAvroObjectSerializer(
    client, groupName,
    new SchemaRegistryAvroObjectSerializerOptions { AutoRegisterSchemas = true });
serializer.Serialize(memoryStream, employee, typeof(Employee), CancellationToken.None);
```
Open Standards
Even though schema registries are increasingly popular, there has so far been no open, vendor-neutral standard for a lightweight interface optimized for this use case.
The most popular registry for Apache Kafka® is external to the Kafka project and vendor-proprietary. Even though it still appears to be open source, the vendor has switched from the Apache license to a restrictive license that dramatically limits who can modify the code and who can run the registry in a managed cloud environment. Cloud customers are locked hard into that vendor's cloud offering, in spite of the open-source "community" façade.
Just as with our commitments to open messaging and eventing standards like AMQP, MQTT, JMS 2.0 and CloudEvents, we believe that application code that depends on a schema registry should be portable across products and environments, and that customers should be able to use the same client code and protocol even if the schema registry implementation differs.
Therefore, Microsoft submitted the interface of the Azure Schema Registry to the Cloud Native Computing Foundation's "CloudEvents" project in June 2020.
The purpose of the submission is joint standardization in the CloudEvents Working Group (WG), which exists to jointly develop interoperable federations of eventing infrastructure, so that the same interface can be implemented by many vendors and clients can uniformly access their "nearest" schema registry. The schema registry interface is not restricted to or in any way dependent on the CloudEvents specification; in the other direction, the CloudEvents dataschema and datacontenttype attributes already anticipate the use of a schema registry for payload encoding.
The schema registry definition in CNCF CloudEvents is expected to still evolve in the working group, and we are committed to providing a compliant implementation with the 1.0 release within a short time of it becoming final.
Notes:
- The Schema Registry feature is currently in preview and is available only in standard and dedicated tiers, not in the basic tier.
- This preview is initially available in the West Central US region and is expected to be available in all other global regions by September 29, 2020.
Next Steps
Learn more about the Azure Schema Registry in Event Hubs in the documentation.
by Contributed | Sep 23, 2020 | Azure, Technology, Uncategorized
Running SQL Server on Azure Virtual Machines provides a suite of free manageability capabilities that are available only on Azure and that make it easier to run in a cost-effective, secure, and optimized manner. Microsoft introduced the Azure SQL family of database services to provide customers with a consistent, unified experience across the entire SQL portfolio and a full range of deployment options from edge to cloud. As a SQL Server customer, you can migrate your SQL workloads to SQL Server on Azure Virtual Machines while making the most of your current SQL Server license investments and taking advantage of the manageability benefits that SQL Server virtual machines offer today. Running SQL Server on an Azure virtual machine provides the best lift-and-shift experience for workloads where OS-level access is required.
A SQL Server instance on an Azure virtual machine is the cloud equivalent of an on-premises SQL Server, running your mission-critical applications on Azure. Azure offers unique Windows Server and SQL Server licensing benefits through Azure Hybrid Benefit, giving you the ability to run the same SQL Server with a better total cost of ownership (TCO), low effort, and great performance. Azure also offers the ability to automatically manage your Windows Server machines using our recently announced Azure Automanage preview offering. Read more about it here.
How can you leverage these benefits?
All of the security, manageability and cost-optimization benefits mentioned above are enabled through the SQL Server IaaS Agent Extension. To ensure that you’re receiving full manageability benefits, this extension can be installed on a SQL Server virtual machine that is already running or any new SQL Server virtual machine that you create in the future.
The screenshot below shows the different manageability options that are enabled when the SQL Server IaaS extension is enabled.
SQL IaaS Extension Capabilities
Optimize Cost
The SQL Server IaaS extension enables administrators to reduce cost and manage inventory with less effort. There are a number of cost optimization capabilities that a SQL Server administrator can leverage with SQL Server virtual machines which are made possible through our “License type” feature. As an administrator, you will have the ability to:
- See all your SQL Server virtual machine deployments in a single dashboard to make inventory management easier
- Get a snapshot of how many SQL Server on Azure Virtual Machines are leveraging Azure Hybrid Benefit
- Switch between Pay-as-you-go and Azure Hybrid Benefit licensing models to optimize the use of your licenses in the cloud
- Leverage FREE passive SQL Server core benefits that enable using Azure as a Disaster Recovery site for your on-premises SQL Server at no additional license cost for customers with Software Assurance or SQL Server subscription licenses
- Leverage FREE passive SQL Server core benefits for High Availability and Disaster Recovery scenarios for primary replicas hosted in Azure for customers with Software Assurance or SQL Server subscription licenses
- Run SQL Server Reporting Services virtual machines with Pay-as-you-go licensing or Azure Hybrid Benefits
The License type for a SQL Server virtual machine provides three options: Pay As You Go, Azure Hybrid Benefit and Disaster Recovery (as seen in the screenshot below). The Disaster Recovery toggle gives you the ability to leverage the free Disaster Recovery license type benefit without having to track the deployments separately, making license management and inventory a lot easier.
Configure hybrid DR using free SQL Server VM licenses
This benefit applies to all releases of SQL Server starting from SQL Server 2008 to SQL Server 2019 as long as you are using licenses covered with Software Assurance or subscription licenses. You can learn more about these benefits here.
As a SQL Server user, you will have the ability in the near future to deploy a SQL Server Reporting Services virtual machine for your BI reporting requirements and also leverage a flexible licensing model through Pay-as-you-go and Azure Hybrid Benefit licensing. Also, you can deploy your Power BI Report Server virtual machines with Azure Hybrid Benefit licenses which makes it easier to rehost your relational and reporting workloads into Azure.
Enhance Security
Security is a key area of interest for any database administrator responsible for securing and protecting business data stored in a SQL Server database. Sometimes compliance requirements drive security needs, such as encryption of data at rest. SQL Server on Azure Virtual Machines makes security management easier through automated patching and easy-to-implement encryption features for better security and compliance.
The Automated Patching feature allows a SQL Server administrator to select a maintenance window schedule for applying Important Windows Server and SQL Server updates that are distributed through the Windows Update channel. You have the ability to select the maintenance window duration and the start of the window.
Patch your Azure SQL VM using scheduled windows
The SQL connectivity feature allows you to do the following:
- Set the port for the SQL Server instance so that it listens on a port other than the well-known default.
- Configure connectivity rules anywhere from the most restrictive (local connectivity only) to open to the public internet, so that external applications and clients can connect to the SQL Server instance (see screenshot below).
- Enable SQL Authentication for the SQL Server instance if your applications and users require this authentication method.
- Configure Azure Key Vault for the SQL Server instance to leverage Key Vault with the Transparent Data Encryption, Column Level Encryption, and Always Encrypted features of SQL Server, enabling encryption of data at rest and in motion.
Increase Uptime
SQL Server virtual machines provide various management capabilities to make configuration of the environment a lot easier for business continuity and disaster recovery scenarios.
The new High Availability feature allows you to create a new cluster or onboard an existing cluster, and then create the availability group, listener, and internal load balancer.
Configure High Availability easily with Azure SQL VM
Additionally, SQL Server on Azure Virtual Machines also helps reduce the cost of deploying a highly available environment if you have Software Assurance or are using SQL Server subscription licenses. Consider an example where you run a SQL Server virtual machine topology with one primary replica, one synchronous passive secondary replica, and one asynchronous passive replica, each with the same number of cores. You would need to license only the primary replica. The example uses an Always On availability group as the high availability and disaster recovery feature, but you can leverage this benefit with other SQL Server features such as failover cluster instances, log shipping, database mirroring, and backup and restore. More details are available here.
The Automated Backup feature allows you to set up SQL Server backups with various options: encrypting backups, setting a retention period, backing up system databases, and configuring a manual or automated backup schedule. This is great for SQL Server virtual machines where you don't want to attach backup software but simply want to back up all the databases on the instance to support point-in-time restore, whether for creating copies of the database environment or for recovering from user errors.
Manageability
SQL Server on Azure Virtual Machines provides a number of free manageability benefits which make administration tasks for a SQL Server instance a lot easier. In addition, a number of best practices are surfaced through the wizards so that SQL Server virtual machines run with the most optimal configurations.
SQL Server on Azure Virtual Machines provides the ability to simplify storage configuration while setting up the virtual machine through the use of pre-configured storage profiles as seen in the screenshot below. This ensures that you are picking the right storage configuration for your data, log and tempdb files. You can read more about it here.
Configure storage settings for your Azure SQL VM
Once the SQL Server virtual machine is running, you have the option of increasing the storage capacity of your disks, and the wizard will help you determine whether you are picking a configuration that could run into a capacity limit (see the highlighted section in the screenshot below).
Configure your storage settings on Azure SQL VM
You also have the ability to manage your SQL Server virtual machines through the SQL Server IaaS extension using the Azure CLI or PowerShell. Azure Runbooks or Azure Automation scripts can be created to apply these benefits at scale across a fleet of SQL Server virtual machines.
If you leverage in-database machine learning for your SQL Server instances, SQL Server virtual machines provide the ability to configure advanced analytics for your instance.
Next Steps
Running SQL Server in an Azure virtual machine gives you the same capabilities and experience you are used to with on-premises SQL Server, plus the additional ease of use and management benefits available only on Azure. Ensure that your virtual machines are running the SQL Server IaaS Extension to enable all these benefits, and check out the following links for more information about running SQL Server in Azure virtual machines.
by Scott Muniz | Sep 23, 2020 | Azure, Technology, Uncategorized
Azure Synapse Analytics brings the worlds of data integration, big data, and enterprise data warehousing together into a single service for end-to-end analytics, at cloud scale. This week at Microsoft Ignite we announced several features that bring accelerated time to insight via new built-in capabilities for both data exploration and data warehousing.
As we dive into each new feature, we will use the terminology below to identify where the feature is applicable. For the SQL capabilities in Azure Synapse Analytics, the main resource used is called a SQL pool. This resource has two consumption models: serverless and dedicated. The serverless model provides transparent compute consumption and is billed per data processed. The dedicated model allows the use of dedicated compute, comes with a capacity model, and is billed per DWU consumed. This new terminology will appear in the product soon.

Accelerate time to insight with:
- Power BI performance accelerator for Azure Synapse Analytics (private preview)
Last year, when we announced Azure Synapse Analytics, we promised to bring Microsoft's data and BI capabilities together to deliver optimized experiences for our users. Today, we continue expanding on that promise with the announcement of the Power BI performance accelerator for Azure Synapse Analytics, a new self-managed process that enables automatic performance tuning for workloads and queries run in Power BI.
As Power BI users run their queries and reports, the performance accelerator monitors those queries behind the scenes and optimizes their execution, significantly improving query response times over the latest data. It analyzes all Power BI queries holistically and intelligently creates materialized views within the SQL engine, recognizing common query join and aggregation patterns. As Power BI queries continue to execute, they are automatically sped up, and users observe increased query performance leading to quicker business insights. As new data is ingested into SQL tables, materialized views are automatically refreshed and maintained. Best of all, as more and more queries are executed, the performance accelerator adjusts the deployed materialized views to fine-tune their design, all while reducing query execution times.
This feature can be enabled with a few clicks within the Synapse Studio. You can simply choose the frequency for executing the process and set the maximum storage to manage the size of the system-generated materialized views and it’s ready to start optimizing your Power BI workload.

The Power BI performance accelerator for Azure Synapse Analytics delivers a zero-management experience. It helps system administrators manage materialized views while allowing Power BI users to gain quick and up-to-date business insights.
This feature applies to the dedicated model. To participate, submit your request here.
- Azure Synapse Link for Azure Cosmos DB now includes Synapse SQL (public preview)
Azure Synapse Link connects operational data stores with high performance analytics engines in Azure Synapse Analytics. Using Synapse Link, customers can perform near real-time analytics directly over their data managed in Azure Cosmos DB without impacting the performance of their operational workloads.
Today, we are announcing the public preview of Azure Synapse Link for Azure Cosmos DB using Synapse SQL. This functionality is now available to all customers and is deployed worldwide. Customers can now use a serverless SQL pool in Azure Synapse Analytics to perform interactive analytics over Azure Cosmos DB data enabling quick insights and exploratory analysis without the need to employ complex data movement steps. Thanks to the rich T-SQL support for analytical queries and automatic schema discovery for data, it has never been easier to explore operational data by running ad-hoc and advanced analytical queries. Best of all, due to the rich and out-of-the-box ecosystem support, tools such as Power BI – and others – are just a few clicks away.

This feature applies to the serverless model. To learn more, visit the Azure Synapse Link for Azure Cosmos DB documentation.
Note: this functionality will become available in the next few weeks.
- Enhanced support for analyzing text delimited files (public preview)
Despite the availability and popularity of columnar file formats optimized for analytics, such as Parquet and ORC, most newly generated and legacy data is still in delimited text formats. With this in mind, we are continuously improving the experience for delimited text data. To support immediate and interactive exploration of this text data, the following enhancements are being introduced:
– Fast parser: The new delimited text parser (CSV version 2.0) provides significant performance improvement, ranging from 2X (querying smaller files) to up to 10X or more (querying larger files). This new performance improvement, based on novel parsing techniques and multi-threading, is available to all existing and newly provisioned Azure Synapse workspaces.
– Automatic schema discovery: With automatic schema discovery, the OPENROWSET function can be used with CSV files without the need to define an expected schema. As the system automatically derives the schema from the data being queried, users can focus on the needed insights, leading to faster and easier data exploration.
– Transform as CSV: We have extended support for the CREATE EXTERNAL TABLE AS SELECT statement to enable storing query results in the delimited text format. This functionality enables multi-stage data transformation to be performed while keeping the data in delimited text format throughout its lifecycle.
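As a sketch of how these pieces fit together (the storage paths, data source, and file format names below are placeholders, not names from this announcement), a serverless SQL pool can query CSV files with the fast parser and automatic schema discovery, and CREATE EXTERNAL TABLE AS SELECT can write transformed results back as delimited text:

```sql
-- Query CSV directly; PARSER_VERSION = '2.0' selects the fast parser,
-- and column names/types are inferred automatically (no WITH clause needed).
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://myaccount.dfs.core.windows.net/mycontainer/raw/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS rows;

-- Store transformed results back to the lake in delimited text format.
CREATE EXTERNAL TABLE cleaned.Sales
WITH (
    LOCATION = 'cleaned/sales/',
    DATA_SOURCE = MyDataLake,       -- an existing external data source
    FILE_FORMAT = CsvFileFormat     -- a DELIMITEDTEXT external file format
)
AS
SELECT *
FROM OPENROWSET(
    BULK 'https://myaccount.dfs.core.windows.net/mycontainer/raw/*.csv',
    FORMAT = 'CSV', PARSER_VERSION = '2.0', HEADER_ROW = TRUE
) AS rows;
```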

This feature applies to the serverless model. To learn more, visit the Azure Synapse SQL documentation.
Improve data loading performance and ease of use with:
- COPY command (Generally Available)
Loading data into your data warehouse may not always be the easiest task. Defining the proper table structure to host your data, data quality problems, handling incorrect data and errors, and ingestion performance are among some of the typical issues customers face. We designed the COPY command to tackle these problems. The COPY command has become the default utility for loading data into data warehouses within Azure Synapse Analytics. In addition to bringing the COPY command into General Availability state, we have also added the following features:
– Automatic schema discovery: Defining and mapping source data into target tables is a cumbersome process, especially when tables contain large numbers of columns. To help with this, we are introducing built-in auto-schema discovery and an auto-table creation process (the auto_create_table option, in preview within COPY). When used, the system automatically creates the target table based on the schema of the Parquet files.
– Complex data type support: COPY command now supports loading complex data types stored in Parquet files which eliminates the previous need to manage multiple computes. When used together with the automatic schema discovery option, complex data types will automatically be mapped to nvarchar columns.
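A hedged sketch of the COPY command with the preview auto-table-creation option (the account, container, and table names are placeholders):

```sql
COPY INTO dbo.Sales
FROM 'https://myaccount.blob.core.windows.net/mycontainer/sales/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    AUTO_CREATE_TABLE = 'ON',  -- preview: infer the schema from the Parquet
                               -- files and create dbo.Sales automatically
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```

With AUTO_CREATE_TABLE off (the default), the target table must already exist, which is the classic loading flow this option is meant to simplify.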
These new functionalities are also supported in partner products. Azure Stream Analytics, Azure Databricks, Informatica, Matillion, Fivetran, and Talend are among the products and services that support the new COPY command.
This feature applies to the dedicated model. To learn more, visit the COPY documentation.
- Fast streaming ingestion (Generally Available)
With the rise of IoT devices, both the amount and velocity of the data produced has increased dramatically. To make that data available for analysis and to reduce the time it takes to load and query this data within your data warehouse environments, we are announcing the General Availability of high throughput streaming data ingestion (and inline analytics) to dedicated SQL pools in Azure Synapse using Azure Stream Analytics. This new connector can handle ingestion rates exceeding 200MB/sec while ensuring very low latencies.
With Azure Stream Analytics, in addition to high-throughput ingress, customers can use SQL to run inline analytics such as JOINs, temporal aggregations, filtering, real-time inferencing with pre-trained ML models, pattern matching, geospatial analytics, and much more. It supports common formats such as JSON and offers custom deserialization capabilities to ingest and analyze any custom or binary streaming data format. More details can be found in the announcement blog.
This feature applies to the dedicated model. To learn more about high-throughput streaming ingestion, visit our documentation.
Secure your sensitive data using:
- Column-level Encryption (public preview)
As data gets moved to the cloud, securing your data assets is critical to building trust with your customers and partners. Azure Synapse Analytics already provides a breadth of options that can be used to handle sensitive data in a secure manner. We are expanding that support with the introduction of Column Level Encryption.
Column-level encryption (CLE) helps you implement fine-grained protection of sensitive data within a table (server-side encryption). With CLE, customers gain the ability to use different protection keys for different columns in a table, with each key having its own access permissions. The data in CLE-enforced columns is encrypted on disk, and remains encrypted in memory, until the DECRYPTBYKEY function is used to decrypt it. Azure Synapse Analytics supports using both symmetric and asymmetric keys.
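A minimal sketch of server-side column-level encryption with a symmetric key (the key, certificate, table, and column names are illustrative, not from this announcement):

```sql
-- One-time setup: a certificate-protected symmetric key.
CREATE CERTIFICATE SsnCert WITH SUBJECT = 'Protects the SSN column';
CREATE SYMMETRIC KEY SsnKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE SsnCert;

-- Encrypt on write; the column stays encrypted on disk and in memory.
OPEN SYMMETRIC KEY SsnKey DECRYPTION BY CERTIFICATE SsnCert;
UPDATE dbo.Employees
SET SsnEncrypted = ENCRYPTBYKEY(KEY_GUID('SsnKey'), Ssn);

-- Decrypt only when needed, for principals with access to the key.
SELECT CONVERT(varchar(11), DECRYPTBYKEY(SsnEncrypted)) AS Ssn
FROM dbo.Employees;
CLOSE SYMMETRIC KEY SsnKey;
```

Because each key carries its own access permissions, different columns in the same table can be protected for different sets of principals.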
This feature applies to the dedicated model. To learn more, visit the Column Level Encryption documentation.
Improve productivity with expanded T-SQL support:
- MERGE support (public preview)
During data loading processes, there is often a need to transform, prepare, and consolidate data from disparate data sources into a target table. Depending on the desired table state, data needs to be either inserted, updated, or deleted. Previously, this could be implemented with the supported T-SQL dialect, but it required multiple queries, which was costly and error-prone. With the new MERGE support, Azure Synapse Analytics addresses this need: users can now synchronize two tables in a single statement, streamlining data processing while improving code readability and debugging.
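As a sketch, a single MERGE statement can perform the insert, update, and delete in one pass (the table and column names are illustrative):

```sql
MERGE INTO dbo.DimCustomer AS target
USING stg.Customer AS source
    ON target.CustomerId = source.CustomerId
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name, target.City = source.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name, City)
    VALUES (source.CustomerId, source.Name, source.City)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

Each row is matched once and routed to exactly one action, which replaces the separate UPDATE, INSERT, and DELETE statements previously required.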
This feature applies to the dedicated model. For more details, see our MERGE documentation.
Note: this functionality will become available in the next few weeks.
- Stored procedures support (public preview)
Stored procedures have long been a popular method for encapsulating data processing logic and storing it in a database. To enable customers to operationalize their SQL transformation logic over the data residing in their data lakes, we have added stored procedures support to our serverless model. These data transformation steps can easily be embedded when doing data ingestion with Azure Synapse, and other tools, for repeatable and reliable execution.
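For example (the storage path and object names below are placeholders), transformation logic over files in the lake can be wrapped in a procedure and re-run reliably from any orchestration step:

```sql
CREATE PROCEDURE dbo.usp_DailyRevenue
AS
BEGIN
    -- Aggregate directly over CSV files in the data lake.
    SELECT CAST(OrderDate AS date) AS Day, SUM(Amount) AS Revenue
    FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/mycontainer/orders/*.csv',
        FORMAT = 'CSV', PARSER_VERSION = '2.0', HEADER_ROW = TRUE
    ) AS orders
    GROUP BY CAST(OrderDate AS date);
END;

-- Operationalize: every caller runs the same, versioned logic.
EXEC dbo.usp_DailyRevenue;
```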
This feature applies to the serverless model.
Note: this functionality will become available in the next few weeks.
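A minimal sketch of encapsulating transformation logic over data lake files in a serverless stored procedure might look like the following; the storage path, file format, and column names are assumptions for illustration:

```sql
-- Hypothetical serverless SQL pool procedure reading Parquet files in a data lake
CREATE PROCEDURE dbo.usp_GetDailySales @Day date
AS
BEGIN
    SELECT SalesOrderId, Amount
    FROM OPENROWSET(
        BULK 'https://<storage account>.dfs.core.windows.net/sales/*.parquet',
        FORMAT = 'PARQUET'
    ) AS rows
    WHERE rows.OrderDate = @Day;
END;
GO

-- Repeatable, reliable execution from ingestion pipelines or other tools
EXEC dbo.usp_GetDailySales @Day = '2020-09-23';
```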
- Inline Table-Valued Functions (public preview)
Views have long been the go-to method for returning queryable table results in T-SQL. However, views do not provide the ability to parameterize their definitions. While user-defined functions (UDFs) offer the power to customize results based on arguments, only those that return scalar values had been available in Synapse SQL. By extending support for inline table-valued functions (TVFs), users can now return a table result set based on specified parameters. Query these results just as you would any table and alter its definition as you would a scalar-valued function.
This feature applies to both serverless and dedicated models. For more details, visit the CREATE FUNCTION documentation.
Note: this functionality will become available in the next few weeks, post deployment.
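As a sketch of the parameterized, table-returning behavior described above (table and column names are illustrative):

```sql
-- Inline TVF: a parameterized, queryable table result
CREATE FUNCTION dbo.fn_CustomerOrders (@CustomerId int)
RETURNS TABLE
AS
RETURN
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
GO

-- Query the result set just as you would any table
SELECT OrderId, Amount
FROM dbo.fn_CustomerOrders(42)
WHERE Amount > 100;
```

Unlike a view, the function takes arguments, so one definition serves any customer rather than requiring a view per filter.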
Try Azure Synapse Analytics today
by Contributed | Sep 23, 2020 | Azure, Technology, Uncategorized
Resource exemption gives you increased granularity to fine-tune recommendations by letting you exempt specific resources from evaluation. With Azure Security Center's Cloud Security Posture Management (CSPM) capabilities, we offer a broad set of recommendations and security controls that are relevant for most, but not all, customers. Previously, the only way to remove a recommendation from the ASC dashboard and prevent it from influencing your Secure Score was to disable the related policy in the Azure Security Center policy initiative. That affected all resources: once you switched off a policy, the recommendation derived from it was never shown again for any resource. Now, with the new resource exemption capability, you can select which resources should no longer be evaluated against a particular recommendation, while others still are.
You might exempt a resource because it is already healthy, having been remediated by other means that ASC does not identify, or because you've decided to accept the risk of not mitigating the recommendation for that resource. Or perhaps you have research and development environments to which you want to apply a less strict baseline, while the rest of the resources in the same subscriptions need to be fully protected. These are good examples of exempting resources with waiver as the justification, while your production environments remain protected. Maybe you have a mix of vulnerability assessment solutions in your environment, some of which are not tracked by Azure Security Center. In that scenario, the respective recommendation has already been remediated, and you can create an exemption for these resources with mitigated as the justification.
Figure 1 – Create exemption blade
Our goal is to reduce alert-fatigue so that security analysts can focus on the highest priority recommendations. As a result, your secure score will not be impacted by exempted resources.
How does it work?
The process for exempting a resource from a recommendation is straightforward and explained in our official documentation. Basically, when working with a recommendation, you just have to click the ellipsis menu on the right side and then select create exemption.
Figure 2 – Create exemption
From a technical perspective, when you create a resource exemption, the status of the particular assessment (the evaluation of a particular resource against a given recommendation) is changed. The properties of the assessment will contain the status code “NotApplicable”, the cause “Exempt” and a description, as shown in the screenshot below.
Figure 3 – Assessment status after creating a resource exemption
At the same time, creating a resource exemption in ASC will create an Azure Policy exemption, a new capability the resource exemption feature in ASC relies on. That said, there are several ways to programmatically work with resource exemptions.
Resource exemption and automation
Knowing that an assessment is changed, and a new policy exemption is created every time you create a resource exemption in ASC, you have several ways of programmatically interacting with the resource exemption feature.
Azure Resource Graph
Assessments in Azure Security Center are continuously exported to Azure Resource Graph (ARG), giving you a great starting point for automation. ARG is a queryable store of information gathered from your Azure resources, giving you a very fast way to query information instead of pulling it separately from the REST APIs. Eli Koreh showed in his article how easy it is to create custom dashboards for ASC powered by ARG. You can find a list of starter query samples for Azure Resource Graph in the documentation.
To give you an idea of how to work with resource exemption in an ARG query, I’ve created the following sample, that will return a list of all resource exemptions that have been created in your environment:
SecurityResources
| where type == 'microsoft.security/assessments' and properties.status.cause == 'Exempt'
| extend assessmentKey = name, resourceId = tolower(trim(' ',tostring(properties.resourceDetails.Id))), healthStatus = properties.status.code, cause = properties.status.cause, reason = properties.status.description, displayName = properties.displayName
| project assessmentKey, id, name, displayName, resourceId, healthStatus, cause, reason
You can tune that query according to your needs, but it’s great as is for a quick overview on all exemptions you have created in your environment.
Resource Exemption and REST API
When a resource exemption in ASC is created, an exemption in the ASC policy initiative assignment is created as well. This is why you can leverage the Microsoft.Authorization/policyExemptions API to query for existing exemptions. This API is available as of version 2020-07-01-preview.
A GET call against the REST URI https://management.azure.com/subscriptions/<your sub ID>/providers/Microsoft.Authorization/policyExemptions?api-version=2020-07-01-preview will return all exemptions that have been created for a particular subscription, similar to the output shown below:
{
"value": [
{
"properties": {
"policyAssignmentId": "/subscriptions/<yourSubID>/providers/Microsoft.Authorization/policyAssignments/SecurityCenterBuiltIn",
"policyDefinitionReferenceIds": [
"diskEncryptionMonitoring"
],
"exemptionCategory": "waiver",
"displayName": "ASC-vmName-diskEncryptionMonitoring-BuiltIn",
"description": "demo justification",
"expiresOn": "2020-09-22T22:00:00Z"
},
"systemData": {
"createdBy": "someone@company.com",
"createdByType": "User",
"createdAt": "2020-09-22T09:27:23.7962656Z",
"lastModifiedBy": "someone@company.com",
"lastModifiedByType": "User",
"lastModifiedAt": "2020-09-22T09:27:23.7962656Z"
},
"id": "/subscriptions/<yourSubID>/resourceGroups/<yourRG>/providers/Microsoft.Compute/virtualMachines/<vmName>/providers/Microsoft.Authorization/policyExemptions/ASC-vmName-diskEncryptionMonitoring-BuiltIn",
"type": "Microsoft.Authorization/policyExemptions",
"name": "ASC-vmName-diskEncryptionMonitoring-BuiltIn"
}
]
}
You can find further information about Policy Exemptions in the Azure Policy documentation.
(Workflow) automation with Logic Apps
Azure Security Center’s workflow automation feature is great both for automatically reacting to alerts and recommendations and for manually triggering an automation when you need one. I have created a sample playbook for requesting a resource exemption, which I explain in more detail below:
Request a resource exemption
To create a resource exemption in ASC, your user account needs elevated access rights, such as Security Admin. In the enterprise environments we work with, we often see that Azure subscriptions are given to people who are responsible for their own set of resources, while security is still owned by a central team. In this scenario, you don’t necessarily want to give every resource owner the right to create a resource exemption, but for justified reasons, someone should still be allowed to request one. To make this process easier and fit it into your approval process, I’ve created a Logic App that can be triggered manually from the recommendation blade and allows you to request an exemption. When resource owners review recommendations and want to exempt a resource, they can do so by clicking the Trigger Logic App button. The screenshot below shows the process:
Figure 4 – Request resource exemption process
- Select one or several resources to be exempted from the recommendation you are currently investigating.
- Click the Trigger Logic App button
- Select the Request-ResourceExemption Logic App (or whatever name you give it when deploying it)
- Click the Trigger button
The Logic App leverages the When an ASC recommendation is created or triggered trigger and will then send an email and a Teams message to the subscription’s security contact(s). This email and message will contain information about the resource, the recommendation, and a link that, once clicked, will lead you directly to a portal view that enables you to immediately create the requested resource exemption. You can find this playbook in our Azure Security Center GitHub repository, from where you can one-click deploy it to your environment.
Figure 5 – content of the request email
You can also use this sample playbook as a starting point for customizations according to your needs. For example, if you want to create a ServiceNow or JIRA ticket with that workflow, you can simply add a new parallel branch with the Create a new JIRA issue or the Create ServiceNow Record actions:
Figure 6 – add parallel branches for JIRA and/or ServiceNow to the Logic App
Next Steps
Resource exemption in Azure Security Center now gives you a way to exempt resources from a recommendation when it has already been remediated by a process not monitored by Azure Security Center, or when your organization has decided to accept the risk related to that recommendation. You can easily build your own automations around this feature or leverage the sample artifact we have published in the Azure Security Center GitHub community. You can also publish your own automation templates in the community so that others can benefit from your efforts as well. We have started a GitHub Wiki so you can easily learn how to publish your automations.
Now, go try it out and let us know what you think! We have published a short survey so you can easily share your thoughts around this new capability and the automation artifact with us.
Special thanks to Miri Landau, Senior Program Manager for reviewing this article.
by Contributed | Sep 23, 2020 | Uncategorized
There are lots of new announcements at Ignite 2020, and it is a great time to reflect on and summarize our journey so far with security and compliance in SharePoint, OneDrive, and Teams. We are excited to share a roundup of recent security and compliance controls in SharePoint, OneDrive, and Teams. In this new norm of working remotely, safeguarding your business-critical data is more important than ever, and we are here to help.
Click on the links below to learn more about respective scenarios and features. All the features mentioned below are generally available, except the ones explicitly called out as Public Preview or Private Preview.
For our Ignite 2020 announcements in Security and Compliance in SharePoint and OneDrive, check out this blog here.
User & session security
MFA (Multi-factor-authentication) for Users
Multi-factor authentication is the new norm and our recommended scheme for identifying and authenticating users accessing content in Microsoft 365. Azure Active Directory offers MFA capabilities that you can turn on for internal and external users. Check out the link above for more details.
Unified session sign-out powered by Continuous access evaluation – Public Preview
Has a user lost their device, and you want to sign them out of all sessions on all devices? We are providing a unified session sign-out capability powered by continuous access evaluation. Check out the link above for more details.
Figure. Microsoft 365 admin signs out a user across all sessions on all devices
External users, sharing, and access
External sharing policies in SharePoint and OneDrive
Manage external access in Microsoft Teams
Collaborating with partners and clients outside your organization is the bread and butter of many businesses. With our continued investments in external collaboration, SharePoint, OneDrive, and Teams are the hub for your external teamwork. Check the links above for details.
Figure. SharePoint admin center external sharing settings
Automatic expiration of external access for content in SharePoint & OneDrive
Managing external users’ access is important to ensure no loss of your organization’s data after an external project is completed. The solution is here: you can now configure automatic expiration of external access for content. Check out the link above for more details.
Figure. SharePoint site collection admin manages external access expiration
Access governance insights in SharePoint and OneDrive – Private Preview
With growing digital data, it becomes important to govern the access policies for the top sites and teams that matter most. Access governance insights in SharePoint and OneDrive aim to help you in this regard. If you are interested in being an early adopter, sign up for the private preview here.
Conditional access policies – Devices and Locations security
Granular conditional access policies – Unmanaged device policy
Azure Active Directory offers coarse-grained conditional access policies, and within SharePoint and OneDrive you can apply site-specific, fine-grained device policies. For example, for top-secret sites you may want to block access from unmanaged devices. Check out the link above for more details.
Network IP address policy
Control access to content based on the IP address location from which the user is accessing it.
Information Protection with Sensitivity labels
As part of the Microsoft Information Protection (MIP) journey, we have a series of capabilities in SharePoint, OneDrive, and Teams to protect your sensitive content and we call out a few below. We continue to invest in this journey.
Microsoft Information Protection for Files
Encrypted files are now a first-class experience in SharePoint, OneDrive, and Teams: users can search for them and co-author them in Office apps.
Figure. Microsoft information protection sensitivity labels for files
Microsoft Information Protection at scale – Auto classification with sensitivity labels
With the scale at which digital data is growing, manual labelling alone is not sufficient; you cannot expect users and administrators to label every file by hand. Auto-classification with sensitivity labels empowers you to automatically detect sensitive content across your digital estate and label it.
Figure. Microsoft 365 compliance center showing auto labelling modes
Sensitivity labels for Teams, SharePoint Sites, and Microsoft 365 Groups
Beyond the file level, you can now also classify and label a SharePoint site, a team, or a Microsoft 365 group, and holistically secure all the content within it.
Sensitivity labels with external sharing policies – Public Preview coming soon
We are expanding the policies that can be associated with sensitivity labels, now with external sharing policy settings in SharePoint and OneDrive sites. If interested, sign-up here.
Sensitivity labels with MFA Policy – Private Preview
Multi-factor authentication (MFA) is our recommended scheme for user authentication. You can now associate an MFA policy with sensitivity labels. If you are interested in trying this out, sign up for the private preview here.
Data loss prevention (DLP)
DLP for SharePoint and OneDrive and Teams
To comply with business standards and industry regulations, organizations must protect sensitive information and prevent accidental leakage of their data. Microsoft 365 Data Loss Prevention policies are designed to help you prevent accidental data loss.
DLP Block external access by default for sensitive files in SharePoint/OneDrive/Teams
External collaboration is important for business; however, you want to protect sensitive files from being accidentally shared with external users. This feature helps you meet that need: you can now block external sharing and access until a DLP scan has run on a file that was just uploaded to SharePoint or OneDrive. Check out the feature link for more details.
DLP policy for blocking anyone links for sensitive content
Often you want to share sensitive content with external collaborators while preventing access and sharing through the “Anyone with the link” option. This new DLP rule gives you that granular control; check out the link above.
Endpoint data loss prevention (DLP) – Public preview
With remote work and the proliferation of devices, the number of endpoints has grown exponentially. We are helping you protect sensitive content and avoid leakage at endpoints on Windows devices. Learn more about Endpoint DLP here.
Compliance – Information governance
M365 Communication Compliance
Communication compliance is an insider risk solution in Microsoft 365 that helps you review messages in scanned email, Microsoft Teams, Yammer, or third-party communication tools. Check out the link above for more details.
M365 Multi-Geo capabilities
More organizations are becoming global and need to meet data residency requirements by keeping users’ OneDrive and mailbox data in their home geo. Multi-Geo helps you meet these data residency needs while offering a modern productivity experience to your global workforce.
Figure. SharePoint admin center showing tenant spanned across multiple geo locations
Information Barriers (IB) for SharePoint & OneDrive
You may have a compliance need to put barriers on collaboration and communication between certain sets of users in your organization to avoid conflicts of interest. You can now achieve these controls in Microsoft 365; check out the Information Barriers scenario link above.
Figure. SharePoint site owner manages information segments for a site
Retention labels
You can meet your governance needs for retaining or deleting content after a certain period of time; check out the retention labels and policies link above.
Records management
Organizations of all types require a records management solution to meet their regulatory, legal, and business requirements. Microsoft 365 records management is designed to help you meet these requirements. Check out the link above.
Check out Microsoft 365 compliance solutions page for many more compliance features available in Microsoft 365.
Other security controls
Global reader role in SharePoint
To reduce the number of administrators with the privileged global admin role, Azure Active Directory introduced the Global Reader role. This role is now supported in the SharePoint admin center, giving global readers read-only access to all SharePoint administration. Check out the link above for more details.
Customer key
Microsoft 365 has an additional layer of encryption, called service encryption, on top of volume-level encryption through BitLocker. Customer Key is built on service encryption and enhances your ability to meet compliance requirements. To learn more, check out the link above.
Customer Key for Exchange and SharePoint is already generally available. Customer Key for Teams will come to private preview later in calendar year 2020.
For licensing related information, check out the Microsoft 365 licensing guidance for security and compliance.
We hope this compilation of security and compliance controls was useful and informative for you.
Check out many more Ignite sessions in the Ignite website and Microsoft 365 Adoption Center: Virtual Hub. If you are new to Microsoft 365, learn how to try or buy a Microsoft 365 subscription.
As you navigate this challenging time, we have additional resources to help. For more information about how we are responding together to COVID-19, visit our Remote Work site. We’re here to help in any way we can.
Thank you!
Sesha Mani – Principal Group Product Manager (GPM)
Microsoft 365, SharePoint and OneDrive
Praveen Vijayaraghavan, Principal PM Manager
Microsoft 365, Teams