General availability: Azure Data Explorer connector for Power Automate, Logic Apps, and Power Apps

This article is contributed. See the original author and article here.


The Azure Data Explorer connector for Power Automate, Logic Apps, and Power Apps enables you to automate alerts and notifications, orchestrate business workflows, and build low-code/no-code apps, using native Azure Data Explorer actions to run KQL queries and management commands on your cluster.


Some key scenarios you can build by integrating Azure Data Explorer with Power Automate and Logic Apps:



  • Automating alerts and notifications

  • Automating recurring tasks and business workflows

  • Automating data copy scenarios

  • Automating data export

  • Integrating with Microsoft or third-party services


Some key scenarios you can build by integrating Azure Data Explorer with Power Apps:



  • Management of reference data

  • Data entry scenarios, such as audits in manufacturing plants

  • Decision-making apps; for example, in the energy and utilities industry, a common scenario is predicting machine maintenance and dispatching a technician in response


Check out the usage example documentation to get started.



 


Row-level security in Azure Database for PostgreSQL – Hyperscale


 


Row-level security (RLS) provides an important layer of security and has been available since PostgreSQL 9.5. It is frequently used to implement data security for multi-tenant and SaaS applications. In this article, we will look at row-level security on Azure Database for PostgreSQL – Hyperscale (Citus) to help you better understand how this feature can be used to implement data security in your application.


 


Before we get to RLS, here’s how Azure Database for PostgreSQL – Hyperscale helps with the distribution (sharding) of data. It brings the sharding logic into the data layer and manages the shards across the nodes that make up the server group. Once you choose a relevant distribution key, Citus distributes the data accordingly. If you are a SaaS provider, the distribution key could be the customer/tenant identifier. In that case, with RLS, you can ensure the right set of data is visible to different users across organizations while Citus manages their data within a single database cluster/server group.


 


Having looked at the why, let’s jump right into the how. We’re going to walk you through the steps to configure and test row-level security in Azure Database for PostgreSQL – Hyperscale. Start by creating an Azure Database for PostgreSQL – Hyperscale (Citus) server group.


Once the hyperscale server group is created and ready for connection, let’s proceed with the next steps.


 


Create a Table and Load Some Sample Data:


We’re going to create a new schema to hold the table(s) where we want to enable RLS. This isn’t strictly required, but it reflects how this would typically be set up in a real environment.


 


CREATE SCHEMA test1;


 


In this schema, we’ll create a distributed table and load some data into it.


 


Create table:


 


CREATE TABLE test1.events(
  tenant_id int,
  id int,
  type text
);


 


Shard the table on the 'tenant_id' column:


 


SELECT create_distributed_table('test1.events', 'tenant_id');


 


Load dummy data into the table:


 


INSERT INTO test1.events VALUES (1,1,'push');
INSERT INTO test1.events VALUES (2,2,'push');
INSERT INTO test1.events VALUES (1,2,'push');
INSERT INTO test1.events VALUES (2,1,'push');


 


After adding this dummy data to the new table, the next step is to add roles other than the default admin role (citus) that will have access to the data as needed.


 


Add Additional Roles as Required:


To do this, you need to log in to the Azure portal, as the default role 'citus' isn’t granted privileges to create new roles.


Once you navigate to the Hyperscale server group in the portal:


 


Step 1 – click on 'Roles' under Server group management.

Step 2 – click on '+ Add' to add a new role.

Step 3 – provide a name for the new role and assign a password to it.


 


For this exercise, we’ll create two roles: tenant1 and tenant2. We chose these names so that the shard key value (in this case, tenant_id) is part of the role name, hence the 1 and 2. You will see in the next section why this is important.


 


Once this is done, grant privileges to these roles as needed.


 


Grant Required Privileges to the New Role(s):


Since we created a new schema to hold the distributed table, the first step is to ensure that the new roles have access to this schema.


 


GRANT USAGE ON SCHEMA test1 TO tenant1, tenant2;


 


Without this step, when one of the new roles tries to access objects in the schema, PostgreSQL will return a 'permission denied for schema test1' error.


 


Next, assign actual privileges on the table(s) to the roles.


 


GRANT SELECT, UPDATE, INSERT, DELETE


  ON test1.events TO tenant1, tenant2;


 


At this point, we have granted the required privileges on the table to the newly created roles. However, if a user logs in with either of these roles, they will be able to see all the data across shards.
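You can see this for yourself before enabling RLS (a quick sketch; run while connected as tenant1):

```sql
-- Connected as tenant1, with privileges granted but RLS not yet enabled:
SELECT * FROM test1.events;
-- All four rows come back, including tenant 2's data,
-- because no policy restricts visibility yet.
```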


 


This is where row-level security comes into the picture.


 


Configure Row Level Security:


To ensure that the primary role (citus) has access to all the data when we add new roles and enable RLS, create a policy which is applicable to this role:


 


CREATE POLICY admin_all ON test1.events
  TO citus           -- apply to this role
  USING (true)       -- read any existing row
  WITH CHECK (true);


 


Note that the policy will come into effect once row level security is enabled for the table.
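You can confirm the policy is registered (though not yet enforced) by querying the catalog, sketched here against the standard pg_policies view:

```sql
-- Lists policies defined on the table, whether or not RLS is enabled yet:
SELECT policyname, roles
FROM pg_policies
WHERE schemaname = 'test1' AND tablename = 'events';
```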


 


The next step is to create a policy which will define the check on rows accessible by users.


 


CREATE POLICY user_mod ON test1.events
  USING (current_user = 'tenant' || tenant_id::text);
  -- lack of WITH CHECK means the same condition applies as USING


 


The policy defines which rows a user has access to by concatenating the string 'tenant' with the tenant_id column of the table. If you scroll back to where we created the roles, the choice of role names should now make sense.


 


And then finally enable the RLS on the table.


 


ALTER TABLE test1.events ENABLE ROW LEVEL SECURITY;


 


To simplify further: the rows in the table have tenant_id values of 1 or 2. The policy evaluates the expression ('tenant' || tenant_id) as the role name allowed to access each row, so tenant1 can access rows where tenant_id is 1, and so on. Of course, you need to create more roles as you add rows with new tenant IDs.


 


This check is pushed down to all the worker nodes in the Hyperscale server group and ensures that access to the data is governed by the row-level security policy.


 


That’s it! Go ahead, log in with one of the new roles and try to fetch or change rows in the table.


 


On the dummy data defined here, when 'tenant1' logs in and runs the following query:


 


SELECT * FROM test1.events;


 


The output contains only tenant 1’s rows:

 tenant_id | id | type
-----------+----+------
         1 |  1 | push
         1 |  2 | push
(2 rows)


 


This confirms that the role 'tenant1' only has access to rows where tenant_id is 1; similarly, 'tenant2' will only have access to rows where tenant_id is 2, and so on.
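Connecting as tenant2 gives the complementary view, and writes are constrained too. A sketch (the error text shown is PostgreSQL’s standard RLS violation message):

```sql
-- Connected as tenant2: only rows with tenant_id = 2 are visible.
SELECT * FROM test1.events;

-- Inserting a row for another tenant fails the policy check,
-- because user_mod's USING condition also applies to new rows:
INSERT INTO test1.events VALUES (1, 3, 'push');
-- ERROR:  new row violates row-level security policy for table "events"
```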


 


Stay tuned for more!


 


FastTrack for Azure: Move to Azure efficiently with customized guidance from Azure engineering. FastTrack for Azure – Benefits and FAQ | Microsoft Azure

Generally Available: Data Discovery using Trainable Classifiers


Earlier this year, we introduced nine new out-of-box Trainable Classifiers for sensitive business document discovery and classification, including Finance, IT, IP, Tax, Agreements, Healthcare, Procurement, and Legal Affairs.


 


To make the discovery of your sensitive information easier, we are introducing data discovery with Trainable Classifiers in Content Explorer. This means that your sensitive documents that are classified by Trainable Classifiers can be viewed even if they are not used as conditions in an auto-labeling policy. This includes both pre-trained out-of-box models as well as custom trainable classifiers.  


 


For each classifier, you can drill down from the location to the specific document that has been classified. If the classification is not what is expected, there is an opportunity to provide No-Match feedback so that we can further improve our classifiers. We currently have 11 out-of-box business document classifiers available for this feature with another 23 in private preview and another 30 that will be available for preview shortly.  


 


These Trainable Classifiers can be used for the discovery and classification of sensitive information across SharePoint Online (SPO) and OneDrive for Business (ODB) by clicking on the respective classifier/category in Content Explorer.


 


Some of the Trainable Classifiers we plan to provide as built-in out-of-box include:



  • Intellectual property and trade secrets, such as Project Documents, Standard Operating Procedures, Software Product Development Files, and Network Design Documents

  • Business-critical documents, such as M&A Files, Business Plans, Strategic Planning Documents, Meeting Notes, Statements of Work, Financial Statements, Manufacturing Batch Records, Customer Account Files, and Market Research Reports

  • Sensitive business content that is part of daily business operations, such as Payroll, Invoices, Statements of Work, Financial Statements, Statements of Accounts, Employee Performance Files, and Facility Permits


 




 


Customers can opt out of this feature for their tenant by raising a support ticket with this request.


 


Thanks for reading!

Minimizing Shift Change Overhead with Clinician Soundbites




Minimize handoff/huddle times between shifts with Soundbite in Microsoft Teams.


“Traditional written communications isolate and inconvenience many employees resulting in a disconnect. With Soundbite™ for Microsoft Teams, you can show empathy and create a sense of purpose by sharing personalized short-form content with frontline workers” and clinicians. – Soundbite™ for Your Frontline Workforce


Watch the video below to see how “Dr. G.” demonstrates the use of clinician soundbites to minimize overhead and churn for already overworked clinicians.


To learn more about the use of Microsoft Teams and Soundbite, leverage the resource links below. You can also reach Soundbite directly at https://soundbite.ai/ and/or contact your Microsoft account team.


Resources:



Work on the Clinician Soundbites was done by Jasmine Hoegh, Erin Spencer, and Michael Gannotti


Thanks for visiting – Michael Gannotti   LinkedIn | Twitter

