Double click a table in Log Analytics table side bar to run a preview query


This article is contributed. See the original author and article here.

Previewing data in a table:


Previewing the data in a table is one of the best ways to instantly understand its content and structure.


Since its introduction, many of our users have used the new ‘preview’ popup window to gain instant insight into a table’s content, and have continued to use the ‘load to query editor’ button to load the preview query as a starting point for more complex insights.


 


Double click to preview:


Today we introduce a small but important enhancement to this feature.


Double clicking a table item will now immediately run the preview query in the query editor:


[GIF: double clicking a table runs the preview query]

This is a small yet powerful enhancement designed to further shorten your path to insight.


 


Settings:


Use our settings to choose if you prefer double clicking a table to run the preview query or add the table’s name to the query editor:


[Screenshot: settings for running the preview query on double click]


 


Feedback:


We value your feedback! Please leave your comments and let us know if you like this new enhancement.


 

OneDrive Tips for Beginners & Pros

This article is contributed. See the original author and article here.

Microsoft OneDrive lets you save files and photos securely online and access them from any device, virtually anywhere. From backing up files to sharing and collaborating, OneDrive has a lot of features for home, school and work that you may not have discovered yet. Let’s take a look at OneDrive tips for beginner and advanced users—and uncover how to use it to its maximum potential.


 


OneDrive tips for beginners
Here are some of the key things you can do with OneDrive, including how to get a free account if you haven’t already.



  • Create a free OneDrive account and get 5 GB of storage, enough to store 2,500 photos and hundreds of docs1.

  • Turn on PC folder backup to automatically sync your Windows Desktop, Documents, and Pictures folders to OneDrive. Now, these folders are backed up, protected and available across all your devices.

  • Access your files and view your photos online by signing into your OneDrive account on the web. Also, if you lose your device or it crashes, you can always find your OneDrive files here.

  • When you’re on the go, you can use the OneDrive mobile app to access or share files and photos right from your mobile device. Give yourself freedom to roam with OneDrive app for Android or iOS.

  • Use the OneDrive app on your phone to scan and save multiple pages of printed documents. Now everything from whiteboard notes and business cards, to receipts and to-go menus are there when you need them.

  • Automatically back up your phone’s camera roll to OneDrive to keep your favorite moments backed up, protected and all in one place. Once backed up, you’ll also have easy access to your photos across your laptop, tablet and other devices. Note: Automatic camera roll backup can only be used on one account at a time, so if you have both a personal and work OneDrive account on your phone, you’ll need to pick one.

  • Easily share and collaborate on files, folders, and photos with colleagues, friends and family.

  • Use Personal Vault to add extra protection to sensitive photos and files, like social security cards, driver’s licenses, passports and more.2 The free and 100 GB OneDrive plans allow you to store 3 files in Personal Vault. Microsoft 365 Personal and Family subscribers can store as many files as they want in Personal Vault, up to their storage limit.

  • Graduating from school and want to keep using your OneDrive? Use Mover (which is built into OneDrive for work and school) to transfer the files from your school account to your OneDrive personal account in just a few clicks.

  • Accidentally throw away a file? Track down deleted files quickly in the recycle bin which is available only on OneDrive for web.

  • Turn on AutoSave for your Word, Excel and PowerPoint files. Now you have up-to-the-second versions saved, in case of a crash or your battery running out.


 


OneDrive tips for pros
If you’ve been using OneDrive for a while, it might be time to take it to the next level. Here are a few advanced features that may help make life easier, while keeping your files and photos safer.



  • Add another layer of security to your OneDrive account by using two-step verification across your entire Microsoft account.

  • Need some extra space? Easily manage your storage by seeing how much you have left and what’s using up the most space.

  • Mark selected files for offline access on your phone or PC. That way, if you lose your internet connection you can still work on your files on your phone or PC.

  • Scan, sign and send a document with the OneDrive mobile app. From school permission slips to invoices and beyond, now you have a way to quickly scan, sign and send important documents on-the-go.

  • Get a quick summary of activity on shared files using the file details pane, including who the file is shared with, recent activity, file size/type and more.

  • Free up extra storage space on your Windows 10 PC. Storage Sense automatically frees up space by making local files that you haven’t used recently online-only again. Online-only files stay safe in OneDrive and are accessible from your PC, browser or mobile app as long as you have an internet connection.

  • Use version history to restore any OneDrive file to a previous point in time, up to 30 days after being modified. If one of your collaborators made changes that just won’t work, simply revert them.


 


No matter how you use OneDrive—to back up your camera roll, scan and sign documents, or to share files—these tips will help you get the most out of OneDrive. If you need 1 TB of storage, ransomware protection and other robust features for home, school or work—check out the premium features available with a Microsoft 365 subscription.


 


1 Assumes photos are 2MB each and docs are .08MB each.
2 This feature is not available on OneDrive for school or work.

mTLS between AKS and APIM


This article is contributed. See the original author and article here.

Mutual TLS Authentication between Azure Kubernetes Service and API Management


 


By (alphabetically): Akinlolu Akindele, Dan Balma, Maarten Van De Bospoort, Erin Corson, Nick Drouin, Heba Elayoty, Andrei Ermilov, David Giard, Michael Green, Alfredo Chavez Hernandez, Hao Luo, Maggie Marxen, Siva Mullapudi, Nsikan Udoyen, William Zhang


 


Introduction


 


We have two goals in this doc:



  1. How to set up an AKS cluster for mutual TLS (mTLS) authentication between the Azure Kubernetes Service (AKS) NGINX ingress controller and a client app such as curl. If you are not using a gateway for your microservices, or are using a gateway other than Azure API Management (APIM), this portion is what might interest you.

  2. How to set up APIM for mTLS between AKS and APIM. This covers the case in which APIM is used as the API gateway for REST services hosted in an AKS cluster.


This is a sister doc to Use MITREid Connect for OAuth2 Authorization in API Management: one covers securing AKS via mTLS between AKS and APIM while the other covers securing APIM via OAuth2 and OpenID Connect across APIM, Identity Provider and clients.


Our goal is for AKS (as a service) to authenticate APIM (as a client) so that only calls from APIM carrying a valid client cert with its private key can get through. Plain TLS between APIM and AKS only lets the client (APIM in our case) authenticate the server (AKS in our case); what we need is mutual TLS.


As a reference and also for context, this and this document describe mTLS authentication between APIM and Azure App Service.


The steps:



  1. HTTPS calls from APIM are intercepted by the frontend load balancer of App Service.

  2. App Service frontend load balancer injects an X-ARR-ClientCert request header with the client certificate (base64 string) as the header value, before forwarding the request to application code.

  3. Application code retrieves the cert string such as headers[“X-ARR-ClientCert”] and converts it to an X.509 cert.

  4. Application code parses the cert and verifies the attributes and claims as client authentication.


As you can see, the approach for mTLS between APIM and App Service is not as clean as we would like:



  1. The HTTP header X-ARR-ClientCert seems to be Microsoft-specific rather than part of any open spec (maybe there is no such spec?). What is the story for AKS? While APIM is a Microsoft product, AKS is Kubernetes inside.

  2. The client cert authentication happens inside application code, which defeats the purpose of using APIM as an API gateway.


Our goal is to achieve mTLS between APIM and AKS without custom security code in the applications in AKS pods. Rather, we rely on the AKS NGINX ingress controller and ingress resources to perform client cert authentication at the infrastructure level.


 


Prerequisites


 




  • kubectl. Minimum version required is v1.18. To find your kubectl client version:


    kubectl version --client



  • openssl for preparing certificates. Or if you prefer, you can use other tools for creating self-signed certs.




  • helm. (Windows 10 users can just put the unzipped folder anywhere and add the corresponding PATH variable.)
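You can verify your helm installation the same way:

    helm version --short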




 


Prepare DNS


 


Since our plan is not to use a VNET to enclose both AKS and APIM, we need a DNS-resolvable domain name. This domain name will be mapped to the AKS NGINX ingress controller load balancer static IP. For this we first need to register a domain. As an example, aksingress.com is registered and its subdomain dev.aksingress.com will be used in this document.


 


Prepare X.509 Certificates


 


Self-signed certs can be used for dev/test. OpenSSL can be used for creating self-signed certs.


We need the following three certs in certain file formats:


 


Name    Purpose                 Environment          Private Key Required   Required Formats
CA      Certificate Authority   Kubernetes Secrets   No                     .crt, .cer
Server  Server Certificate      Kubernetes Secrets   Yes                    .crt, .key
Client  Client Certificate      APIM, test client    Yes                    .crt, .key, .pfx

 


NOTES:



  1. Relying on the legacy Common Name field for cert validation is deprecated in Kubernetes. It is recommended to use Subject Alternative Names (SANs) instead.

  2. For TLS, the server cert SAN must match the FQDN of the server backend, which in our case is the AKS ingress resource host name. This host name will be paired with the static IP of the AKS NGINX ingress controller that we create later on (a quick way to inspect a cert’s SANs is shown after these notes).

  3. The private key of the CA is NOT installed anywhere: neither in a Kubernetes secret nor in API Management.
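Once the certs below have been generated, you can confirm which SANs a cert actually carries with openssl, for example:

openssl x509 -in mTLS/server_dev.crt -noout -text | grep -A 1 "Subject Alternative Name"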


First let’s create configuration files for both client and server certs:


File: server_dev.cnf


[ req ]
default_bits = 4096
prompt = no
encrypt_key = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
CN = dev.aksingress.com
emailAddress = acp@microsoft.com
O = Microsoft
OU = CSE
L = Redmond
ST = WA
C = US

[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = dev.aksingress.com


File: client_dev.cnf


[ req ]
default_bits = 4096
prompt = no
encrypt_key = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ dn ]
CN = gateway.com
emailAddress = acp@microsoft.com
O = Microsoft
OU = CSE
L = Redmond
ST = WA
C = US

[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = gateway.com


Below we assume the config files and generated certs live in a subfolder named mTLS, relative to where the openssl commands are run (hence the mTLS/ prefix in the paths below).


Openssl commands:


# Create CA
openssl req -x509 -sha256 -newkey rsa:4096 -keyout mTLS/ca.key -out mTLS/ca.crt -days 3650 -nodes -subj "/CN=My Cert Authority"

# Generate the server key and certificate, and sign with the CA certificate
openssl req -out mTLS/server_dev.csr -newkey rsa:4096 -nodes -keyout mTLS/server_dev.key -config mTLS/server_dev.cnf
openssl x509 -req -sha256 -days 3650 -in mTLS/server_dev.csr -CA mTLS/ca.crt -CAkey mTLS/ca.key -set_serial 01 -out mTLS/server_dev.crt

# Generate the client key and certificate, and sign with the CA certificate
openssl req -out mTLS/client_dev.csr -newkey rsa:4096 -nodes -keyout mTLS/client_dev.key -config mTLS/client_dev.cnf
openssl x509 -req -sha256 -days 3650 -in mTLS/client_dev.csr -CA mTLS/ca.crt -CAkey mTLS/ca.key -set_serial 02 -out mTLS/client_dev.crt

# To verify the CSRs and show the SANs
openssl req -text -in mTLS/server_dev.csr -noout -verify
openssl req -text -in mTLS/client_dev.csr -noout -verify


Since APIM expects certs in Microsoft format such as .pfx and .cer, and Kubernetes expects certs in .crt and .key format, we need the following conversion.


# Convert .crt + .key to .pfx
openssl pkcs12 -export -out mTLS/ca.pfx -inkey mTLS/ca.key -in mTLS/ca.crt
openssl pkcs12 -export -out mTLS/client_dev.pfx -inkey mTLS/client_dev.key -in mTLS/client_dev.crt
openssl pkcs12 -export -out mTLS/server_dev.pfx -inkey mTLS/server_dev.key -in mTLS/server_dev.crt
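To sanity-check a converted .pfx bundle (you will be prompted for the export password you chose):

openssl pkcs12 -info -in mTLS/client_dev.pfx -noout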

 


Create AKS Cluster


 


To leverage the AKS-managed Azure Active Directory integration feature, we can use the following CLI commands to create an AKS cluster with AKS-managed AAD integration.


# Parameters used for creating AKS
tenant_id="1aaaabcc-73b2-483c-a2c7-b9146631c677"
aks_admin_group_name="aks-admin-group"
aks_api_group_name="aks-api-group"
resource_group_name="rg-aks"
aks_cluster_name="aks-cluster-04"

echo "display current AAD groups"
az ad group list -o table
# echo "Create a group for AKS cluster admins"
# az ad group create --display-name $aks_admin_group_name --mail-nickname myalias

# echo "Create resource group $resource_group_name"
# az group create --name $resource_group_name --location centralus

echo "get aks-admin-group object ID for $aks_admin_group_name:"
aks_admin_group_object_id=$(az ad group show --group $aks_admin_group_name --query objectId -o tsv)
echo $aks_admin_group_object_id
echo "get aks-api-group object ID for $aks_api_group_name:"
aks_api_group_object_id=$(az ad group show --group $aks_api_group_name --query objectId -o tsv)
echo $aks_api_group_object_id

echo "Create an AAD-managed AKS cluster"
az aks create --resource-group $resource_group_name \
  --name $aks_cluster_name \
  --node-count 1 \
  --enable-aad \
  --aad-admin-group-object-ids $aks_admin_group_object_id \
  --aad-tenant-id $tenant_id
# add --generate-ssh-keys above if needed


 


Creating Kubernetes Secrets


 


First make sure we are working with the correct AKS cluster context.


echo "Ensure you have the right credential. It will update C:\Users\[userid]\.kube\config with the new cluster context."
az aks get-credentials -g rg-aks -n aks-cluster-04

echo “Display the current AKS cluster context”
kubectl config current-context


Assume the ca.crt, server_dev.crt and server_dev.key files are in a sub-folder named mTLS.


# Add server.crt, server.key and ca.crt into a Kubernetes secret named ingress-secret-dev
kubectl create secret generic ingress-secret-dev --from-file=tls.crt="mTLS/server_dev.crt" --from-file=tls.key="mTLS/server_dev.key" --from-file=ca.crt="mTLS/ca.crt"

# Display the secret
kubectl get secret ingress-secret-dev
# List all secrets in the cluster
kubectl get secrets


 


Creating an NGINX Ingress Controller


 


An ingress controller is required to work with Kubernetes ingress resources. We will define client authentication and TLS configurations in an ingress resource.


 


We can put the ingress controller either in the default namespace or a custom namespace.


# Create a Helm repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# To see the Helm repo
helm repo list
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace default \
  --set controller.replicaCount=1 \
  --set controller.nodeSelector."beta.kubernetes.io/os"=linux \
  --set defaultBackend.nodeSelector."beta.kubernetes.io/os"=linux

Details can be found in this doc .


 


Have a Test App and Add its Container to ACR


 


This is beyond the scope of this document.


Ideally, for better test results, the REST API app should have the following:



  1. At least three methods: get, post, delete. This would allow us to test RBAC, such as allowing only a specific role to delete while anyone can get/create. RBAC is out of the scope of this document.

  2. The get method should return the full request headers as part of the response so that we can see the headers received by the application code. As a request goes through OAuth2 and then mTLS, some additional headers are added and become available to the application code. (If you do not have such an app handy, a possible stand-in is sketched after this list.)
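A possible stand-in, assuming the public echo image mendhak/http-https-echo (which echoes the incoming request headers back as JSON and listens on port 8080 for HTTP) is acceptable for your test:

# Hypothetical stand-in for the tinyrest app used below; if you go this
# route, adapt the image, names and ports (the YAML below assumes port 3000)
# or skip the deployment/service YAML sections.
kubectl create deployment tinyrest --image=mendhak/http-https-echo:latest
kubectl expose deployment tinyrest --name=tinyrest-svc --port=8080 --target-port=8080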


 


Deploy the Container to AKS Pods


 


Create a YAML file and save it with the name “tinyrest_container.yml”.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: tinyrest
  labels:
    app: tinyrest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tinyrest
  template:
    metadata:
      labels:
        app: tinyrest
    spec:
      containers:
      - name: tinyrest
        image: myacr.azurecr.io/tinyrest:latest
        ports:
        - containerPort: 3000

Authenticate with Azure Container Registry from Azure Kubernetes Service by running a command like the one below:


echo "ACR integration with AKS"
az aks update --name aks-cluster-04 --resource-group rg-aks --attach-acr myacr

Deploy the container by running the following kubectl commands:


echo "Deploy container from ACR to AKS"
kubectl apply -f ./aks_bash/tinyrest_container.yml
kubectl get deploy
kubectl get pods

 


Deploy a Service to Expose the Pods


 


Create a YAML for service:


apiVersion: v1
kind: Service
metadata:
  name: tinyrest-svc
spec:
  ports:
  - port: 8080
    targetPort: 3000
    protocol: TCP
    name: http
  selector:
    app: tinyrest

Deploy the service by running the following kubectl commands:


echo "Deploy AKS service"
kubectl apply -f ./aks_bash/tinyrest_service.yml
kubectl get svc

The second command should show the NGINX ingress controller as a LoadBalancer in addition to the service you just added.


 


Deploy an Ingress Resource with Security Configurations


 


Create a YAML file for an ingress resource:


apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ingress-secret-dev"
  name: tinyrest-ingress-dev
  namespace: default
spec:
  rules:
  - host: dev.aksingress.com
    http:
      paths:
      - backend:
          serviceName: tinyrest-svc
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - dev.aksingress.com
    secretName: ingress-secret-dev

In this ingress resource, we have specified the following for mTLS authentication:



  1. The ingress must verify a client cert (server authenticating client);

  2. Use the Kubernetes secret (named ingress-secret-dev) as the source for server cert, server key and CA cert.

  3. The hostname specified (dev.aksingress.com  in our example) must match the SAN in server cert.


Deploy the ingress resource with rules via the following kubectl commands:


echo "Deploy ingress resource with rules"
kubectl apply -f ./aks_bash/tinyrest_ingress_rules.yml
kubectl get ingress
kubectl describe ingress tinyrest-ingress-dev

Make sure you see a static external IP address after deploying the ingress resource. There might be a short delay after running the deploy command before the static IP shows up.
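You can watch the ingress controller’s LoadBalancer service until the external IP is assigned:

# The service name follows the Helm release name used earlier;
# -w watches until EXTERNAL-IP changes from <pending> to a real address
kubectl get svc nginx-ingress-ingress-nginx-controller -n default -w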


 


Add A Record to DNS


 


Now that the static IP of the AKS ingress controller is available, you can map it to the domain (dev.aksingress.com) in your DNS setup.
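If the zone is hosted in Azure DNS, the A record can be added with the CLI; a minimal sketch (the resource group name rg-dns is hypothetical, substitute your own):

# Hypothetical resource group hosting the aksingress.com zone
az network dns record-set a add-record \
  --resource-group rg-dns \
  --zone-name aksingress.com \
  --record-set-name dev \
  --ipv4-address <static-ingress-ip>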


 


Testing AKS Configuration for mTLS Authentication


 


With client cert authentication and the CA cert configured in the AKS ingress resource, we can test it using a curl client.




  • If you call the ingress without supplying the client cert or client key, you will get the following error


    $ curl https://dev.aksingress.com/resource  -k


    <html>
    <head><title>400 No required SSL certificate was sent</title></head>
    <body>
    400 Bad Request
    No required SSL certificate was sent
    nginx/1.19.2
    </body>
    </html>


  • Mutual TLS authentication between AKS and curl client can be achieved by supplying client cert, client key and CA cert, as shown below.


     curl --verbose https://dev.aksingress.com/resource --cert "mTLS/client_dev.crt" --key "mTLS/client_dev.key" --cacert "mTLS/ca.crt"

    If our test application returns the incoming headers, it looks like below:


     "request_header": {
     "host": "dev.aksingress.com",
     "ssl-client-verify": "SUCCESS",
     "ssl-client-subject-dn": "C=US,ST=IL,L=Libertyville,OU=CSE,O=Microsoft,emailAddress=acp@microsoft.com,CN=gateway.com",
     "ssl-client-issuer-dn": "CN=My Cert Authority",
     "x-request-id": "556a994d6f9949eef44189a18294080e",
     "x-real-ip": "10.244.0.1",
     "x-forwarded-for": "10.244.0.1",
     "x-forwarded-proto": "https",
     "x-forwarded-host": "dev.aksingress.com",
     "x-forwarded-port": "443",
     "x-scheme": "https",
     "user-agent": "curl/7.68.0",
     "accept": "*/*"
     }

    In addition to the correct response from the AKS pods, the following verbose section of the curl output indicates that the client successfully authenticated the server cert.


     * Server certificate:
    * subject: CN=dev.aksingress.com; emailAddress=acp@microsoft.com; O=Microsoft; OU=CSE; L=Libertyville; ST=IL; C=US
    * start date: Sep 29 13:10:18 2020 GMT
    * expire date: Sep 27 13:10:18 2030 GMT
    * common name: dev.aksingress.com (matched)
    * issuer: CN=My Cert Authority
    * SSL certificate verify ok.



 


Configuring mTLS in APIM


 


Details can be found in How to secure back-end services using client certificate authentication in Azure API Management .


 


End-to-End Test


 


To perform an end-to-end test, we also need to follow the other document to configure OAuth2.


The end-to-end test covers two security loops:


 


OAuth2, which covers



  • client app (either public or private client)

  • Identity Provider (any OAuth2-compliant Identity Provider such as Azure AD or MITREid Connect)

  • API gateway (APIM)


mTLS, which covers



  • Client (APIM) authenticating server (AKS)

  • Server (AKS) authenticating client (APIM)


The end-to-end security can be illustrated by the diagram below.


 


[Diagram: end-to-end security across client app, Identity Provider, APIM (OAuth2) and AKS (mTLS)]


 


The OAuth2 Test Tool (http://aka.ms/ott ) can be used for the test.


 


If your REST API used for test returns the incoming HTTP headers in its response body, the headers in its response should look like below:


"request_header": {
"host": "aksingress.com",
"ssl-client-verify": "SUCCESS",
"ssl-client-subject-dn": "CN=gateway.com",
"ssl-client-issuer-dn": "CN=My Cert Authority",
"x-request-id": "a1e62e86b490b1afc29f5fd3fbfa802c",
"x-real-ip": "10.244.0.1",
"x-forwarded-for": "10.244.0.1",
"x-forwarded-proto": "https",
"x-forwarded-host": "aksingress.com",
"x-forwarded-port": "443",
"x-scheme": "https",
"x-original-forwarded-for": "67.186.69.18",
"x-correlation-id": "23a8237a-d16b-4471-8c19-058717c982cf",
"origin": "https://npmwebapp.azurewebsites.net",
"sec-fetch-site": "cross-site",
"sec-fetch-mode": "cors",
"sec-fetch-dest": "empty",
"content-type": "application/json",
"accept": "*/*",
"accept-encoding": "gzip,deflate,br",
"accept-language": "en-US,en;q=0.9",
"authorization": "Bearer [token]",
"referer": "https://npmwebapp.azurewebsites.net/",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36"
}

The first half indicates that the client (APIM) successfully authenticated the server (AKS) cert and forwarded the request to the server (aksingress.com), which performed its own authentication of the client (ssl-client-verify: SUCCESS). The second half shows the JWT used for OAuth2 authorization. The sec-fetch-* headers indicate this is a CORS call and preflight is required (client domain: npmwebapp.azurewebsites.net, API gateway domain: [apim-svc-name].azure-api.net). Note that the client cert CN (gateway.com in our case) is different from the APIM FQDN.


 


Troubleshooting


 


Log of NGINX Ingress Controller


 


Reading the log of the NGINX ingress controller is an effective way to troubleshoot. You can retrieve the ingress controller log via the following kubectl commands:


# get the name of NGINX ingress controller
kubectl get pods -n default | grep nginx-ingress
# get the log for the NGINX ingress controller
kubectl logs -n default nginx-ingress-ingress-nginx-controller-7cb87487f5-jg8xw

Below is a sample error entry in such log:


W0923 16:30:28.571719       6 controller.go:1146] Unexpected error validating SSL certificate "default/ingress-secret" for server "aksingress.com": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0

while a successful request can look like below:


10.244.0.1 - - [23/Sep/2020:22:30:38 +0000] "GET /resource HTTP/2.0" 200 459 "-" "curl/7.68.0" 38 0.002 [default-tinyrest-svc-8080] [] 10.244.0.13:3000 459 0.000 200 69ad615ba1e85defdaba5a0ba57529df

Ingress Resource Setup


 


Another thing to check is the ingress resource setup:


$ kubectl describe ingress tinyrest-ingress-dev
Name:             tinyrest-ingress-dev
Namespace:        default
Address:          52.154.41.113
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  ingress-secret-dev terminates dev.aksingress.com
Rules:
  Host                Path  Backends
  ----                ----  --------
  dev.aksingress.com
                      /     tinyrest-svc:8080 (10.244.0.13:3000)
Annotations:  nginx.ingress.kubernetes.io/auth-tls-secret: default/ingress-secret-dev
              nginx.ingress.kubernetes.io/auth-tls-verify-client: on
Events:       <none>

Notice that since we have configured nginx.ingress.kubernetes.io/auth-tls-verify-client: on, the error endpoints "default-http-backend" not found is expected.


Missing Client Cert for Server Authentication of Client


 


If an error indicates a missing client cert, check the API inbound policy in APIM. In order for APIM to supply the client cert to the AKS ingress resource for authenticating the client, the inbound processing policy must contain the following node:


<authentication-certificate thumbprint="05F6B958079A4FC88978946FB3DA65B37F0F9E4E" />

Make sure the thumbprint matches the thumbprint of the client cert you installed in APIM.
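One way to read the SHA-1 thumbprint straight from the client cert created earlier (colons stripped to match the APIM format):

openssl x509 -in mTLS/client_dev.crt -noout -fingerprint -sha1 | sed 's/://g'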


Ingress Secret Cannot be Found


 


Check the YAML file for the ingress resource to make sure the secret name and namespace are correct. You can use kubectl to describe the Kubernetes secret; you should see the following three certs/keys:


$ kubectl describe secret ingress-secret-dev
Name: ingress-secret-dev
Namespace: default
Labels: <none>
Annotations: <none>

Type: Opaque

Data
====
tls.crt: 1675 bytes
tls.key: 3272 bytes
ca.crt: 1809 bytes


 

Advanced Incident Management for Office and Endpoint DLP using Azure Sentinel


This article is contributed. See the original author and article here.

A common question we get from organizations that use Microsoft Information Protection is: how can we get a single pane of glass that covers not only DLP and other information protection events but also correlates them with the entire IT estate? And how can we effectively use the richness of that data for incident management and reporting?


 


In this post we will focus on how this can be achieved with Azure Sentinel, by utilizing a custom Azure Function for ingestion. Let’s start with a few teasers.


 


Below is a sample where an Office DLP incident is connected with other incidents as well as the Microsoft Defender for Endpoint alerts from the device. Over time this native Azure Sentinel feature will evolve to support more entities for automated correlation.


[Screenshot: Office DLP incident correlated with related incidents and Microsoft Defender for Endpoint alerts]


 


This is a Workbook sample reporting DLP incidents across departments and geography.


[Workbook: DLP incidents across departments and geography]


 


In this graph sample using a Workbook, we have selected a document node (Darkness), which expands a table with SharePoint DLP alerts as well as SharePoint activity for that document, letting us instantly go deeper in the investigation.


[Workbook graph: document node expanded with SharePoint DLP alerts and activity]


 


The code and instructions for ingesting Endpoint and Office DLP events can be found here: https://github.com/OfficeDev/O365-ActivityFeed-AzureFunction/tree/master/Sentinel/EndPointDLP_preview (although the naming says endpoint, it includes both Office and Endpoint data). Please note that the code for endpoint will change as soon as the endpoint DLP events are included in dlp.all.


 



  1. Register a new application in Azure AD https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app 

    • Microsoft GRAPH (Application permissions)

      • Group.Read.All

      • User.Read.All



    • Office 365 Management APIs (Application permissions)

      • ActivityFeed.Read

      • ActivityFeed.ReadDlp (needed for detailed DLP events)





  2. Collect the identity and secret for the new app created in step 1. For production, store the secret in Azure Key Vault (https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references); generate the keys now and delegate access to the function in step 7.

    • clientID

    • clientSecret

    • TenantGuid

    • exuser (user account used to map to sensitive info types; it should only have permission to run Get-DlpSensitiveInformationType)

    • Azure Sentinel Workspace Name



  3. Get the Workspace ID and Workspace Key for your Sentinel workspace (a CLI alternative is sketched at the end of this list).

    • Select the workspace from the Log Analytics workspaces menu in the Azure portal, then select Agents management in the Settings section.



  4. Click Deploy to Azure in the repo https://github.com/OfficeDev/O365-ActivityFeed-AzureFunction/tree/master/Sentinel/EndPointDLP_preview

  5. Provide the parameters needed for the function.

    • SPUS is only used if you are going to deploy ingestion of emails to SharePoint, to be able to retrieve a full copy of the emails from the incidents.

    • Click Review and create

    • Click Create, if all parameters passed

    • The deployment will start.

    • On completion, it is likely that several of the functions will show an error. The actual function code is deployed in the next step, so the errors are expected.



  6. Go to the resource group where you deployed the app; you will see the core services deployed. Click the Function App; we will come back here in a moment.

  7. To enable the app to automatically sync DLP policies to Sentinel, run the following commands; they allow the app to fully manage Sentinel. You need to define the resource group where the Log Analytics database for Sentinel is hosted.

    • Start Powershell and ensure that the Az module is installed.

    • $id = (Get-AzADServicePrincipal -DisplayNameBeginsWith YourAPP).id

    • New-AzRoleAssignment -ResourceGroupName YOURRGWITHSENTINEL -RoleDefinitionName "Azure Sentinel Contributor" -ObjectId $id

    • You can use the UI as well, under Identity of the function; this process can also be used to grant access to your Key Vault on completion of the setup of the function.



  8. Deploy the code used for the functions.

    • Download the deployment zip (endpointdlpservice.zip )

    • Start Powershell and ensure that the Az module is installed.

    • Connect-AzAccount



  9. Run Publish-AzWebApp -ResourceGroupName REPLACEWITHYOURRG -Name REPLACEWITHYOURAPPNAME -ArchivePath C:\path\endpointdlpservice.zip

  10. To initialize the variables in the app

    • Navigate to the Enablement function in your Function App.

    • Open the function under Functions, open “Code + Test”, click Test/Run, then click Run.



  11. Note any errors generated in this run; you will see them in the logging window if there is a typo or similar in your configuration. Go back to the main window for the app and click Configuration to update the parameter.

  12. Note: the Analytic Rules functions will not succeed until you have ingested both SharePoint and Exchange events, and in the case of Endpoint you also need Endpoint events.

    • The API actively refuses queries that are invalid.



  13. If the Log Analytics rules that correspond to DLP policies aren’t created after data ingestion, run the Enablement function again. It will reset the time scope of the functions.

  14. If you want to ingest original email content to SharePoint please see https://github.com/OfficeDev/O365-ActivityFeed-AzureFunction/tree/master/Sentinel/logicapp.

  15. To set up the reporting please follow


  16. If you want to try out ingestion of documents from endpoints look at this https://github.com/OfficeDev/O365-ActivityFeed-AzureFunction/tree/master/Sentinel/EndPointDLP_preview/DocumentCopy 

  17. In the repo, see the “Important Additional Customization” section.
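As mentioned in step 3, the Workspace ID and key can also be pulled with the Azure CLI; a minimal sketch (the resource group and workspace names are placeholders, substitute your own):

# Workspace ID (customerId)
az monitor log-analytics workspace show -g YOURRGWITHSENTINEL --workspace-name YOURWORKSPACE --query customerId -o tsv
# Primary shared key
az monitor log-analytics workspace get-shared-keys -g YOURRGWITHSENTINEL --workspace-name YOURWORKSPACE --query primarySharedKey -o tsv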



Summary


This is just a starting point for getting DLP incident data into Azure Sentinel. There is enrichment code that adds details from Microsoft Graph and can be customized. You can customize the code to send events to different Azure Sentinel workspaces based on geography and other details. In Azure Sentinel you can start to create automated actions using Playbooks, and you can create your own Kusto queries to gain new insights. More on that in a later post. And yes, we are investigating the option to provide native integration with Azure Sentinel as well.


 


 


 

ADF to Synapse Pool: Please enable Managed Service Identity and try again


This article is contributed. See the original author and article here.

Another quick post about an error and its mitigation.


As publicly documented (as of today, October 2020), managed identities are not currently supported on SQL pools under a Synapse workspace. I mention the date because this may change, but so far that is the current scenario.


Suppose you are using an ADF pipeline (inside or outside a Synapse workspace) and connecting to a SQL pool under a Synapse workspace.


 


You may hit this issue:


Managed Service Identity has not been enabled on this server.


Or full error message:


[Screenshot: full error message on the sink SQL pool]


 


 


This limitation is documented under the following links:


 https://docs.microsoft.com/en-us/answers/questions/58750/data-flow-error-in-azure-synapse-analytics-workspa.html


 


Are there any limitations with COPY using Synapse workspaces (preview)?


Authenticating using Managed Identity (MSI) is not supported with the COPY statement or PolyBase (including when used in pipelines). You may run into a similar error message:


com.microsoft.sqlserver.jdbc.SQLServerException: Managed Service Identity has not been enabled on this server. Please enable Managed Service Identity and try again.


https://docs.microsoft.com/en-us/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest


 


Here is the scenario: you configured a SQL Server user to connect to the database, trying to avoid the managed identity problem, but if you enable sink staging it still hits this error. And if you do not enable staging while loading a large number of rows, the run takes a long time because the inserts are executed row by row.


 


Here is the reason:


Staged copy by using PolyBase: To use this feature, create an Azure Blob Storage linked service or Azure Data Lake Storage Gen2 linked service with account key or managed identity authentication that refers to the Azure storage account as the interim storage.


https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse


 


 


Workaround:


 



  • Change the authentication method of the staging store linked service to account key or service principal auth. The point is to avoid managed identity while still enabling the staged copy (see the sketch after this item for retrieving the account key).
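If you go the account key route, the key can be retrieved with the Azure CLI; a minimal sketch (the resource group and storage account names are hypothetical, substitute your own):

# Hypothetical names for illustration
az storage account keys list \
  --resource-group rg-data \
  --account-name mystagingacct \
  --query "[0].value" -o tsv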


Step by Step:


 


Success Scenario


This is the storage account configuration with the account key. Next, I will enable the sink staging using this storage account, which is also the source of my data.


[Screenshot: linked service configured with account key authentication]


 


[Screenshot: staging enabled on the sink using this storage account]


 


And... it worked:


[Screenshot: pipeline run succeeded]


Failure configuration:


 


Here the storage account is using managed identity authentication.


[Screenshot: linked service configured with managed identity authentication]


Once I tried to run it, it failed as follows:


[Screenshot: pipeline run failed with the managed identity error]


 


Thanks for the case collaboration to the ADF team (Yassine Mzoughi and Darius Ciubotariu) and the Synapse team (Jackie Huang and Olga Guzheva).


 


That is it!


Liliam C Leme


UK engineer.