This article is contributed. See the original author and article here.
Using Log Analytics to monitor Azure Red Hat OpenShift audit logs
Introduction
By default, Azure Red Hat OpenShift clusters offer a way to monitor audit logs through OpenShift Logging, which involves installing the OpenShift Elasticsearch Operator and OpenShift Cluster Logging. Although that solution works well, it does not integrate with Azure Monitor, Microsoft's monitoring solution, nor does it let you centralize the audit logs of several clusters in a single place.
To walk through a custom solution, you need an Azure Red Hat OpenShift cluster. If you don't have one, you can follow the tutorial Creating an Azure Red Hat OpenShift cluster; remember to use the pull secret option so the cluster can download the Red Hat images (Red Hat Pull Secret).
Fluent Bit is a log collection and forwarding system (event logs and messages) developed as part of the Fluentd ecosystem. It is a lightweight, efficient solution designed to collect, filter, and forward logs in distributed environments.
Azure Red Hat OpenShift
After the cluster is created, let's look at the folders where the cluster's audit logs are stored.
Log in to the cluster. You can get the cluster address from the Azure portal, on the Overview tab of the cluster, by clicking the Connect button.
Click the URL and sign in with kubeadmin as the username and the corresponding password.
Installing Fluent Bit on the cluster
To install Fluent Bit on Azure Red Hat OpenShift we need to set the security context constraints (SCC). For that, you must be logged in via the CLI with a user that has the cluster-admin permission.
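If you are not already signed in from a terminal, the sketch below shows one way to do it; the cluster name, resource group, and API URL are placeholders, and you can also copy a ready-made login command from the OpenShift web console (click your username and then Copy login command).
# Retrieve the kubeadmin credentials for the cluster (names are placeholders)
az aro list-credentials --name <cluster-name> --resource-group <resource-group>
# Log in to the cluster API with the kubeadmin user
oc login https://api.<cluster-domain>:6443 -u kubeadmin -p <kubeadmin-password>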
By default, the Fluent Bit installation deploys the DaemonSet only on the worker nodes, but to access the audit logs we need it running on the master nodes. To do that, let's create a file called values.yaml with the following content:
# kind -- DaemonSet or Deployment
kind: DaemonSet

# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1

image:
  repository: cr.fluentbit.io/fluent/fluent-bit
  # Overrides the image tag whose default is {{ .Chart.AppVersion }}
  tag: "latest-debug"
  pullPolicy: Always

# Configure podsecuritypolicy
# Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
# from Kubernetes 1.25, PSP is deprecated
# See: https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes
# We automatically disable PSP if Kubernetes version is 1.25 or higher
podSecurityPolicy:
  create: false
  annotations: {}

openShift:
  # Sets Openshift support
  enabled: true
  # Creates SCC for Fluent-bit when Openshift support is enabled
  securityContextConstraints:
    create: true
    annotations: {}

resources: {}
#   limits:
#     cpu: 100m
#     memory: 128Mi
#   requests:
#     cpu: 100m
#     memory: 128Mi

## only available if kind is Deployment
ingress:
  enabled: false
  className: ""
  annotations: {}
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  hosts: []
  # - host: fluent-bit.example.tld
  extraHosts: []
  # - host: fluent-bit-extra.example.tld
  ## specify extraPort number
  #   port: 5170
  tls: []
  # - secretName: fluent-bit-example-tld
  #   hosts:
  #     - fluent-bit.example.tld

## only available if kind is Deployment
autoscaling:
  vpa:
    enabled: false
    annotations: {}
    # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory
    controlledResources: []
    # Define the max allowed resources for the pod
    maxAllowed: {}
    # cpu: 200m
    # memory: 100Mi
    # Define the min allowed resources for the pod
    minAllowed: {}
    # cpu: 200m
    # memory: 100Mi
    updatePolicy:
      # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates
      # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto".
      updateMode: Auto

## How long (in seconds) a pod needs to be stable before progressing the deployment
##
minReadySeconds:

## How long (in seconds) a pod may take to exit (useful with lifecycle hooks to ensure lb deregistration is done)
##
terminationGracePeriodSeconds:

priorityClassName: ""

env: []
# - name: FOO
#   value: "bar"

# The envWithTpl array below has the same usage as "env", but is using the tpl function to support templatable string.
# This can be useful when you want to pass dynamic values to the Chart using the helm argument "--set <variable>=<value>"
# https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function
envWithTpl: []
# - name: FOO_2
#   value: "{{ .Values.foo2 }}"
#
# foo2: bar2

## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file
config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/kube-apiserver/*.log
        multiline.parser docker, cri
        Tag audit.kube-apiserver.*
        DB /tmp/kube_apiserver.db
        Mem_Buf_Limit 50MB
        Refresh_Interval 10
        Skip_Empty_Lines On
        Buffer_Chunk_Size 5M
        Buffer_Max_Size 50M
        Skip_Long_Lines Off

    [INPUT]
        Name tail
        Path /var/log/openshift-apiserver/*.log
        multiline.parser docker, cri
        Tag audit.openshift-apiserver.*
        DB /tmp/openshift-apiserver.db
        Mem_Buf_Limit 50MB
        Refresh_Interval 10
        Skip_Empty_Lines On
        Buffer_Chunk_Size 5M
        Buffer_Max_Size 50M
        Skip_Long_Lines Off

    [INPUT]
        Name tail
        Path /var/log/oauth-apiserver/*.log
        multiline.parser docker, cri
        Tag audit.oauth-apiserver.*
        DB /tmp/oauth-apiserver.db
        Mem_Buf_Limit 50MB
        Refresh_Interval 10
        Skip_Empty_Lines On
        Buffer_Chunk_Size 5M
        Buffer_Max_Size 50M
        Skip_Long_Lines Off

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name stdout
        Match *

  ## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/upstream-servers
  ## This configuration is deprecated, please use `extraFiles` instead.
  upstream: {}

  ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L

  # This allows adding more files with arbitrary filenames to /fluent-bit/etc by providing key/value pairs.
  # The key becomes the filename, the value becomes the file content.
  extraFiles: {}
  # upstream.conf: |
  #   [UPSTREAM]
  #       upstream1
  #
  #   [NODE]
  #       name node-1
  #       host 127.0.0.1
  #       port 43000
  # example.conf: |
  #   [OUTPUT]
  #       Name example
  #       Match foo.*
  #       Host bar

# The config volume is mounted by default, either to the existingConfigMap value, or the default of "fluent-bit.fullname"
volumeMounts:
  - name: config
    mountPath: /fluent-bit/etc/fluent-bit.conf
    subPath: fluent-bit.conf
  - name: config
    mountPath: /fluent-bit/etc/custom_parsers.conf
    subPath: custom_parsers.conf
If you want to compare the file we are creating with the official Fluent Bit one, you can check the Fluent Bit repository. Note that the YAML file above also contains the [INPUT] sections for the following Azure Red Hat OpenShift audit log folders:
/var/log/kube-apiserver
/var/log/openshift-apiserver
/var/log/oauth-apiserver
In the configuration above we are also using the image with the "latest-debug" tag, which lets you inspect the Fluent Bit pod from its console after the installation; for example, run the commands below to list the audit log folders:
ls /var/log/kube-apiserver
ls /var/log/openshift-apiserver
ls /var/log/oauth-apiserver
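If you prefer to run those checks from your own terminal instead of the pod console, a minimal sketch is shown below; the logging namespace and the fluent-bit release label match the ones used later in this article, but the exact names depend on how you install the chart.
# Grab the name of one Fluent Bit pod and list an audit log folder from inside it
POD=$(kubectl get pods -n logging -l app.kubernetes.io/instance=fluent-bit -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -n logging -- ls /var/log/kube-apiserver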
To install it, make sure you are in the same folder where the values.yaml file was created and run the command below.
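Assuming you are using the official Fluent Bit Helm chart, the installation command looks roughly like the sketch below; the release name fluent-bit and the logging namespace are the ones referenced in the rest of this walkthrough.
# Add the official Fluent Bit Helm repository and install the chart with our custom values
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace -f values.yaml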
Right after the installation, go to your cluster dashboard, select Workloads and then Pods in the side menu, and set the project to logging. You should see the same number of pods as the cluster has worker nodes; in my case, three worker nodes.
With the current configuration we are only reading the log files and printing them to the pod's terminal (stdout).
Creating a Log Analytics workspace
To send the logs to Azure Monitor we need a Log Analytics workspace; if you don't have one yet, follow the steps to create it.
After the Log Analytics workspace is created, open it and, in the side menu under Settings, click Agents.
Save the Workspace ID and the Primary Key, because we will use them in the new configuration.
Now we need to add one more output to the Fluent Bit ConfigMap configuration.
Go to the ConfigMap (fluent-bit), add the output below at the end of the file, and click Save.
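The exact block is not reproduced here, but based on the environment variable names (WorkspaceId, SharedKey) and the custom table name (AuditOpenshift_CL) used later in this article, the Fluent Bit Azure Log Analytics output would look roughly like this; the Match pattern is an assumption.
[OUTPUT]
    Name azure
    Match *
    Customer_ID ${WorkspaceId}
    Shared_Key ${SharedKey}
    Log_Type AuditOpenshift
The WorkspaceId and SharedKey values come from environment variables, so we also need a Kubernetes Secret in the logging namespace that holds the workspace credentials. A sketch of how to create it (the secret name and key names match the ones used below; the values are placeholders):
kubectl create secret generic fluentbit-secret -n logging \
  --from-literal=WorkspaceId=<your-workspace-id> \
  --from-literal=SharedKey=<your-primary-key>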
After creating the secret, you can verify it by running the command below.
kubectl get secret fluentbit-secret -n logging
Now we need to reference the secret in the DaemonSet. In the side menu, select DaemonSets, click fluent-bit, and select Environment.
Click Add from ConfigMap or Secret.
Add the SharedKey and WorkspaceId environment variables and, under Select a resource, pick the secret created earlier (fluentbit-secret). Leave it as in the image below and click Save.
For the new configuration to take effect, you need to delete the current pods; run the command below.
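The delete command itself is not shown above; a minimal sketch using the same label selector and namespace as the rest of this article:
# Delete the current Fluent Bit pods so the DaemonSet recreates them with the new configuration
kubectl delete pods -l app.kubernetes.io/instance=fluent-bit -n logging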
After deleting the pods, you can check that the new pods are being created with the new configuration by running the command below.
kubectl get pods -l app.kubernetes.io/instance=fluent-bit -n logging
# Use the name of the first pod in the list and run the command below to see that pod's logs.
kubectl logs fluent-bit-xxxx -n logging | grep "customer_id="
It will show log lines like the ones below, indicating that the output to the Log Analytics workspace was sent successfully.
Viewing the audit logs in the Log Analytics workspace
Go to the Azure portal, search for Log Analytics workspaces in the search bar, and select from the list the Log Analytics workspace created in the previous steps.
In the side menu, select Logs, as in the image below.
A queries window will open; close it.
Under Tables, expand Custom Logs; there should be a table named AuditOpenshift_CL.
Go to the query field, enter the query below, and click Run.
AuditOpenshift_CL
| take 100
After running the query, it will show all the audit logs that are being sent to the Log Analytics workspace.
Conclusion
In summary, Fluent Bit is a powerful tool for collecting and sending logs to an Azure Log Analytics workspace. With the right configuration, you can collect logs from the various services and applications running in your Kubernetes (OpenShift) cluster and send them to a Log Analytics workspace for analysis and monitoring. In addition, Fluent Bit is highly configurable and can be customized to meet your specific needs. We hope this guide helps you get started with Fluent Bit in your Kubernetes (OpenShift) environment.
This article is contributed. See the original author and article here.
Introduction
Microsoft Fabric is a powerful unified analytics solution that allows users to seamlessly connect to various data sources, including Azure Databricks, and create insightful reports and visualizations without the need to move the data.
In this tutorial, we’ll show you step-by-step how to connect to Azure Databricks generated Delta Tables and create a report in Microsoft Fabric.
By the end of this tutorial, you’ll have the knowledge needed to read Azure Databricks generated Delta Tables from a Microsoft Fabric using Notebook and SQL query. You will also learn how to create a Power BI report that can help drive business decisions. So, let’s get started!
Prerequisites
Before you connect, complete these steps:
An Azure Databricks workspace
An ADLS Gen2 account to store a delta table and a parquet file
Enter the connection details (sign in if required) and select Next.
In this case, I am using ‘Organization Account’ Authentication kind and hence need to sign in.
Connection: Existing connections for the specified storage location will appear in the drop-down. If none exist, create a new connection.
Connection name: The Azure Data Lake Storage Gen2 connection name.
Authentication kind: The supported models are Organizational account, Account Key, Shared Access Signature (SAS), and Service principal. For more information, see ADLS shortcuts.
Enter the Shortcut Name and Sub path details and then click Create.
Shortcut Name: The name of your shortcut.
URL: The Azure Data Lake Storage Gen2 URL from the last page.
Sub Path: The directory where the delta table resides.
The shortcut pointing to the delta table (fact_internet_sales) created in the last section will now appear as a delta table under Tables in the Explorer pane.
Click on the table (fact_internet_sales) and the data in the table will show up.
Read the data from Notebook – Lakehouse mode
The data in the table can now be queried directly from the notebook in Fabric.
Right-click on the table or click on ellipses (…) next to the table, click Open in notebook and then New notebook.
A new notebook will appear with a query automatically generated to read the data in the table.
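For reference, the auto-generated cell typically looks similar to the sketch below; the lakehouse name is a placeholder for whatever you named yours.
df = spark.sql("SELECT * FROM <your_lakehouse>.fact_internet_sales LIMIT 1000")
display(df)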
Select the Run Cell button or press Ctrl+Enter to execute the query and view the data.
Read the data using SQL – SQL Endpoint mode
The data in the table can also be queried directly using T-SQL query in Fabric.
Browse to the SQL Endpoint created as part of Lakehouse provisioning from your workspace.
After opening SQL Endpoint from the workspace, expand the database, schema and tables folder in the object Explorer to see all tables listed.
Right-click on the table (fact_internet_sales) or click on ellipses (…) next to the table, click New SQL Query and then Select TOP 100 rows.
The script is automatically generated and executed to show the data in the table; you can also click Run to execute the query yourself.
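The generated statement is usually a simple TOP query along these lines (the schema and table names follow the example used in this walkthrough):
SELECT TOP (100) *
FROM [dbo].[fact_internet_sales];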
Visualise the data in Fabric using Power BI
The data in the delta table can now be accessed and analysed in Power BI. You can either create a new dataset or use the default dataset created as part of lakehouse provisioning for a new report. For more information, see Direct Lake in Power BI and Microsoft Fabric.
Using new dataset
If you are in the Lakehouse mode, click on New Power BI dataset.
If you are in the SQL endpoint mode, click on New Power BI dataset from Reporting tab.
In the New Dataset dialog, select the table to be included in the dataset and click Confirm.
The dataset is automatically saved in the workspace and then opened. In the web modelling experience page, click New Report.
In the report authoring page, drag or select the attributes from the Data pane to the left-hand side pane to include them in the visualization.
Using default dataset
Select your workspace and open the default dataset.
On the dataset page, click on Start from scratch to create a new report.
In the report authoring page, drag or select the attributes from the Data pane to the left-hand side pane to include them in the visualization.
Summary
In conclusion, this guide provides a seamless solution for accessing Azure Databricks generated delta tables from Microsoft Fabric and visualizing the data in Power BI without the need to move the data. By following the steps outlined in this guide, you can easily connect to your delta tables and extract valuable insights from your data. With the power of Azure Databricks and Microsoft Fabric combined, you can take your data analysis to the next level and make informed decisions that drive business success.
Give it a try and let me know if this was helpful.
This article is contributed. See the original author and article here.
Microsoft MVPs continue to help the community by sharing their profound technical knowledge and demonstrating leadership. In this article, we spotlight Rudy Ooms, an Enterprise Mobility MVP from the Netherlands, and explore his insightful troubleshooting story that made a significant impact in addressing a real-world challenge.
Rudy found a question in the Microsoft Management Customer Connection Program (MM CCP) regarding the 0x800705B4 error. The individual posting the question referenced a previous blog post by Rudy where he shared the same error, however, it was not exactly the same case. Therefore, he quickly decided to step in to help the person who posted this question.
“The moment I noticed the question popping up in the MM CCP, I became aware of the fact that the problem he was experiencing wasn’t a simple one and was giving him and his company a lot of issues and headaches. So, at that point in time, I really needed to help him out. When taking a closer look at the issue, I quickly understood that the Microsoft support desk could find it difficult to solve this case. Why? If you can’t reproduce it yourself it can become difficult to solve it and that’s where I come in”.
The issue was a device sync failure that impacted new Autopilot enrollments due to error 0x800705b4. Rudy promptly set up his test device, started troubleshooting with his familiar tool Fiddler, went through a lot of trial and error (including using the SyncML tool), and discovered that an illegal XML character was the culprit. By removing the assignment of the policy and the Autopilot settings within the registry, he successfully mitigated the issue; devices synced again and new enrollments worked smoothly. For a comprehensive insight into his challenges and the adjustments he undertook, we highly recommend delving into the detailed narrative on his blog post. Rudy mentions that he also helped another individual on the WinAdmins Discord channel facing the exact same issue.
“After digging into the issue and finding the culprit within 24 hours, yeah, that felt pretty good,” Rudy says, looking back on his contribution. Despite not having access to the questioner’s company tenant, he resolved the situation quickly thanks to his desire to help, as well as his ability to sharpen his own troubleshooting skills by identifying the cause of the problem. This experience taught him a couple of things: “It reminds me that you can learn new stuff every day… even when you thought you knew everything about SYNCML. And the MS community is strong and always willing to help – and so am I!”
The Product Group at Microsoft recognizes this wonderful troubleshooting story. Juanita Baptiste, Senior Program Manager, said of Rudy’s and the rest of the MVP community’s contributions, “The MVP community is more than just customers to bounce ideas off of. They are experts in their areas and cover more scenarios than we think. I have changed the design specs and features based on feedback from this community and it’s helped us build a better product. We can’t help everyone at the level of detail that MVPs (like Rudy) do, but the fact that they have each other for support is an immense help to us!”
This story is a great example of a community helping each other. Whether you are an MVP or not, everyone can help others by sharing their unique expertise and experience. Now it is your turn. For example, the following sites (not limited to just these, of course) can help you make a difference right away, starting today!
This article is contributed. See the original author and article here.
We’ve heard a lot about GitHub Copilot, but maybe more specifically about LLMs, large language models and how they can be used to generate code. You might even have used ChatGPT.
GitHub Copilot Chat is a product built by GitHub. It relies on a specific type of LLM, a so-called codex model, and integrates with your IDE. It’s a bit like a pair programmer, but one that has seen a lot of code and can help you write yours.
So what will we do today? We’ll use GitHub Copilot Chat to solve a problem. The problem we have is Rock Paper Scissors. It’s a small game that most people know the rules to. It’s also an interesting problem as it’s small and contained, but still has some complexity to it.
Where do we start? The interesting part here is that there are many ways to start, which I discovered speaking to my colleague Cynthia. What we’re doing today is based on the excellent challenge module by Cynthia.
– Domain description. In this version, we write a domain description with all the rules and concepts in it and feed that to our AI pair programmer.
– One comment at a time. Here, we write a comment and gradually work our way towards a solution. In this approach we tackle one concept and rule at a time.
For the sake of this article, we’ll use the domain description approach.
Solving the problem: use domain description
Luckily for us, the training module already has a domain description; here it is:
Game rules:
Rock beats scissors (breaking it).
Scissors beat paper (cutting it).
Paper beat rock (wrapping it).
The minigame is multiplayer and the computer plays the role of your opponent and chooses a random element from the list of elements
Interaction with the player:
The console is used to interact with the player.
The player can choose one of the three options: rock, paper, or scissors.
The player can choose whether to play again.
The player should be warned if they enter an invalid option.
The player is shown their score at the end of the game.
Validation of user input:
At each round, the player must enter one of the options in the list and be informed if they won, lost, or tied with the opponent.
The minigame must handle user inputs, putting them in lowercase and informing the user if the option is invalid.
By the end of each round, the player must answer whether they want to play again or not.
1. Create a new file called rockpaperscissor.py and paste the domain description at the top like so:
# rockpaperscissor.py
# Game rules:
# Rock beats scissors (breaking it).
# Scissors beat paper (cutting it).
# Paper beat rock (wrapping it).
# The minigame is multiplayer and the computer plays the role of your opponent and chooses a random element from the list of elements
# Interaction with the player:
# The console is used to interact with the player.
# The player can choose one of the three options: rock, paper, or scissors.
# The player can choose whether to play again.
# The player should be warned if they enter an invalid option.
# The player is shown their score at the end of the game.
# Validation of user input:
# At each round, the player must enter one of the options in the list and be informed if they won, lost, or tied with the opponent.
# The minigame must handle user inputs, putting them in lowercase and informing the user if the option is invalid.
# By the end of each round, the player must answer whether they want to play again or not.
# game loop where the player and computer selects rock, paper, or scissors, round is repeated until player quits by typing "quit"
Now we’ve provided context to Copilot in two ways, both by providing a domain description and by providing a file name *rockpaperscissor.py*. In the next step, let’s see what Copilot can do for us.
2. Type the following code:
quit = False
while
Here’s where I stopped typing code because Copilot suggested the following:
not quit:
and for every Enter I pressed, Copilot suggested code. So after having typed Enter a number of times and accepting all completions I arrived at this code:
quit = False
while not quit:
    # player input
    player = input("rock, paper, or scissors? ")
    # computer input
    import random
    computer = random.choice(["rock", "paper", "scissors"])
    # game logic
    if player == computer:
        print("tie")
    elif player == "rock":
        if computer == "paper":
            print("computer wins")
        else:
            print("player wins")
    elif player == "paper":
        if computer == "scissors":
            print("computer wins")
        else:
            print("player wins")
    elif player == "scissors":
        if computer == "rock":
            print("computer wins")
        else:
            print("player wins")
    else:
        print("invalid input")
    # ask player if they want to play again
    play_again = input("play again? (y/n) ")
    if play_again == "n":
        quit = True
    elif play_again == "y":
        quit = False
    else:
        print("invalid input")
        quit = True
Improve the code, step 1
This code works, but it’s not the greatest code to look at. It doesn’t keep track of scores and it’s not very readable. So let’s see if we can improve it.
Open the Chat component by selecting it at the bottom left of your left side menu (it looks like a chat window). In it, type the prompt "improve code". It should generate a response like so:
One improvement you could make is to add a score counter for the player and computer, so that the player can see their score at the end of the game. Here's an updated version of the code with a score counter:
followed by the code:
# Game rules:
# Rock beats scissors (breaking it).
# Scissors beat paper (cutting it).
# Paper beat rock (wrapping it).
# The minigame is multiplayer and the computer plays the role of your opponent and chooses a random element from the list of elements
# Interaction with the player:
# The console is used to interact with the player.
# The player can choose one of the three options: rock, paper, or scissors.
# The player can choose whether to play again.
# The player should be warned if they enter an invalid option.
# The player is shown their score at the end of the game.
# Validation of user input:
# At each round, the player must enter one of the options in the list and be informed if they won, lost, or tied with the opponent.
# The minigame must handle user inputs, putting them in lowercase and informing the user if the option is invalid.
# By the end of each round, the player must answer whether they want to play again or not.

# initialize score counters
player_score = 0
computer_score = 0

# game loop where the player and computer selects rock, paper, or scissors, round is repeated until player quits by typing "quit"
quit = False
while not quit:
    # player input
    player = input("rock, paper, or scissors? ")
    # computer input
    import random
    computer = random.choice(["rock", "paper", "scissors"])
    # game logic
    if player == computer:
        print("tie")
    elif player == "rock":
        if computer == "paper":
            print("computer wins")
            computer_score += 1
        else:
            print("player wins")
            player_score += 1
    elif player == "paper":
        if computer == "scissors":
            print("computer wins")
            computer_score += 1
        else:
            print("player wins")
            player_score += 1
    elif player == "scissors":
        if computer == "rock":
            print("computer wins")
            computer_score += 1
        else:
            print("player wins")
            player_score += 1
    else:
        print("invalid input")
    # ask player if they want to play again
    play_again = input("play again? (y/n) ")
    if play_again == "n":
        quit = True
    elif play_again == "y":
        quit = False
    else:
        print("invalid input")
        quit = True

# print final scores
print("player score:", player_score)
print("computer score:", computer_score)
In the above code, we’ve added a score counter for the player and computer, so that the player can see their score at the end of the game. We’ve also added a print statement to print the final scores.
Improve the code, step 2, use your know-how as a developer
What else can we do? At this point, we can start to refactor the code. One improvement we can make is to add a so-called rule matrix. A rule matrix is a matrix that contains all the rules of the game. It’s a bit like a lookup table. Let’s provide that as a prompt to Copilot.
Here’s what we’re typing in Copilot chat:
improve code by adding a rule matrix to compare player and computer choices
which results in the following output from Copilot:
# Game rules:
# Rock beats scissors (breaking it).
# Scissors beat paper (cutting it).
# Paper beat rock (wrapping it).
# The minigame is multiplayer and the computer plays the role of your opponent and chooses a random element from the list of elements
# Interaction with the player:
# The console is used to interact with the player.
# The player can choose one of the three options: rock, paper, or scissors.
# The player can choose whether to play again.
# The player should be warned if they enter an invalid option.
# The player is shown their score at the end of the game.
# Validation of user input:
# At each round, the player must enter one of the options in the list and be informed if they won, lost, or tied with the opponent.
# The minigame must handle user inputs, putting them in lowercase and informing the user if the option is invalid.
# By the end of each round, the player must answer whether they want to play again or not.

# initialize score counters
player_score = 0
computer_score = 0

# rule matrix
rules = {
    "rock": {"rock": "tie", "paper": "lose", "scissors": "win"},
    "paper": {"rock": "win", "paper": "tie", "scissors": "lose"},
    "scissors": {"rock": "lose", "paper": "win", "scissors": "tie"}
}

# game loop where the player and computer selects rock, paper, or scissors, round is repeated until player quits by typing "quit"
quit = False
while not quit:
    # player input
    player = input("rock, paper, or scissors? ")
    # computer input
    import random
    computer = random.choice(["rock", "paper", "scissors"])
    # game logic
    if player not in rules:
        print("invalid input")
    else:
        result = rules[player][computer]
        if result == "win":
            print("player wins")
            player_score += 1
        elif result == "lose":
            print("computer wins")
            computer_score += 1
        else:
            print("tie")
    # ask player if they want to play again
    play_again = input("play again? (y/n) ")
    if play_again == "n":
        quit = True
    elif play_again == "y":
        quit = False
    else:
        print("invalid input")
        quit = True

# print final scores
print("player score:", player_score)
print("computer score:", computer_score)
Now you can see how the code looks a lot better: no more nested if statements. We’ve also added a rule matrix (technically it added a dictionary, not a matrix, but it’s still a big improvement over the code that was there before) to compare player and computer choices. Of course, we can still improve the code. For example, we could extract a function to print the final scores. We should also add tests and documentation before we call it a day.
Conclusion
Sometimes using an AI assistant may look like magic: we could give it a full domain description and get working code back. That can be a valid approach if the problem is small enough. Even then, you need to improve the code in stages.
Another learning is that the more knowledge you have of code in general and the problem domain, the better you can guide the AI assistant to arrive at the solution you want.
Compare the difference between the first and last attempt, the last attempt is much better, don’t you think?
AI assistants aren’t here to replace us, yet, but to help us solve problems. We still need to guide them, and we still need to know what we’re doing. But they can help us solve problems faster and better.
This article is contributed. See the original author and article here.
Intellectual property (IP) theft can wreak havoc on the supply chain and defense, stripping away an organization’s, or nation’s, competitive advantage. Hackers don’t necessarily pose the biggest threat to IP. Insider threats from employees, contractors, and partners pose just as big a threat (some might argue bigger) from both accidental and deliberate data loss. IP comes in many common forms, such as documents and spreadsheets, but images and CAD files pose just as big a risk and are more difficult to protect with traditional security tools. It is possible to protect and watermark CAD files stored and shared in Microsoft 365 applications to help prevent data loss and IP theft and meet Defense compliance requirements such as CMMC. Read on to learn more.
WHAT ARE CAD FILES?
If you’re not familiar with them, computer-aided design (CAD) files are used for designing models or architecture plans in a 2D or 3D rendering. CAD files are used for creating architectural designs, building plans, floor plans, electrical schematics, mechanical drawings, technical drawings, blueprints, or for special effects in movies. They are used by every organization related to any type of manufacturing or construction, including those who manufacture tools and equipment for other manufacturers.
2D CAD files are drawings that mimic ‘old school’ drafting work. Most often these still exist as blueprints for structures where the height isn’t as critical for the design or is a standard dimension, however the layout within that 2-dimensional space is critical. For example, how do we fit our desks, chairs, tables, etc., into that space? The problem with portraying complicated 3-dimensional objects like machine parts in only 2 dimensions is that they need to be rendered from multiple angles so that all critical dimensions are portrayed properly. This used to result in a lot of drawings of the same part, but from different angles.
3D files on the other hand can be portrayed in 3 dimensions and can be rotated in space and even ‘assembled’ with other parts. This can help Engineers discover issues (such as a pipe or shaft that has been accidentally routed through another part) much more quickly so they can be resolved long before production begins.
Much like image files, there are several types of CAD file extensions (.DWG, .DXF, .DGN, .STL) and the file type is dependent on the brand of software used to create them.
CHALLENGES TO CAD FILE PROTECTION
Since most CAD files contain intellectual property (IP), protecting them is critical to preserve competitive advantage, avoid malicious theft and corporate espionage, and stop sharing with unauthorized audiences. Depending on the industry, different regulations and protection policies may also need to be applied to protect CAD files. For example, in the defense industry, files that contain controlled unclassified information (CUI) must be classified and labelled as CUI under CMMC 2.0, NIST 800-171, and NIST 800-53 regulations.
Out-of-the-box tools are often limited in their ability to classify and tag CAD files to meet these stringent requirements. Additionally, CAD files are often shared and collaborated on using file shares or collaboration tools like SharePoint and Teams. Without the ability to properly classify and tag information, Defense suppliers are at risk of losing valuable Government and Defense contracts through accidental sharing or malicious users.
5 TIPS TO PROTECT CAD FILES IN M365
Protecting CAD files is no different to protecting any other sensitive documents in your care. We recommend you:
Identify Sensitive CAD Files – The first step to any data protection strategy is knowing where your sensitive CAD files exist. If you don’t, you should consider using a scanning tool to find any files and apply appropriate protections.
Restrict Access – Ensure only users and partners who require access to sensitive CAD files are authorized to access them. Then follow tip #3.
Restrict Actions Authorized Users Can Take – Just because a user should be able to access a document, should they have carte blanche? For example, should they be able to edit it, download it or share it? Should they be able to access it on a public Wi-Fi or at an airport? You need to be able to apply fine grain access and usage controls to prevent data misuse and loss.
Digitally Watermark files to provide a visual reminder of the sensitivity level of files and add information about the user for tracking purposes in the event of a leak. For Defense applications you’ll want to add CUI markings to your watermark such as a CUI Designation Indicator.
Track Access – Keep an audit log of access and actions authorized users have taken with sensitive CAD files (print, save, download, email, etc.) and have a process in place to identify any suspicious activity (multiple downloads, access in the middle of the night, from a suspicious IP address, etc.).
DYNAMICALLY CLASSIFY, PROTECT AND WATERMARK CAD FILES WITH NC PROTECT
NC Protect from Microsoft Partner and MISA member, archTIS, provides advanced data-centric security across Microsoft applications to enhance information protection for cloud, on-premises and hybrid environments. The platform empowers enterprises to automatically find, classify and secure sensitive data, and determine how it can be accessed, used and shared with granular control using attribute-based access control (ABAC) and security policies.
NC Protect offers a range of unique capabilities to restrict access to, protect and watermark CAD files, as well as other documents, in Microsoft’s document management and collaboration application. Capabilities include:
Classification
NC Protect automatically applies Microsoft Information Protection (MIP) sensitivity labels based on the contents of the file.
Apply additional meta data or classification as required. For example, tag files as CUI.
Encryption
NC Protect leverages Microsoft Information Protection (MIP) sensitivity labels and Rights Management System (RMS) to encrypt CAD and other files.
Encrypt files at rest or in motion (e.g., email attachments)
Watermarking
Watermark CAD files with any attributes such as user name, date, time, etc. to deter photographing and remind users of the sensitivity of the file.
Automatically embed CUI Designator data into a 2D or 3D CAD file as a secure digital watermark, including: Name, Controlled By, Category, Distribution/Limited Dissemination Control, and POC.
Add CUI designator markings.
Restrict Access & Actions
Protected CAD files can only be opened and modified by authorized users based on predefined policies.
Force read-only access for internal and guest users with a built-in Secure Viewer to prevent Copy, Paste, Print, Save As and Download capabilities.
Policies can also control if and who protected CAD files can be shared with.
Hide sensitive CAD files from the document view of unauthorized users in file sharing applications.
Tracking
Track access to all protected files as well as actions users have taken with the file.
Export user actions and logs to Microsoft Sentinel, Splunk or a CSV file for further analysis and upstream actions.
Supported Platforms & File types:
Protects CAD file across all Microsoft 365 applications: SharePoint, Teams, OneDrive, Exchange email, Office 365, as well as SharePoint Server and Windows file shares.
EASY TO CONFIGURE ACCESS, PROTECTION AND WATERMARK POLICIES
Applying these policies and controls with NC Protect from archTIS is easy to do using the product’s built-in policy builder.
For example, the policy below allows NC Protect to deny guest users the ability to see that CAD files even exist within the network. With this policy activated, a guest will not see a .dwg file, even if it resides in a container or Team that they have full access to. Consider how easy it is to share access to SharePoint, OneDrive and Teams with external users and how critical collaboration with external vendors can be for the business.
Users often place sensitive data into places that they don’t realize are accessible by people outside of the organization. This policy allows NC Protect to apply a blanket restriction on guests and mitigate the potential loss of sensitive intellectual property.
For more granular protection, the policy below forces any users who are not part of the Engineering Department to be limited to read-only access to CAD files. Even if someone from the Engineering group gives them access to these files, if their department is not Engineering, NC Protect will automatically invoke the Secure Reader when they try to open them. In this case the department attribute is being used, but NC Protect can use any attribute, such as existing group memberships, title or any other custom attribute, to determine how users can interact with these files.
NC Protect’s built-in Secure Reader enforces ‘true read only’ access. Users can’t download, copy or even print a protected file. NC Protect can also watermark the CAD file (or any other type of file) so if a user screenshots the drawing, the photo will contain their name, date and ‘CONFIDENTIAL’ as seen in the image below.
About the author
Irena Mroz, Chief Marketing Officer, archTIS
As CMO, Irena Mroz is responsible for leading archTIS’ product marketing, branding, demand generation and public relations programs. A technical cybersecurity marketer, Mroz has spent her 25+ year career empowering start-ups and public software companies to exceed growth objectives through successful product positioning, demand generation, high profile events and product evangelism. Mroz holds a Bachelor of Science in Mass Communications from Boston University’s College of Communication.
About archTIS
archTIS is a global provider of innovative software solutions for the secure collaboration of sensitive information. The company’s award-winning data-centric information security solutions protect the world’s most sensitive content in government, defense, supply chain, enterprises and regulated industries through attribute-based access and control (ABAC) policies. archTIS’ complementary NC Protect software enhances Microsoft security capabilities with fine-grain, dynamic ABAC policies to control access to and add unique data protection capabilities to secure sensitive data across Microsoft 365 apps, SharePoint on-premises and Windows file shares. The company is a Microsoft Partner and a member of the Microsoft Intelligent Security Association. For more information, visit archtis.com or follow @arch_tis.
This article is contributed. See the original author and article here.
At Microsoft Inspire 2023, we announced that we are bringing together Microsoft Dynamics 365 Marketing and Microsoft Dynamics 365 Customer Insights into one offer, enabling organizations to unify and enrich their customer data to deliver personalized, connected, end-to-end customer journeys across sales, marketing, and service. We are retaining the existing “Dynamics 365 Customer Insights” name to encompass this new offer of both applications. Today, we’re excited to share that the new Dynamics 365 Customer Insights is now generally available for purchase.
For our existing Dynamics 365 Marketing and Dynamics 365 Customer Insights customers, this change signals an acceleration into our “better together” story, where we’ll continue to invest in new capabilities that will enable stronger, insights-based marketing, making it easier for marketers and data analysts to glean insights from customer data. Beginning September 1, 2023, customers who had the previous license for Marketing and/or Customer Insights will only see a product name change in the product; there will be no changes to the core product functionality due to the consolidation of the two products.
The new Customer Insights offers your organization flexibility to meet your business needs, with access to both the customer data platform (Customer Insights—Data) and real-time marketing with customer journey orchestration (Customer Insights—Journeys). The new pricing enables customers to unlock access to both applications and then buy the capacity they need. This gives you, our customers, the power of choice—where you can start with one or both applications and further invest in the capabilities that you’d like to scale. If you’re an existing customer of Microsoft Dynamics 365 Sales or Microsoft Dynamics 365 Customer Service, you can use Customer Insights as the foundation of your customer experience (CX) stack by achieving greater customer understanding and orchestrating contextual customer journeys across every touchpoint of the business.
Achieve greater personalization with Copilot in Dynamics 365 Customer Insights
With the Customer Insights customer data platform, you can gain a holistic view of your customers, anticipate needs, and discover growth opportunities. And with real-time marketing and journey orchestration, you can deliver personalized, in-the-moment customer-triggered engagements that are relevant and contextual. With Copilot in Customer Insights, you can save time by using natural language to create or enhance target segments. You can also nurture creativity by turning topics into suggested copy, helping marketers move from concept to completion faster.
With the power of Copilot in Dynamics 365 Customer Insights, included at no additional cost, your data analysts and marketers can be more productive and increase their focus on personalizing the customer journey.
Our latest investments in copilot capabilities include the ability to:
Get help with content development by providing a short list of key points, and tailor with a tone of voice that matches your brand and campaign. Utilize the generated content suggestions as-is or build upon them in email, social posts, and more.
Customer success with Dynamics 365 Customer Insights: Lynk & Co
Let’s take a look at an organization that is using Dynamics 365 Customer Insights today.
Lynk & Co is a Sweden-based company that is transforming the way people use cars by offering a simple and flexible experience where customers can choose to buy, borrow, or subscribe to a vehicle. With ambitions to disrupt the automobile industry and launch its business in seven markets in less than two years, Lynk & Co needed to quickly build an infrastructure that could support multi-channel customer engagement and drive highly personalized experiences. The company chose Microsoft Dynamics 365 for its out-of-the-box and customizable tools and the ability it provided to build in modules to create unique processes and prioritize specific customer experiences. Within 18 months, Lynk & Co was able to ramp up a significant digital presence in Belgium, France, Germany, Italy, Netherlands, Spain, and Sweden, as well as open social clubs designed to bring the company’s online brand to life through community-focused events.
The company uses Dynamics 365 Customer Insights to capture actionable customer data and link it with operational data within its cars. This is helping the company create seamless, highly personalized experiences for every customer from their first engagement to every time they use the app, drive a car, have service, or visit a club. It also makes it easy to support customers if they want to move from simply borrowing a car, to a monthly subscription, or to a car purchase.
With the customer journey orchestration features in Dynamics 365 Customer Insights, customers get personalized messaging and image content. Beyond that, the system sends right-timed information on specific-to-the-customer club event invitations. These events vary from country to country but have included everything from unplugged live music nights and art openings to meet-ups for running and cycling groups, community talks on social issues, or workshops on how to upcycle old sneakers.
Engagement data from these events feeds back into the platform to further personalize member experiences across all lines of business, across all communication channels—and helps Lynk & Co learn and iterate.
Learn more and get started today with Dynamics 365 Customer Insights
To learn more about Dynamics 365 Customer Insights, take the guided tour or start a free 30-day trial. If you have questions about the merging of Dynamics 365 Marketing and the previous Dynamics 365 Customer Insights, including pricing, please reference the FAQ on Microsoft Learn. If you missed Inspire 2023, you can watch the session by Emily He (Corporate Vice President, Business Applications Marketing), on demand, to see the announcements for Business Applications, including the latest innovations in Dynamics 365 Customer Insights.
The new Dynamics 365 Customer Insights
We’re bringing together Marketing and Customer Insights into one offer.
This article is contributed. See the original author and article here.
Introduction
Supply Chain Management lets you manage, track, and verify compliance with export control restrictions prior to confirming, picking, packing, shipping, and invoicing sales orders. The new advanced export control functionality allows you to manage your export control policies using a native Microsoft Dataverse solution that interfaces directly with your Supply Chain Management instance. Supply Chain Management then enforces compliance with international trade regulations by consulting your export-control policies in real time.
The export control Dataverse solution allows you to keep track of the many different rules and policies, expressing these rules, including complex ones, using formulas similar to those in Microsoft Excel. Because it is a Dataverse-based solution, your other systems can also access your export control rules thanks to the hundreds of connectors available for Dataverse.
The solution implements five primary concepts:
Jurisdictions
A jurisdiction is a set of codes, categories, restrictions, exceptions and licenses. It represents a set of configurations that apply to incoming requests, such as the US International Traffic in Arms Regulations (ITAR), the US Export Administration Regulations (EAR), or EU Dual Use.
You can also create your own jurisdictions for your company’s internal policies.
Codes and categories
The codes that make up a jurisdiction are often referred to as Export Control Classification Numbers (ECCNs).
An example of an export control classification number is 7A994, which is defined by the United States Export Administration Regulations (US EAR) export control jurisdiction. This classification number applies to “Other navigation direction finding equipment, airborne communication equipment, all aircraft inertial navigation systems not controlled under 7A003 or 7A103, and other avionic equipment, including parts and components.” According to the US EAR, ECCN 7A994 is part of the Anti-Terrorism (AT) control category.
Restrictions
Each export control jurisdiction defines a set of restrictions under which export control actions should be disallowed unless an exception exists.
Exceptions
Exceptions allow an action even though a restriction would otherwise block it. Common types of exceptions include licenses, blanket exemptions, and corporate policies.
Exceptions are defined the same way as restrictions, but they also add extra requirements that apply when the exception is used, such as the need to display a message to the user or to print text and licenses on documents.
Licenses
Licenses are the specific permissions that allow trading an item or set of items in a given context. It is typically the authorities that issue the licenses.
This article is contributed. See the original author and article here.
Microsoft Learn has a passionate and inspiring community to support your learning journey wherever it may take you. Here we highlight a few of our global learners who have shared their stories about making successful career changes using Microsoft Learn. Our learners inspire us with their perseverance, ingenuity, and the courage to reinvent themselves (Zoologist to Functional Consultant!). Many had to make a significant career change due to the pandemic, proving to us all that if they could make a switch during such a challenging time, we can all be successful with the right learning path and helpful resources in place. Each career changer started by identifying their goal and strategically working toward it–and you can do the same.
Here are a few of their stories:
Introducing Manoj Bora: Hospitality industry to IT Pro
Photo of Manoj Bora
Manoj came to Microsoft Learn from 20 years in the hospitality industry. When the pandemic struck, he lost his job, and the peace of mind that comes with a stable career. In March 2020, he was forced to start over, finding odd jobs and doing manual labor to provide for his family. At that point, Manoj decided to turn to the tech industry to take advantage of the many career opportunities he found available. He explored careers as a developer, software testing, SAP, and Oracle, but it was Microsoft Dynamics 365 which appealed to him most as he had transferable skills. He dove deep into Dynamics 365 but quickly realized he needed structured and practical training – this led him to Microsoft Learn. Gradually with the help of self-paced learning content, community discussion forums, user groups, and Microsoft organized events, Manoj was able to establish his new career in IT. Today, he works as a Dynamics 365 Customer Engagement Functional Consultant.
“Even if you do not have a computer science degree or any IT expertise,” Manoj points out, “if you put your focus on learning something new, you can achieve it with amazing Microsoft Learn content, the helpful Microsoft community, and the evolution of low-code, no-code Power Platform.”
Key insight from Manoj: His advice to other learners is to identify your learning goals ahead of time and pursue all possibilities, because Microsoft Learn offers so many resources and learning paths.
Introducing Ikenna Udeani: Student to Data Analyst
Photo of Ikenna Udeani
Ikenna was fresh out of college when he discovered Microsoft Learn. Our platform played a crucial role in helping Ikenna secure his first job immediately after graduation. Microsoft Learn was instrumental in preparing him to earn the Microsoft Certified: Azure Data Fundamentals certification, which he showcased on his LinkedIn profile. This caught the attention of hiring managers, and as a result, he was offered a job—but he didn’t stop there. Ikenna went on to earn six additional certifications, while also working towards two more new certifications.
“I can’t overstate the impact that Microsoft Learn has had on my professional growth and development,” says Ikenna. “I would highly recommend it to anyone looking to enhance their skills and advance their career in the tech industry.”
Key insight from Ikenna: His favorite feature on Microsoft Learn is the sandbox environment, which allowed him to get interactive experience using various Azure features for free and to practice his skills.
Introducing Nikhil More: Zoology college educator to Functional Consultant
Photo of Nikhil More
Our learners come to Microsoft Learn with diverse backgrounds—Nikhil’s includes a master’s degree in zoology and experience in ecological research and teaching. Like many others, the pandemic brought unexpected changes to his life, and he lost his job as a college teacher. That’s when he discovered Microsoft Learn, and quickly realized that the platform had a well-structured approach aligned with the job he aspired to achieve. The continuous learning opportunities provided by Microsoft Learn ensure that he’s always at the forefront of industry trends and equipped to deliver exceptional results.
“It has empowered me to bridge the gap between my biology background and a thriving career in technology,” says Nikhil. “The platform has not only provided me with the knowledge I needed but also bolstered my sense of confidence and purpose. With Microsoft Learn as my guide, I am excited to see where my Dynamics 365 career takes me next.”
Key insight from Nikhil: One of his favorite aspects of Microsoft Learn is that it provides a structured learning path, offering modules and courses that gradually build your knowledge. It feels like you’re embarking on an exciting journey, with each module representing a new stop along the way.
Has Microsoft Learn helped you on your journey to building skills and achieving your goals? Fill out our form for a chance to have your story featured. We can’t wait to hear from you!
This article is contributed. See the original author and article here.
This week, we launched a new playlist on the Microsoft Azure YouTube channel that includes all episodes of our interview series, Microsoft SaaS Stories: Learn from Software Experts. This series highlights partners at various stages of their software as a service (SaaS) journey and their unique experiences building, publishing, and growing on the Microsoft commercial marketplace.
In my role as an Engineering Manager at Microsoft, I’ve seen our software partners take a variety of approaches to SaaS. The most successful companies were the ones that spent the time to understand the scope and steps within the journey to SaaS, both on the business and technical sides. As my team helped companies through this journey to build resilient, scalable, secure applications, they each learned unique insights that enabled their success. I saw a significant opportunity to connect companies at different stages in this journey so that they could share and learn from others to be some of the most successful on our platform and in the market.
Here is a summary of each episode we’ve produced so far:
Episode 1: Basis Theory. CTO Brandon Weber shares how they built confidence with customers by creating an easy-to-use SaaS platform that scales while remaining reliable and secure. Learn the challenges they encountered running a 24/7 service while evolving the service and handling customer growth.
Episode 2: Zammo. In this episode with Zammo’s Stacey Kyler and Nicholas Spagnola, we learn about their significant growth in business and much faster time to close based on having their products in the marketplace. They share their experience building for Azure and running a No-Code Conversational AI Software SaaS platform.
Episode 3: Wolfpack. In this episode with Wolfpack’s Koen den Hollander, we learn how they built their SaaS application for retail customers, and how connecting engineers directly to customers enables them to deliver value at scale.
Episode 4: Vocean. In this episode, we explore how Vocean built their SaaS application that changes the way organizations make decisions. They share the importance of taking time to plan, learn, and listen to experts around you before rushing to build features.
Episode 5: Access Infinity. In this episode, we talk to Access Infinity’s Managing Director, Keshav Nagaraja and explore how Access Infinity saw an opportunity in their consulting business to create platforms that help their customers at scale, and how they came up with a pricing model that drives positive user behaviors.
Episode 6: Sage. In this episode, we learn how Sage embraced the opportunities to shift their application to SaaS, how they used SaaS as an opportunity to simplify their pricing model, and how they use a simple set of principles to guide complex changes.
Are you a partner with a SaaS solution on marketplace who is interested in sharing your SaaS story? Comment below and our team will reach out to learn more about your story!
This article is contributed. See the original author and article here.
Copilot AI is reshaping customer service just like it’s changing every other aspect of business operations. Before now, customer service managers had no way to gauge the results of their efforts to incorporate AI in their practices. Copilot analytics in Dynamics 365 Customer Service fills that gap, offering deep insights into the operational impact of an organization’s investment in AI-enhanced customer service.
Key metrics and insights
To view Copilot analytics, go to Customer Service historical analytics and select the Copilot tab. Here, comprehensive metrics and insights provide a holistic perspective on the value that Copilot adds to your customer service operations.
Usage metrics
Daily Active Users: The number of individual agents who engaged with Copilot at least once in a day over the specified date range
Total Copilot AI Responses: The aggregate number of responses that Copilot provided in a day over the specified date range
Number of Responses Used: The number of Copilot responses from which an agent copied text
Percentage of Copilot AI Responses Used: The proportion of Copilot responses from which agents copied text
Productivity metrics: Cases
Total Cases Resolved: The aggregate number of cases that agents resolved while Copilot was available
Number of Cases Resolved Using Copilot AI: The number of cases that agents resolved with Copilot’s help
Percentage of Cases Resolved Using Copilot AI: The proportion of cases that agents resolved with Copilot’s help
Average Days to Close for Cases: The average number of days it took agents to resolve cases, with and without Copilot’s help
Case Throughput: The average number of cases that agents resolved per day, with and without Copilot’s help
Productivity metrics: Conversations
Total Conversations: The aggregate number of agent-customer interactions that involved Copilot
Number of Conversations Using Copilot AI: The number of completed conversations in which Copilot played a role
Percentage of Conversations Using Copilot AI: The proportion of conversations in which Copilot played a role
Average Conversation Handle Time: The average duration of conversations in which Copilot played a role
Conversation Throughput: The average number of completed conversations (excluding emails and voice interactions) per day in which Copilot played a role
Satisfaction metrics
Agent Ratings: Agents’ ratings of Copilot’s responses, both positive and negative
The potential of Copilot analytics
Copilot analytics gives leaders of organizations that use Dynamics 365 Customer Service a comprehensive toolset to assess the impact of Copilot on their customer support functions. By analyzing key metrics, supervisors and managers can make informed decisions, optimize processes, and elevate levels of customer satisfaction.
It’s important to recognize that Copilot analytics is a transformative asset for customer service organizations. As you explore its capabilities, you’ll find that its insights have the potential to drive improvements in the productivity of your customer service teams.
AI solutions built responsibly
Enterprise grade data privacy at its core. Azure OpenAI offers a range of privacy features, including data encryption and secure storage. It allows users to control access to their data and provides detailed auditing and monitoring capabilities. Copilot is built on Azure OpenAI, so enterprises can rest assured that it offers the same level of data privacy and protection.
Responsible AI by design. We are committed to creating responsible AI by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are putting those principles into practice across the company to develop and deploy AI that will have a positive impact on society.
Learn more about Copilot analytics
Watch a video to learn how Copilot AI searches company knowledge sources and generates optimized responses in a single click.