This article is contributed. See the original author and article here.
Introduction:
JavaScript, the powerhouse behind many applications, sometimes faces limitations when it comes to memory in a Node.js environment. Today, let’s dive into a common challenge faced by many applications – the default memory limit in Node.js. In this blog post, we’ll explore how to break free from these limitations, boost your application’s performance, and optimize memory usage.
Understanding the Challenge:
Node.js applications are confined by a default memory limit set by the runtime environment. This can be a bottleneck for memory-intensive applications. Fortunately, Node.js provides a solution by allowing developers to increase this fixed memory limit, paving the way for improved performance.
Checking Current Heap Size:
Before making any tweaks, it’s crucial to grasp your Node.js application’s current heap size. The code snippet below, saved in a file named `heapsize.js`, uses the V8 module to retrieve the system’s current heap size:
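The snippet referenced here is a short script that reads the heap statistics from Node's built-in v8 module. A minimal sketch of what such a script might look like (the exact fields the original prints may differ) is shown below:

// heapsize.js - a minimal sketch that prints the V8 heap statistics for the running process.
// heap_size_limit reflects the default limit unless --max-old-space-size overrides it.
const v8 = require('v8');

const stats = v8.getHeapStatistics();
const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(2);

console.log(`Total heap size (MB): ${toMB(stats.total_heap_size)}`);
console.log(`Total available size (MB): ${toMB(stats.total_available_size)}`);
console.log(`Heap size limit (MB): ${toMB(stats.heap_size_limit)}`);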
For blessed images (images developed by Microsoft), SSH is enabled by default; however, your app needs to be up and running for the SSH session to work.
On the Azure App Service Configuration blade, under General settings, ensure that SSH is set to On.
After successfully loading your SSH session, navigate to /home and create a file named heapsize.js using the command `touch heapsize.js`. Edit the file using the vi editor with the command `vi heapsize.js`.
Paste the code snippet above (press `i` first to enter insert mode), then save and exit by pressing Esc, typing `:wq`, and pressing Enter.
Run the file using `node heapsize.js`; the output shows the current heap statistics.
Note: If you encounter a v8 module-not-found error, try installing it from the SSH session with `npm install v8`.
Adjusting Memory Limits:
To increase the memory limit for your Node.js application, use the `--max-old-space-size` flag when starting your script. The value following this flag denotes the maximum memory allocation in megabytes.
For instance, running the following command increases the heap size:
node --max-old-space-size=6000 heapsize.js
This modification results in an expanded total heap size, furnishing your application with more memory resources.
Testing on Azure app service:
In the earlier section on running the script on Azure App Service, we learned how to check the current heap size. Now, for the same file, let's increase the heap size and test it.
Execute the command `node --max-old-space-size=6000 heapsize.js` in the WebSSH session. You can observe the difference with and without the `--max-old-space-size=6000` argument.
Automating Memory Adjustments with App Settings on Azure App Service:
For a more streamlined approach, consider adding the `--max-old-space-size` value directly as an app setting. This setting allows you to specify the maximum memory allocation for your Node.js application.
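Based on the NODE_OPTIONS entry that appears in the docker run command later in this post, the app setting likely looks like the following (the value shown is only a placeholder):

Name:  NODE_OPTIONS
Value: --max-old-space-size=<value>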
In the above snippet, replace `<value>` with the desired maximum memory allocation (in MB) for your application.
Sample screenshot:
How do you choose the right heap size value?
Selecting an appropriate heap size value is contingent upon the constraints of the app service plan. Consider a scenario where you’re utilizing a 4-core CPU, and concurrently running three applications on the same app service plan. It’s crucial to recognize that the CPU resources are shared among these applications as well as certain system processes.
In such cases, a prudent approach involves carefully allocating the heap size to ensure optimal performance for each application while taking into account the shared CPU resources and potential competition from system processes. Balancing these factors is essential for achieving efficient utilization of the available resources within the specified app service plan limits.
For example:
Let’s consider an example where you have a 4-core CPU and a total of 32GB memory in your app service plan. We’ll allocate memory to each application while leaving some headroom for system processes.
Total Memory Available: 32GB
System Processes Overhead: 4GB. Reserve 4GB for the operating system and other system processes.
Memory for Each Application: (32GB - 4GB) / 3 ≈ 9.33GB per application. Allocate approximately 9.33GB to each of the three applications running on the app service plan.
Heap Size Calculation for Each Node.js Application: Suppose you want to allocate 70% of each application's memory to the Node.js heap. Heap size per application = 9.33GB * 0.7 ≈ 6.53GB.
Total Heap Size for All Applications: 3 * 6.53GB ≈ 19.59GB. This is the combined heap size for all three Node.js applications running on the app service plan.
Remaining Memory for Other Processes: 32GB - 19.59GB ≈ 12.41GB.
The remaining memory can be used by other processes and for any additional requirements.
These calculations are approximate and can be adjusted based on the specific needs and performance characteristics of your applications. It’s essential to monitor the system’s resource usage and make adjustments accordingly.
Validating on Azure App Service:
To validate, check docker.log under /home/LogFiles and inspect the docker run command, as shown below; you can see the NODE_OPTIONS app setting appended to it.
docker run -d --expose=8080 --name xxxxxx_1_d6a4bcfd -e NODE_OPTIONS=--max-old-space-size=6000 appsvc/node:18-lts_20240207.3.tuxprod
Conclusion:
Optimizing memory limits is pivotal for ensuring the seamless operation of Node.js applications, especially those handling significant workloads. By understanding and adjusting the memory limits, developers can enhance performance and responsiveness. Regularly monitor your application’s memory usage and adapt these settings to strike the right balance between resource utilization and performance. With these techniques, unleash the full potential of your JavaScript applications!
This article is contributed. See the original author and article here.
1 | What is #MarchResponsibly?
March is known for International Women’s Day – but did you know that women are one of the under-represented demographics when it comes to artificial intelligence predictions and data for machine learning? And did you know that Responsible AI is a key tool to ensure that the AI solutions of the future are built in a safe, trustworthy, and ethical manner that is representative of all demographics? As we celebrate Women’s History Month, we will take this opportunity to share technical resources, Cloud Skills Challenges, and learning opportunities to build AI systems that behave more responsibly. Let’s #MarchResponsibly together.
2 | What is Responsible AI?
Responsible AI principles guide organizations and AI developers in building AI systems that are less harmful and more trustworthy.
Fairness issues occur when the AI system favors one group of people over another, even when they share similar characteristics. Inclusiveness is another area to examine: is the AI system intentionally or unintentionally excluding certain demographics? Reliability and safety require us to consider outliers and all the possible things that could go wrong; otherwise, abnormal AI behavior can lead to negative consequences. Accountability is the notion that the people who design and deploy AI systems must be accountable for how their systems operate. We recently saw this in the news when the U.S. Congress summoned social media leaders to a hearing on how their algorithms are influencing teenagers to inflict self-harm and lose their lives. At the end of the day, who compensated the victims or their families for the loss or grief? Transparency is particularly important for AI developers to find out why AI models are making mistakes or not meeting regulatory requirements. Finally, security and privacy are an evolving concern: when an AI system exposes or accesses unauthorized confidential information, this is a privacy violation.
3 | Why is Responsible AI Important?
Artificial Intelligence is at the center of many conversations. On a daily basis we see more news headlines on the positive and negative impact of AI. As a result, there is unprecedented pressure on governments to regulate AI, and governments are responding. The trend has moved from building traditional machine learning models to Large Language Models (LLMs), yet the AI issues remain the same. At the heart of everything is data. The underlying data collected is based on human behavior and the content we create, which often includes biases, stereotypes, or a lack of adequate information. In addition, data imbalance, where certain demographics are over- or under-represented, is often a blind spot that leads to bias favoring one group versus another. Lastly, there are other data risks that can have undesirable AI effects, such as using unauthorized or unreliable data, which can lead to infringement and privacy lawsuits. Using data that is not credible yields erroneous AI outcomes and bad decision-making based on AI predictions. As a business, not only is your AI system untrustworthy, but it can ruin your reputation. Other societal harms AI systems can inflict include physical or psychological injury and threats to human rights.
4 | Empowering Responsible AI Practices
Having practical responsible AI tools for organizations and AI practitioners is essential to reducing the negative impacts of AI systems. For instance, the metrics used to debug and evaluate AI performance are usually numeric values. Human-centric tools for analyzing AI models are beneficial in revealing which societal factors impact erroneous outputs and predictions. To illustrate, the Responsible AI dashboard empowers data scientists and AI developers to discover areas where there are issues.
Addressing responsible AI with Generative AI applications is another area where we often see undesirable AI outcomes. Understanding prompt engineering techniques and being able to detect offensive text or image, as well as adversarial attacks, such as jailbreaks are valuable to prevent harm.
Resources to build and evaluate LLM applications quickly and efficiently are much needed. We’ll be sharing services that organizations and AI engineers can adopt in their machine learning lifecycle to implement, evaluate, and deploy AI applications responsibly.
5 | How can we integrate Responsible AI into our processes?
Data scientists, AI developers, and organizations understand the importance of responsible AI; however, the challenge they face is finding the right tools to help them identify, debug, and mitigate erroneous behavior from AI models.
Researchers, organizations, the open-source community, and Microsoft have been instrumental in developing tools and services to empower AI developers. Traditional machine learning model performance metrics are based on aggregate calculations, which are not sufficient for pinpointing AI issues that are human-centric. In this #MarchResponsibly initiative you will gain knowledge on:
Identifying and diagnosing where your AI model is producing errors
Exploring data distribution
Conducting fairness assessments
Understanding what influences or drives your model’s behavior
Preventing jailbreaks and data breaches
Mitigating AI harms
6 | How can you #MarchResponsibly?
Join in the learning and communications – each week we will share our Responsible AI learnings!
Share, Like or comment.
Celebrate Women making an impact in responsible AI.
This article is contributed. See the original author and article here.
In this fifth and final blog post in our MLOps Production series, guest blogger Martin Bald, Senior Manager Developer Community from one of our startup partners Wallaroo.AI will go through model workload orchestration and show how to continue the journey for building scale and ease of management for deploying sustainable and value producing models into production.
Introduction
Throughout this blog series we have seen how we can easily and quickly get our ML models into production, validate them for desired outcomes, proactively monitor for data drift and take swift proactive action to ensure we have optimal model output. As we scale and deploy more models into this production process across multiple cloud environments, Data Scientists and ML Engineers are burdened with spending too many valuable cycles on the data plumbing and repetitive tasks needed just to get models to run and produce business reports – often using tools not designed for AI workloads.
Data engineers are also spending far too many cycles supporting data scientists as they try to run and analyze ML pipelines instead of building robust upstream data pipelines to ensure business continuity. In attempting to achieve value from their AI efforts, they soon find bottlenecks preventing them from realizing the production demands they need.
ML Workload Orchestration flow works within 3 tiers:
ML Workload Orchestration: User-created custom instructions that provide automated processes that follow the same steps every time without error. Orchestrations contain the instructions to be performed, uploaded as a .ZIP file with the instructions, requirements, and artifacts.
Task: Instructions on when to run an Orchestration as a scheduled Task. Tasks can be Run Once, where a single Task Run is created, or Run Scheduled, where a Task Run is created on a regular schedule based on the Kubernetes cronjob specifications. If a Task is Run Scheduled, it will create a new Task Run every time the schedule parameters are met until the Task is killed.
Task Run: The execution of a task. Task Runs validate business operations and identify any unsuccessful runs. If the Task is Run Once, then only one Task Run is generated. If the Task is Run Scheduled, then a new Task Run will be created each time the schedule parameters are met, with each Task Run having its own results and logs.
Fig 1.
We can manage our models and pipelines and control how we deploy and undeploy resources and invite collaborators to work on projects with us.
We can see from Fig 1 above that at its core an orchestration is a Python file, or one or more Python files to be exact. These files can contain any kind of processing code and any other dependencies we need; essentially, they contain references to one or more deployed pipelines. This allows us to schedule runs of these files and reference the deployed pipelines as needed.
It also fully supports the connections that we make, so we can have as many of those connections as we need. We often see people using these automations to take live input feeds into the pipelines and write the results to another external data source or file store.
Once these are set up, we can wrap them all in an orchestration and register that orchestration in the platform. We can then create what are called Tasks, or Runs, of this Orchestration. These can be run on demand (ad hoc) or scheduled to run on a regular basis, for example every minute, day, week, or month.
This means that we can easily define, automate, and scale recurring production AI workloads that ingest data from predefined data sources, run inferencing, and deposit the results to a predefined location efficiently and easily with added flexibility for the needs of your business.
This example provides a quick set of methods and examples regarding Wallaroo Connections and Wallaroo ML Workload Orchestration.
In this example we will step through:
Create a Wallaroo connection to retrieving information from an external source.
Upload Wallaroo ML Workload Orchestration.
Run the orchestration once as a Run Once Task and verify that the information was saved in the pipeline logs.
Schedule the orchestration as a Scheduled Task and verify that the information was saved to the pipeline logs.
The first step is to import the various libraries we’ll use for this example.
import wallaroo
from wallaroo.object import EntityNotFoundError, RequiredAttributeMissing
# to display dataframe tables
from IPython.display import display
# used to display dataframe information without truncating
import pandas as pd
pd.set_option('display.max_colwidth', None)
import pyarrow as pa
import time
# Used to create unique workspace and pipeline names
import string
import random
# make a random 4 character suffix
suffix= ''.join(random.choice(string.ascii_lowercase) for i in range(4))
display(suffix)
The next step is to connect to Wallaroo through the Wallaroo client and set up the variables we will use. The Python library is included in the Wallaroo install and available through the Jupyter Hub interface provided with your Wallaroo environment.
Note: If logging into the Wallaroo instance through the internal JupyterHub service, use wl = wallaroo.Client() as seen below.
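A minimal sketch of that connection and variable setup is shown below; the workspace, pipeline, model, and connection names (and the model file path) are illustrative placeholders rather than the tutorial's exact values:

wl = wallaroo.Client()

# Illustrative placeholder names; the random suffix keeps them unique per run.
workspace_name = f'orchestrationworkspace{suffix}'
pipeline_name = f'orchestrationpipeline{suffix}'
model_name = f'orchestrationmodel{suffix}'
model_file_name = './models/model.onnx'  # placeholder path to the ONNX model file
inference_connection_name = f'external_inference_connection{suffix}'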
The following helper methods are used to either create or get workspaces, pipelines, and connections.
# helper methods to retrieve workspaces and pipelines
def get_workspace(name):
    workspace = None
    for ws in wl.list_workspaces():
        if ws.name() == name:
            workspace = ws
    if workspace is None:
        workspace = wl.create_workspace(name)
    return workspace

def get_pipeline(name):
    try:
        pipeline = wl.pipelines_by_name(name)[0]
    except EntityNotFoundError:
        pipeline = wl.build_pipeline(name)
    return pipeline
Next we will create our workspace and pipeline for the tutorial. If this tutorial has been run previously, then this will retrieve the existing ones, with the assumption they're for use with this tutorial.
We’ll set the retrieved workspace as the current workspace in the SDK, so all commands will default to that workspace.
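A minimal sketch of this step, using the helper methods defined above (set_current_workspace makes the retrieved workspace the default for subsequent SDK calls):

workspace = get_workspace(workspace_name)
wl.set_current_workspace(workspace)
pipeline = get_pipeline(pipeline_name)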
We’ll now upload our model into our sample workspace, then add it as a pipeline step before deploying the pipeline so it’s ready to accept inference requests.
# Upload the model
housing_model_control = (wl.upload_model(model_name,
                                         model_file_name,
                                         framework=wallaroo.framework.Framework.ONNX)
                         .configure(tensor_fields=["tensor"])
                         )
# Add the model as a pipeline step
pipeline.add_model_step(housing_model_control)
Fig 2.
# deploy the pipeline
pipeline.deploy()
Output:
Waiting for deployment. This will take up to 45s……………….ok
Fig 3
We will create the data source connection via the Wallaroo client command create_connection.
We’ll also create a data connection named inference_results_connection with our helper function get_connection that will either create or retrieve a connection if it already exists.
The method Workspace add_connection(connection_name) adds a Data Connection to a workspace. We’ll add connections to our sample workspace, then list the connections available to the workspace to confirm.
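A sketch of those three steps is below; the connection type and details are illustrative assumptions (an HTTP data source), and the exact method name used to list connections may differ in your SDK version:

# Create the connection; the type and the details dict here are illustrative assumptions.
inference_connection = wl.create_connection(inference_connection_name,
                                            "HTTP",
                                            {"host": "https://example.com/inference-data"})

# Attach the connection to the current workspace, then list connections to confirm.
workspace.add_connection(inference_connection_name)
display(workspace.list_connections())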
With the pipeline deployed and our connections set, we will now generate our ML Workload Orchestration. Orchestrations are uploaded to the Wallaroo instance as a ZIP file. Orchestrations are uploaded with the Wallaroo client upload_orchestration(path) method.
We will loop until the uploaded orchestration’s status displays ready.
orchestration = wl.upload_orchestration(path="./remote_inference/remote_inference.zip")
while orchestration.status() != 'ready':
    print(orchestration.status())
    time.sleep(5)
Once an Orchestration has the status ready, it can be run as a task. The task runs options can be scheduled or run once.
Run Once Task
We’ll do both types of task runs; first, we’ll generate a Run Once Task from our orchestration.
Tasks are generated and run once with the Orchestration run_once(name, json_args, timeout) method. Any arguments for the orchestration are passed in as a Dict. If there are no arguments, then an empty set {} is passed.
# Example: run once
import datetime
task_start = datetime.datetime.now()
task = orchestration.run_once(name="simpletaskdemo",
                              json_args={"workspace_name": workspace_name,
                                         "pipeline_name": pipeline_name,
                                         "connection_name": inference_connection_name
                                         })
The list of tasks in the Wallaroo instance is retrieved through the Wallaroo Client list_tasks() method, which returns an array list of task details (Fig 6).
For this example, we will check the status of the previously created task in a loop until it reaches the status started.
while task.status() != "started":
    display(task.status())
    time.sleep(5)
Output:
'pending'
'pending'
'pending'
We can view the inferences from our logs and verify that new entries were added from our task. We can do that with the task logs() method.
In our case, we’ll assume the task once started takes about 1 minute to run (deploy the pipeline, run the inference, undeploy the pipeline). We’ll add in a wait of 1 minute, then display the logs during the time period the task was running.
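A minimal sketch of that wait-and-check, reusing the task_start timestamp captured when the task was created:

# Give the run-once task roughly a minute to deploy the pipeline, run the inference,
# and undeploy the pipeline.
time.sleep(60)
task_end = datetime.datetime.now()

# Pull the pipeline logs for the window in which the task ran.
pipeline.logs(start_datetime=task_start, end_datetime=task_end)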
The other method of using tasks is as a scheduled run through the Orchestration run_scheduled(name, schedule, timeout, json_args). This sets up a task to run on a regular schedule as defined by the schedule parameter in the cron service format.
e.g.
This task runs on the 42nd minute of every hour.
schedule="42 * * * *"
The following schedule runs every day at 12 noon from February 1 to February 15, 2024, and then ends.
schedule="0 0 12 1-15 2 2024"
For our example we will create a scheduled task to run every 5 minutes, display the inference results, then use the Orchestration kill task to keep the task from running any further.
It is recommended that orchestrations that have pipeline deploy or undeploy commands be spaced out no less than 5 minutes to prevent colliding with other tasks that use the same pipeline.
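A sketch of that scheduled task, using the run_scheduled() signature described above, is shown here; the task name and timeout are illustrative values:

# Record the start time so the logs can be filtered to this window later.
scheduled_task_start = datetime.datetime.now()

scheduled_task = orchestration.run_scheduled(name="scheduledinferencedemo",  # illustrative name
                                             schedule="*/5 * * * *",         # every 5 minutes
                                             timeout=120,                    # illustrative timeout
                                             json_args={"workspace_name": workspace_name,
                                                        "pipeline_name": pipeline_name,
                                                        "connection_name": inference_connection_name})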
# wait 420 seconds to give the scheduled event time to finish
time.sleep(420)
scheduled_task_end = datetime.datetime.now()

pipeline.logs(start_datetime=scheduled_task_start, end_datetime=scheduled_task_end)
492 rows × 4 columns
Fig 8.
Finally you can use the below commands to list the scheduled run tasks, and end them using the kill task command.
wl.list_tasks()
scheduled_task.kill()
Conclusion
In this final blog post in our series we have addressed a very common set of challenges that AI teams face with production AI workloads and how to solve them through Model Workload Orchestration. This means that we can easily define, automate, and scale recurring production AI workloads that ingest data from predefined data sources, run inferencing, and deposit the results to a predefined location efficiently and easily.
If you want to try the steps in this blog post series, you can access the tutorials at this link and use the free inference servers available on the Azure Marketplace, or you can download a free Wallaroo.AI Community Edition.
Wallaroo.AI is a unified production AI platform built for Data Scientists and ML Engineers for easily deploying, observing, and optimizing machine learning in production at scale – in any cloud, on-prem, or at the edge.
This article is contributed. See the original author and article here.
Today we kick off the 14th Microsoft Ability Summit, an annual event to bring together thought leaders to discuss how we accelerate accessibility to help bridge the Disability Divide. There are three key themes to this year’s summit: Build, Imagine, and Include.
This article is contributed. See the original author and article here.
We’re excited to announce the expansion of Microsoft’s data residency capabilities by adding content of interactions with Microsoft Copilot for Microsoft 365 to our portfolio of data residency commitments and offerings. We are expanding our product terms and Microsoft 365 data residency offerings to contractually guarantee that we will store the content of your interactions with Copilot for Microsoft 365 in the same country or region in which you store your existing Microsoft 365 content.
Frontline managers have control, at a team level, over the capabilities offered in Microsoft Shifts. Until now, however, Admins couldn’t determine which capabilities frontline managers could modify in Microsoft Shifts.
With our latest Graph API release, now in Public Preview (beta version), Admins will be able to choose whether frontline managers can modify the following capabilities for their teams in Shifts:
Open shifts,
Swap shifts,
Offer shifts,
Time-off requests,
Time-off reasons,
Time clock,
Time clock geolocation, and
Management of schedule groups.
In the example below, the frontline managers at Contoso East don’t have permissions to modify the value of the following settings:
Open shifts,
Swap shift requests,
Offer shift requests,
Time-off requests,
Time-off reasons, and
Time clock.
Scenario
At Contoso Ltd, Microsoft Shifts is being used across their stores. As part of an on-going project to manage time-off reasons in Shifts centrally and in a way that complies with Contoso’s HR policies, the IT department is:
Seeking a way to prevent any type of frontline manager (i.e., managers with either team or schedule owner roles) from changing their team’s time-off reasons.
What is new?
Contoso’s Admin can now remove frontline managers’ ability to add, edit, and delete time-off reasons across all frontline teams using the newly released ShiftsRoleDefinition Graph API.
An Admin needs to:
Get the team IDs for the Microsoft Teams using Shifts; and,
Remove the CanModifyTimeOffReasons parameter from the allowedResourceActions list for both team and schedule owner roles on every team using Shifts.
Once this is completed, frontline managers will no longer see the options to add, edit, or delete time-off reasons in the Shifts settings page.
This article is contributed. See the original author and article here.
Windows 365 Customer Lockbox is now generally available for all organizations with a Microsoft 365 E5 or Office 365 E5 subscription. This security feature ensures that Microsoft cannot access content in your Cloud PCs to do service operations without your explicit approval.
What is Customer Lockbox?
In some cases, Microsoft support engineers may need to access your content to determine the root cause of an issue and address it. Windows 365 Customer Lockbox requires the engineer to request access from you as a final step in the approval workflow.
With Customer Lockbox, you have the option to approve or deny the request for your organization, and provide direct-access control to your content.
Customer Lockbox is included in the Microsoft 365 or Office 365 E5 subscriptions and can be added to other plans that have an Information Protection and Compliance or an Advanced Compliance add-on subscription. See Plans and pricing for more information.
How to use Windows 365 Customer Lockbox
Turn Customer Lockbox requests on or off
You can turn on Customer Lockbox controls in the Microsoft 365 admin center. When you turn on Customer Lockbox, Microsoft must obtain your organization’s approval before accessing any of your tenants’ content.
Using a work or school account that has the global administrator role, go to https://admin.microsoft.com/ and sign in.
Once you select Customer Lockbox, a right-hand column will appear. Check the “Require approval for all data access request” checkbox and press the Save button at the bottom of the column to turn on the feature.
Approve or deny a Customer Lockbox request
Using a work or school account that has either the Global Administrator or the Customer Lockbox access role assigned, go to https://admin.microsoft.com/ and sign in.
Choose Support > Customer Lockbox Requests
A list of Customer Lockbox requests is displayed.
Select the Customer Lockbox request, then choose Approve or Deny.
A green confirmation message about the approval of the Customer Lockbox request will be displayed.
Auditing access
Once just-in-time (JIT) access expires, the troubleshooting ticket is marked as complete. You can then visit compliance.microsoft.com and select Audit under the Solutions category to see what was done during the session. For Windows 365 specific records, under Record types, select Windows365CustomerLockbox.
This article is contributed. See the original author and article here.
In today’s fast-paced business landscape, efficient project planning and insightful execution are essential for success. However, the manual processes involved in project management can often lead to inefficiencies, delays, and increased risks. That’s where Copilot for project comes in, revolutionizing the way organizations approach project management.
With the latest update, this trailblazing feature is Generally Available to all Dynamics 365 Project Operations enabled geographies and languages, ensuring that organizations worldwide can leverage its transformative capabilities. Whether you’re a project manager in a professional services organization or leading projects across various industries, Copilot for project is designed to meet your needs.
Copilot for project empowers users to enhance project management efficiency by generating work breakdown structures, assessing risk registers with suggested mitigations, producing comprehensive project status reports, and enabling natural language commands through the sidecar chat feature.
Copilot for project capabilities
Insightful Project Status Reporting
One of the most time-consuming tasks for project managers is the production of project status reports. Gathering data from multiple sources, summarizing project health dimensions, and highlighting risks are all essential but repetitive tasks that can consume valuable time and resources.
Copilot for project changes the game by automating key components of the project status report, allowing project managers to focus on crafting narrative text and refining project-specific insights. Using Copilot for project, the project manager can produce project status reports that integrate concise summaries of scheduling and financial data, as well as generate insightful content that highlights the overall project progress, financial performance, and schedule performance. There are two types of reports to address the reporting needs of both internal and external stakeholders: an internal report that provides a work summary by resource, along with financial data including estimates and actuals, and an external report that excludes the financial data. All reports are saved and can be recalled with all prior edits maintained.
Efficient Task Planning
Streamline project planning with auto-generated work breakdown structures, saving time and effort in creating project delivery plans. Enter the project name and description, and Copilot will provide the suggested task plan for your project. You can tailor further this task plan to suit your project’s needs.
Risk Assessment and Mitigation Planning
Given the disposition of the project’s scope, schedule, and budget, Copilot assesses risk registers, provides mitigation suggestions, and gauges probabilities for each identified risk.
Call to Action
With Copilot for project, project managers can now achieve significant time savings, especially when juggling multiple projects simultaneously. By eliminating mundane tasks like manual data aggregation, maintaining multiple data pivots for collecting insights, and summarization, project managers can allocate their energy towards strategic decision-making and driving project success.
Overall, Copilot for project represents a significant leap forward in project management efficiency and effectiveness. With its advanced AI capabilities, organizations can optimize project delivery times, reduce costs, increase customer satisfaction, and ultimately drive growth and profitability. Embrace the future of project management with Copilot for project and unlock a world of possibilities for your organization.
Learn More
We are making constant enhancements to our features. To learn more about the Copilot for project feature, visit Copilot for project.
This article is contributed. See the original author and article here.
Microsoft Copilot for Security and NIST 800-171: Access Control
Microsoft Copilot for Security in Microsoft’s US Gov cloud offerings (Microsoft 365 GCC/GCC High and Azure Government) is currently unavailable and does not have an ETA for availability. Future updates will be published to the public roadmap here.
As of this writing we’ve received the Proposed Rule of the Cybersecurity Maturity Model Certification (CMMC) 2.0, and the public comment period ended on February 26. The National Institute of Standards and Technology (NIST) just released their analysis of public comments on the final draft of NIST Special Publication 800-171 Revision 3 (NIST 800-171r3) and initial draft of NIST 800-171Ar3. NIST plans to publish final versions sometime in Spring 2024. These publications are important because one of the primary requirements for CMMC is that organizations will need to implement most, if not all, of NIST 800-171r3’s controls for Level 2 certification.
In the first blog of this series, we looked at the System and Information Integrity family of requirements (3.14) in the draft of NIST 800-171r3, which covers flaw remediation, malicious code protection, security alerts via advisories and directives, and system monitoring. The blog also discussed how Microsoft Copilot for Security (Security Copilot) can help DIB organizations meet these requirements by identifying, reporting, and correcting system flaws more efficiently and effectively. This second blog in the series dives into the very first requirement family: Access Control (3.1).
Early reports indicate organizations are reducing time and resource constraints by deploying Security Copilot in private preview and the early access program. Despite no public timeline on the availability of Security Copilot in Microsoft’s US-sovereign cloud offerings (Microsoft 365 GCC/GCC High and Azure Government), it’s worthwhile exploring how companies in the Defense Industrial Base (DIB) may use these AI-powered capabilities to meet NIST 800-171r3 security requirements, and ultimately defend against identity threats with finite or limited resources.
NOTE: Some requirements, such as 3.1.1 contain seven bullets (a-g) or more, and an entire blog could be written on that one requirement alone. Each section is not exhaustive of the requirement nor the applications of certain technologies. The suggested applications of Microsoft solutions do not guarantee compliance with any regulation nor prevention of an attack or compromise. All images and references are based upon preview experiences and do not guarantee identical experiences in general availability or within the U.S. Sovereign Cloud offerings.
Access Control (3.1.)
One might ask why Access Control holds the prominent first spot in the NIST 800-171 publication. It’s relatively simple – Access Control is alphabetically first. However, this requirement family is arguably one of the most paramount because of the remarkable growth in identity-based attacks and the need for identity architects or teams to work more closely with the Security Operations Center (SOC). Microsoft Entra data noted in the Microsoft Digital Defense Report shows the number of “attempted attacks increased more than tenfold compared to the same period in 2022, from around 3 billion per month to over 30 billion. This translates to an average of 4,000 password attacks per second targeting Microsoft cloud identities [2023]”.
3.1.1. Account Management
It is obviously a great starting point to “a. Define the types of system accounts allowed and prohibited” to access systems that hold Controlled Unclassified Information (CUI) or other sensitive information. Many organizations or their Managed Security Service Provider (MSSP) develop a mapping of privileged accounts and non-privileged accounts within their environment and develop policy based on principles of Least Privilege – which is a requirement to discuss later in this blog. Yet, the power of Microsoft Entra ID and Security Copilot shines most brightly after the security team “define(s)” or “c. Specify(ies) authorized users of the system(s), group(s) and role membership(s), and access authorization(s).”
Microsoft Entra provides rich information for Microsoft Defender for Identity (MDI) and Microsoft Sentinel for “e. Monitor(ing) the use of system accounts.” Yet, Security Copilot increases the utility of this trove of incidents and events further by easily summarizing details about the totality of a user’s authentications, associations, and privileged access as shown in the figure below.
Furthermore, SOC and Identity administrators alike can quickly surface every user in the environment with expired, risky, or dormant accounts. They can also take the next steps to “f. Disable system accounts” when they meet those criteria or modify the identities and/or privileges. Much of this investigation and troubleshooting is done without the need of policy and configuration surfing, nor does the SOC or Identity administrator need to craft a KQL query or PowerShell script from scratch. Security Copilot allows these two roles to do all of this using natural language prompts.
Alex Weinert, VP of Identity Security at Microsoft, recently spoke of the narrowing gap between these two types of administrators, skillsets, and their teams in Episode 2 of The Defender’s Watch. Alex explains, “it’s more nuanced than… relying on your SOC team to catch things that are happening in Identity. Not all Identities are the same. Not all your servers are the same. We want to be making sure the two teams are working together to build a map of what are those critical resources and that there’s a feedback loop… listening to the SOC on the other side understanding what’s happening in the organization and what are we going to do as administrators [given investigation to remediation of an incident can take time]”. Security Copilot can be the accelerant for incidents and intelligence to drive Account Management and identity policy change.
Alex also quipped, “if you’re an Identity Architect go buy your SOC team a pizza and get to know them,” as he expressed the need for collaboration across Identity and SOC teams for access control. Ironically, Domino’s just rolled out unified identity with Microsoft Entra ID.
3.1.2. Access Enforcement
Security Copilot may help organizations day-to-day enforce Microsoft Entra ID access control policies and modify configurations to increase the identity score shown below. An Identity administrator or member of the SOC can also quickly create an audit log, for example, to detect when a new credential is added to an application registration by simply asking Security Copilot for the applicable KQL code. Also, individuals interviewed for CMMC assessments can leverage Security Copilot to quickly surface a summary of activities completed by your Entra ID (active directory) privileged users, identify when changes to Conditional Access policies were made, and more.
When going through a CMMC assessment, an assessor will be looking to determine if approved authorizations for “logical access” to CUI and system resources are enforced. Taking a step away from Security Copilot, it’s important to note that the new MDI Identity Threat Detection and Response (ITDR) dashboard is one of the most elegant ways to show where and how enforcement is taking place, and where it may not be. In a single pane, administrators can see their identity score from Microsoft Secure Score, updated daily, with a quick link to see access control policies and “system configuration settings”; new instances where users have exhibited risky lateral movement; and a summary of privileged identities with a quick link to view the full “list of approved authorizations”.
3.1.3. Information Flow Enforcement
Organizations meet this requirement by managing “information flow control policies and enforcement mechanisms to control the flow of CUI between designated sources and destinations (e.g., networks, individuals, and devices) within systems and between interconnected systems.” Microsoft Purview’s Information Protection label policies along with proper configuration of Data Loss Prevention (DLP) policies can prevent the flow of sensitive information between internal and external users via email, Teams, on-premises repositories and other applications. Security Copilot can share with users the top DLP alerts shown below, give a summary or explanation of an alert, and assist in adjusting policy based upon the alert scenario.
3.1.5 Least Privilege
Applying least privilege to accounts can often be combined with managing the functions they can perform, such as executing code or granting elevated access. Once an organization turns on Microsoft Defender for Cloud and Microsoft Entra ID Privileged Identity Management (PIM) for its resources in Azure or other infrastructure, users can be granted just-in-time access to virtual machines and other resources. Conversely, those same users can lose access based upon suspicious behavior like clearing event logs or disabling antimalware capabilities. Security Copilot can be used in the Microsoft Entra admin portal to guide the administrator on creating notification policies or conduct access reviews for activities like the aforementioned.
Security Copilot may also be used to identify where users have more than ‘just enough access’, or help the administrator create lifecycle workflows where a user’s privileges need modification based on changes in their role or group. On a final note, the draft of NIST 800-171Ar3 specifies that an assessor would possibly need to examine a list of access authorizations and validate where privileges were removed or reassigned during a given period – all of which can be generated in reports aided by Security Copilot.
3.1.11 Session Termination
This requirement has some art along with science. An organization can define “conditions or trigger events that require automatic session termination” by periods of inactivity, time of day, risky behavior, and more. Microsoft Entra ID defaults reauthentication requests to a rolling 90 days but that may be too infrequent for some users whom daily access sensitive data sets, such as an Azure subscription with Windows servers holding CUI. Security Copilot can aid administrators to develop Conditional Access policies based on sign-in frequency, session type (from a managed or non-managed device), or sign-in risk. Also, Security Copilot can be prompted to help a SOC analyst reason over permission analytics to determine the impact of a user who’s exhibiting risky behavior and take subsequent action to terminate a session outside of the normal ‘conditions’.
3.1.16 Wireless Access and 3.1.18 Access Control for Mobile Devices
Whether the endpoint is a laptop or one of the various types of mobile devices, Security Copilot can aid users within the Microsoft Intune admin center to create policies for “usage restrictions, configuration requirements, and connection requirements” when wirelessly accessing systems of record. Below is an example of the embedded Security Copilot experience where we want to create a policy for Windows laptops in our environment.
Example of Security Copilot assisting with Endpoint Management Policies
Users can also ask Security Copilot to summarize an existing policy for devices in the environment, as well as generate or explore Microsoft Entra ID conditional access policies.
“Authoriz[ing] each type of wireless access” or “connection of mobile devices” will require policies that span multiple technologies. In many cases, administrators tasked with creating or managing these policies may not have the combined domain knowledge, yet Security Copilot bolsters individuals where they may possess certain skill gaps.
Meeting NIST 800-171 with Limited Resources
Joy Chik wrote in her blog, 5 ways to secure identity and access for 2024, “Identity teams can use natural language prompts in Copilot to reduce time spent on common tasks, such as troubleshooting sign-ins and minimizing gaps in identity lifecycle workflows. It can also strengthen and uplevel expertise in the team with more advanced capabilities like investigating users and sign-ins associated with security incidents while taking immediate corrective action.”
Microsoft Security Copilot is an advanced security solution that helps companies protect CUI access and prepare for CMMC assessment by elevating the skillset of almost every cybersecurity tool and professional in the organization. It’s also bringing the identity team and the SOC team closer together than ever before. DIB companies working with limited resources or MSSPs struggling to keep up with demand will, both, likely look to creatively deploy AI solutions such as Security Copilot in the near future.
This article is contributed. See the original author and article here.
A new chapter in business AI innovation
As we begin a new year, large companies and corporations need practical solutions that rapidly drive value. Modern customer relationship management (CRM) and enterprise resource planning (ERP) systems fit perfectly into this category. These solutions build generative AI, automation, and other advanced AI capabilities into the tools that people use every day. Employees can experience new, more effective ways of working and customers can enjoy unprecedented levels of personalized service.
If you’re a business leader who has already embraced—or plans to embrace—AI-powered CRM and ERP systems in 2024, you’ll help your organization drive business transformation, innovation, and efficiency in three key ways:
Streamline operations: Transform CRM and ERP systems from siloed applications into a unified, automated ecosystem, enhancing team collaboration and data sharing.
Empower insightful decisions: Provide all employees with AI-powered natural language analysis, allowing them to quickly generate insights needed to inform decisions and identify new market opportunities.
Elevate customer and employee experiences: Personalize customer engagements using 360-degree customer profiles. Also, boost productivity with AI-powered chatbots and automated workflows that free employees to focus on more strategic, high-value work.
The time has come to think about AI as something much more than a technological tool. It’s a strategic imperative for 2024 and beyond. In this new year, adopting CRM AI for marketing, sales, and service and ERP AI for finance, supply chain, and operations is crucial to competing and getting ahead.
2023: A transformative year for AI in CRM and ERP systems
Looking back, 2023 was a breakthrough year for CRM AI and ERP AI. Microsoft rolled out new AI-powered tools and features in its CRM and ERP applications, and other solution providers soon followed. Among other accomplishments, Microsoft launched—and continues to enhance—Microsoft Copilot for Dynamics 365, the world’s first copilot natively built for CRM and ERP systems.
Evolving AI technologies to this point was years, even decades, in the making. However, as leaders watched AI in business gradually gain momentum, many took steps to prepare. Some applied new, innovative AI tools and features in isolated pilot projects to better understand the business case for AI, including return on investment (ROI) and time to value. Others forged ahead and broadly adopted it. All wrestled with the challenges associated with AI adoption, such as issues around security, privacy, and compliance.
In one example, Avanade, a Microsoft solutions provider with more than 5,000 clients, accelerated sales productivity by empowering its consultants with Microsoft Copilot for Sales. Consultants used to manually update client records in their Microsoft Dynamics 365 CRM system and search across disconnected productivity apps for insights needed to qualify leads and better understand accounts. Now, with AI assistance at their fingertips, they can quickly update Dynamics 365 records, summarize emails and meetings, and prepare sales information for client outreach.
In another example, Domino’s Pizza UK & Ireland Ltd. helped ensure exceptional customer experiences—and optimized inventory and deliveries—with AI-powered predictive analytics in Microsoft Dynamics 365 Supply Chain Management. Previously, planners at Domino’s relied on time-consuming, error-prone spreadsheets to forecast demand at more than 1,300 stores. By using intelligent demand-planning capabilities, they improved their forecasting accuracy by 72%. They can also now quickly generate the insights needed to ensure each store receives the right resources at the right times to fill customer orders.
Trends and insights for CRM AI and ERP AI in 2024
All signs indicate that in the years to come organizations will continue to find new, innovative ways to use CRM AI and ERP AI—and that their employees will embrace the shift.
In recent research that looks at how AI is transforming work, Microsoft surveyed hundreds of early users of generative AI. Key findings showed that 70% of users said generative AI helped them to be more productive, and 68% said it improved the quality of their work. Also, 64% of salespeople surveyed said generative AI helped them to better personalize customer engagements and 67% said it freed them to spend more time with customers.1
Looking forward, the momentum that AI in business built in 2023 is expected to only grow in 2024. In fact, IDC predicts that global spending on AI solutions will reach more than USD500 billion by 2027.2
Some of the specific AI trends to watch for in 2024 include:
Expansion of data-driven strategies and tactics. User-friendly interfaces with copilot capabilities and customizable dashboards with data visualizations will allow employees in every department to access AI-generated insights and put them in context. With the information they need right at their fingertips, employees will make faster, smarter decisions.
Prioritization of personalization and user experiences. Predictive sales and marketing strategies will mature with assistance from AI in forecasting customer behaviors and preferences and mapping customer journeys, helping marketers be more creative and sellers better engage with customers. Also, AI-powered CRM platforms will be increasingly enriched with social media and other data, providing deeper insights into brand perception and customer behavior.
Greater efficiencies using AI and cloud technologies. Combining the capabilities of AI-powered CRM and ERP tools with scalable, flexible cloud platforms that can store huge amounts of data will drive new efficiencies. Organizations will also increasingly identify new use cases for automation, then quickly build and deploy them in a cloud environment. This will further boost workforce productivity and process accuracy.
Increased scrutiny of AI ethics. Responsible innovation requires organizations to adhere to ethical AI principles, which may require adjustments to business operations and growth strategies. To guide ethical AI development and use, Microsoft has defined responsible AI principles. It also helps advance AI policy, research, and engineering.
AI innovations on the horizon for CRM and ERP systems
Keep an eye on technological and other innovations in the works across the larger AI ecosystem. For example, watch for continued advancements in low-code/no-code development platforms. With low-code/no-code tools, nontechnical and technical users alike can create AI-enhanced processes and apps that allow them to work with each other and engage with customers in fresh, new ways.
Innovations in AI will also give rise to new professions, such as AI ethicists, AI integrators, AI trainers, and AI compliance managers. These emerging roles—and ongoing AI skills development—will become increasingly important as you transform your workforce and cultivate AI maturity.
To drive transformation with AI in CRM and ERP systems, you should carefully plan and implement an approach that works best for your organization. The following best practices for AI adoption, which continue to evolve, can help guide you:
Strategic implementation: Formulate a long-term AI implementation strategy to empower employees and optimize business processes, emphasizing data-driven culture, relevant skills development, and scalable, user-friendly AI tools in CRM and ERP systems.
Ethical adoption: Adhere to evolving ethical guidelines, starting with AI-enhanced process automation and progressing toward innovative value creation, while ensuring your organization is hyperconnected.
Data quality and security: Maintain high data integrity and security standards, regularly auditing AI training data to avoid biases and ensure trustworthiness.
Alignment with business goals: Align AI initiatives with strategic objectives, measuring their impact on business outcomes, and proactively managing any potential negative effects on stakeholders.
As you and your organization learn more about AI and discover what you can do with it, don’t lose sight of the importance of human and AI collaboration. Strongly advocate for using AI to augment—rather than replace—human expertise and decision-making across your organization. Remember, although employees will appreciate automated workflows and AI-generated insights and recommendations, AI is not infallible. Successful business still depends on people making intelligent, strategic decisions.
The importance of embracing AI in business
Immense opportunities exist for organizations across industries to use AI-powered CRM and ERP systems to accelerate business transformation, innovation, and efficiency. According to Forrester Research, businesses that invest in enterprise AI initiatives will boost productivity and creative problem solving by 50% in 2024.4 Yet, without leaders who are fully engaged in AI planning and implementation, many organizations will struggle to realize AI’s full potential.
Be a leader who prioritizes and champions AI in your business strategies for 2024. Your leadership must be visionary, calling for changes that span across roles and functions and even your entire industry. It must be practical, grounded in purposeful investments and actions. It must be adaptable, remaining open and flexible to shifting organizational strategies and tactics as AI technologies evolve.
Team up with a leader in AI innovation
Wherever your organization is in its AI adoption journey, take the next step by learning more about how AI works with Microsoft Dynamics 365, a comprehensive and customizable suite of intelligent CRM and ERP applications.
With copilot and other AI-powered capabilities in Dynamics 365, your organization can create unified ecosystems, accelerate growth, and deliver exceptional customer experiences. It can also continually improve operational agility while realizing greater productivity and efficiency. Get started today to make 2024 a transformative year for your organization.