This article is contributed. See the original author and article here.
In today’s fast-paced business landscape, efficient project planning and insightful execution are essential for success. However, the manual processes involved in project management can often lead to inefficiencies, delays, and increased risks. That’s where Copilot for project comes in, revolutionizing the way organizations approach project management.
With the latest update, this trailblazing feature is Generally Available to all Dynamics 365 Project Operations enabled geographies and languages, ensuring that organizations worldwide can leverage its transformative capabilities. Whether you’re a project manager in a professional services organization or leading projects across various industries, Copilot for project is designed to meet your needs.
Copilot for project empowers users to enhance project management efficiency by generating work breakdown structures, assessing risk registers with suggested mitigations, producing comprehensive project status reports, and enabling natural language commands through the sidecar chat feature.
Copilot for project capabilities
Insightful Project Status Reporting
One of the most time-consuming tasks for project managers is the production of project status reports. Gathering data from multiple sources, summarizing project health dimensions, and highlighting risks are all essential but repetitive tasks that can consume valuable time and resources.
Copilot for project changes the game by automating key components of the project status report, allowing project managers to focus on crafting narrative text and refining project-specific insights. Using Copilot for project, the project manager can produce project status reports that integrate concise summaries of scheduling and financial data, as well as generate insightful content that highlights the overall project progress, financial performance, and schedule performance. There are two types of reports to address the reporting needs of both internal and external stakeholders: an internal report that provides a work summary by resource, along with financial data including estimates and actuals, and an external report that excludes the financial data. All reports are saved and can be recalled with all prior edits maintained.
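To make the internal/external distinction concrete, here is a minimal sketch of how such a report might be assembled from project data. The field names and shape are illustrative assumptions, not the actual Dynamics 365 Project Operations schema.

```python
# Hypothetical sketch: assembling an internal vs. external project
# status report. Internal reports include financial estimates and
# actuals; external reports omit them, as the article describes.
# All field names here are illustrative, not a real product schema.

def build_status_report(project, audience="internal"):
    """Summarize schedule data, adding financials for internal audiences."""
    report = {
        "project": project["name"],
        "progress_pct": project["progress_pct"],
        "schedule_summary": (
            f"{project['tasks_done']} of {project['tasks_total']} tasks complete"
        ),
    }
    if audience == "internal":
        # Only the internal report carries estimates and actuals.
        report["financials"] = {
            "estimate": project["estimate"],
            "actuals": project["actuals"],
            "variance": project["estimate"] - project["actuals"],
        }
    return report
```

For example, calling `build_status_report(project, "external")` on the same data yields a report safe to share with external stakeholders, since the financial section is simply never attached.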
Efficient Task Planning
Streamline project planning with auto-generated work breakdown structures, saving time and effort in creating project delivery plans. Enter the project name and description, and Copilot will provide a suggested task plan for your project. You can further tailor this task plan to suit your project’s needs.
Risk Assessment and Mitigation Planning
Given the disposition of the project’s scope, schedule, and budget, Copilot assesses risk registers, provides mitigation suggestions, and gauges probabilities for each identified risk.
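A common way to reason about such an assessment is to rank each risk by exposure (probability times impact). The sketch below shows that classic scoring approach; the thresholds and field names are assumptions for illustration, not Copilot's actual model.

```python
# Illustrative risk-register scoring: exposure = probability * impact,
# then bucket into low/medium/high and rank highest first. Thresholds
# are arbitrary assumptions chosen for the example.

def assess_risks(register):
    """Score risks (probability 0-1, impact 1-5) and sort by exposure."""
    assessed = []
    for risk in register:
        exposure = risk["probability"] * risk["impact"]
        level = "high" if exposure >= 3 else "medium" if exposure >= 1.5 else "low"
        assessed.append({**risk, "exposure": exposure, "level": level})
    # Highest-exposure risks surface first, so mitigation effort
    # can be prioritized where it matters most.
    return sorted(assessed, key=lambda r: r["exposure"], reverse=True)
```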
Call to Action
With Copilot for project, project managers can now achieve significant time savings, especially when juggling multiple projects simultaneously. By eliminating mundane tasks like manual data aggregation, summarization, and maintaining multiple data pivots for collecting insights, project managers can allocate their energy towards strategic decision-making and driving project success.
Overall, Copilot for project represents a significant leap forward in project management efficiency and effectiveness. With its advanced AI capabilities, organizations can optimize project delivery times, reduce costs, increase customer satisfaction, and ultimately drive growth and profitability. Embrace the future of project management with Copilot for project and unlock a world of possibilities for your organization.
Learn More
We are making constant enhancements to our features. To learn more about the Copilot for project feature, visit Copilot for project.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
Microsoft Copilot for Security and NIST 800-171: Access Control
Microsoft Copilot for Security in Microsoft’s US Gov cloud offerings (Microsoft 365 GCC/GCC High and Azure Government) is currently unavailable and does not have an ETA for availability. Future updates will be published to the public roadmap here.
As of this writing we’ve received the Proposed Rule of the Cybersecurity Maturity Model Certification (CMMC) 2.0, and the public comment period ended on February 26. The National Institute of Standards and Technology (NIST) just released their analysis of public comments on the final draft of NIST Special Publication 800-171 Revision 3 (NIST 800-171r3) and initial draft of NIST 800-171Ar3. NIST plans to publish final versions sometime in Spring 2024. These publications are important because one of the primary requirements for CMMC is that organizations will need to implement most, if not all, of NIST 800-171r3’s controls for Level 2 certification.
In the first blog of this series, we looked at the System and Information Integrity family of requirements (3.14) in the draft of NIST 800-171r3, which covers flaw remediation, malicious code protection, security alerts via advisories and directives, and system monitoring. That blog also discussed how Microsoft Copilot for Security (Security Copilot) can help DIB organizations meet these requirements by identifying, reporting, and correcting system flaws more efficiently and effectively. This second blog in the series will dive into the very first requirement family: Access Control (3.1).
Early reports indicate organizations are reducing time and resource constraints by deploying Security Copilot in private preview and the early access program. Despite no public timeline on the availability of Security Copilot in Microsoft’s US-sovereign cloud offerings (Microsoft 365 GCC/GCC High and Azure Government), it’s worthwhile exploring how companies in the Defense Industrial Base (DIB) may use these AI-powered capabilities to meet NIST 800-171r3 security requirements, and ultimately defend against identity threats with finite or limited resources.
NOTE: Some requirements, such as 3.1.1 contain seven bullets (a-g) or more, and an entire blog could be written on that one requirement alone. Each section is not exhaustive of the requirement nor the applications of certain technologies. The suggested applications of Microsoft solutions do not guarantee compliance with any regulation nor prevention of an attack or compromise. All images and references are based upon preview experiences and do not guarantee identical experiences in general availability or within the U.S. Sovereign Cloud offerings.
Access Control (3.1)
One might ask why Access Control holds the prominent first spot in the NIST 800-171 publication. It’s relatively simple – Access Control is alphabetically first. However, this requirement family is arguably one of the most paramount because of the remarkable growth in identity-based attacks and the need for identity architects or teams to work more closely with the Security Operations Center (SOC). Microsoft Entra data noted in the Microsoft Digital Defense Report shows the number of “attempted attacks increased more than tenfold compared to the same period in 2022, from around 3 billion per month to over 30 billion. This translates to an average of 4,000 password attacks per second targeting Microsoft cloud identities [2023]”.
3.1.1. Account Management
A natural starting point is to “a. Define the types of system accounts allowed and prohibited” to access systems that hold Controlled Unclassified Information (CUI) or other sensitive information. Many organizations, or their Managed Security Service Provider (MSSP), map the privileged and non-privileged accounts within their environment and develop policy based on principles of Least Privilege – a requirement discussed later in this blog. Yet, the power of Microsoft Entra ID and Security Copilot shines most brightly after the security team “define(s)” or “c. Specify(ies) authorized users of the system(s), group(s) and role membership(s), and access authorization(s).”
Microsoft Entra provides rich information for Microsoft Defender for Identity (MDI) and Microsoft Sentinel for “e. Monitor(ing) the use of system accounts.” Yet, Security Copilot increases the utility of this trove of incidents and events further by easily summarizing details about the totality of a user’s authentications, associations, and privileged access as shown in the figure below.
Furthermore, SOC and Identity administrators alike can quickly surface every user in the environment with expired, risky, or dormant accounts. They can also take the next steps to “f. Disable system accounts” when they meet those criteria or modify the identities and/or privileges. Much of this investigation and troubleshooting is done without the need of policy and configuration surfing, nor does the SOC or Identity administrator need to craft a KQL query or PowerShell script from scratch. Security Copilot allows these two roles to do all of this using natural language prompts.
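For a sense of the kind of check that would otherwise be hand-written, here is a minimal dormant-account filter in Python. The record shape is an illustrative assumption, not the actual Entra ID sign-in log schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a dormant-account filter of the sort an
# administrator might otherwise script by hand. Field names are
# illustrative, not the real Entra ID sign-in log schema.

def find_dormant_accounts(accounts, max_idle_days=90, now=None):
    """Return principals whose last sign-in is older than max_idle_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [a["userPrincipalName"] for a in accounts
            if a["lastSignIn"] < cutoff]
```

Accounts surfaced this way would be candidates for the "f. Disable system accounts" step, subject to the organization's own review policy.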
Alex Weinert, VP of Identity Security at Microsoft, recently spoke of the narrowing gap between these two types of administrators, skillsets, and their teams in Episode 2 of The Defender’s Watch. Alex explains, “it’s more nuanced than… relying on your SOC team to catch things that are happening in Identity. Not all Identities are the same. Not all your servers are the same. We want to be making sure the two teams are working together to build a map of what are those critical resources and that there’s a feedback loop… listening to the SOC on the other side understanding what’s happening in the organization and what are we going to do as administrators [given investigation to remediation of an incident can take time]”. Security Copilot can be the accelerant for incidents and intelligence to drive Account Management and identity policy change.
Alex also quipped “if you’re an Identity Architect go buy your SOC team a pizza and get to know them” as he expressed the need for collaboration across Identity and SOC teams for access control. Ironically, Domino’s just rolled out unified identity with Microsoft Entra ID.
3.1.2. Access Enforcement
Security Copilot may help organizations enforce Microsoft Entra ID access control policies day-to-day and modify configurations to increase the identity score shown below. An Identity administrator or member of the SOC can also quickly create an audit log query, for example, to detect when a new credential is added to an application registration by simply asking Security Copilot for the applicable KQL code. Also, individuals interviewed for CMMC assessments can leverage Security Copilot to quickly surface a summary of activities completed by your Microsoft Entra ID (formerly Azure Active Directory) privileged users, identify when changes to Conditional Access policies were made, and more.
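The credential-change detection mentioned above can be sketched as a simple scan over exported audit records. The operation name below is the one commonly used in Entra ID audit-log detections, but verify it against your tenant's logs; the record shape is simplified for the example.

```python
# Illustrative scan of exported Entra ID audit-log records for new
# credentials added to an app registration -- the detection the article
# suggests asking Security Copilot to express as KQL. The operation
# name should be verified against your tenant's actual audit logs.

def find_new_app_credentials(audit_records):
    """Flag audit entries where app credentials were added or changed."""
    target = "Update application – Certificates and secrets management"
    return [r for r in audit_records if r["operationName"] == target]
```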
When going through a CMMC assessment, an assessor will be looking to determine if approved authorizations for “logical access” to CUI and system resources are enforced. Taking a step away from Security Copilot, it’s important to note the new MDI Identity Threat Detection and Response (ITDR) dashboard is one of the most elegant ways to show where and how enforcement is taking place, or where your organization may be falling short. In a single pane, administrators can see their identity score from Microsoft Secure Score updated daily with a quick link to see access control policies and “system configuration settings”, new instances where users have exhibited risky lateral movement, and a summary of privileged identities with a quick link to view the full “list of approved authorizations”.
3.1.3. Information Flow Enforcement
Organizations meet this requirement by managing “information flow control policies and enforcement mechanisms to control the flow of CUI between designated sources and destinations (e.g., networks, individuals, and devices) within systems and between interconnected systems.” Microsoft Purview’s Information Protection label policies along with proper configuration of Data Loss Prevention (DLP) policies can prevent the flow of sensitive information between internal and external users via email, Teams, on-premises repositories and other applications. Security Copilot can share with users the top DLP alerts shown below, give a summary or explanation of an alert, and assist in adjusting policy based upon the alert scenario.
3.1.5 Least Privilege
Applying least privilege to accounts can often be combined with managing the functions they can perform, such as executing code or granting elevated access. Once an organization turns on Microsoft Defender for Cloud and Microsoft Entra ID Privileged Identity Management (PIM) for its resources in Azure or other infrastructure, users can be granted just-in-time access to virtual machines and other resources. Conversely, those same users can lose access based upon suspicious behavior like clearing event logs or disabling antimalware capabilities. Security Copilot can be used in the Microsoft Entra admin portal to guide the administrator on creating notification policies or conduct access reviews for activities like the aforementioned.
Security Copilot may also be used to identify where users have more than ‘just enough access’, or help the administrator create lifecycle workflows where a user’s privileges need modification based on changes in their role or group. On a final note, the draft of NIST 800-171Ar3 specifies that an assessor would possibly need to examine a list of access authorizations and validate where privileges were removed or reassigned during a given period – all of which can be generated in reports aided by Security Copilot.
3.1.11 Session Termination
This requirement has some art along with science. An organization can define “conditions or trigger events that require automatic session termination” by periods of inactivity, time of day, risky behavior, and more. Microsoft Entra ID defaults reauthentication requests to a rolling 90 days, but that may be too infrequent for some users who access sensitive data sets daily, such as an Azure subscription with Windows servers holding CUI. Security Copilot can aid administrators in developing Conditional Access policies based on sign-in frequency, session type (from a managed or non-managed device), or sign-in risk. Also, Security Copilot can be prompted to help a SOC analyst reason over permission analytics to determine the impact of a user who’s exhibiting risky behavior and take subsequent action to terminate a session outside of the normal ‘conditions’.
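As a rough picture of what a tighter reauthentication policy looks like, here is a sketch of a Conditional Access payload modeled loosely on the Microsoft Graph conditionalAccessPolicy schema. Treat the field names as assumptions to verify against current Graph documentation, and the bracketed IDs as placeholders.

```python
# Sketch of a Conditional Access policy enforcing an 8-hour sign-in
# frequency for users handling sensitive data, modeled loosely on the
# Microsoft Graph conditionalAccessPolicy schema. Field names are
# assumptions; "<...>" values are placeholders, not real IDs.

reauth_policy = {
    "displayName": "Require 8-hour sign-in frequency for CUI workloads",
    "state": "enabled",
    "conditions": {
        "applications": {"includeApplications": ["<cui-app-id>"]},
        "users": {"includeGroups": ["<cui-handlers-group-id>"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "sessionControls": {
        # Reauthenticate every 8 hours instead of the rolling default.
        "signInFrequency": {"isEnabled": True, "type": "hours", "value": 8}
    },
}
```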
3.1.16 Wireless Access and 3.1.18 Access Control for Mobile Devices
Whether for an endpoint such as a laptop or for various types of mobile devices, Security Copilot can aid users within the Microsoft Intune admin center in creating policies for “usage restrictions, configuration requirements, and connection requirements” when wirelessly accessing systems of record. Below is an example of the embedded Security Copilot experience where we want to create a policy for Windows laptops in our environment.
Example of Security Copilot assisting with Endpoint Management Policies
Users can also ask Security Copilot to summarize an existing policy for devices in the environment, as well as generate or explore Microsoft Entra ID conditional access policies.
“Authoriz[ing] each type of wireless access” or “connection of mobile devices” will require policies that span multiple technologies. In many cases, administrators tasked with creating or managing these policies may not have the combined domain knowledge, yet Security Copilot helps individuals bridge those skill gaps.
Meeting NIST 800-171 with Limited Resources
Joy Chik wrote in her blog, 5 ways to secure identity and access for 2024, “Identity teams can use natural language prompts in Copilot to reduce time spent on common tasks, such as troubleshooting sign-ins and minimizing gaps in identity lifecycle workflows. It can also strengthen and uplevel expertise in the team with more advanced capabilities like investigating users and sign-ins associated with security incidents while taking immediate corrective action.”
Microsoft Security Copilot is an advanced security solution that helps companies protect CUI access and prepare for CMMC assessment by elevating the skillset of almost every cybersecurity tool and professional in the organization. It’s also bringing the identity team and the SOC team closer together than ever before. DIB companies working with limited resources or MSSPs struggling to keep up with demand will, both, likely look to creatively deploy AI solutions such as Security Copilot in the near future.
A new chapter in business AI innovation
As we begin a new year, large companies and corporations need practical solutions that rapidly drive value. Modern customer relationship management (CRM) and enterprise resource planning (ERP) systems fit perfectly into this category. These solutions build generative AI, automation, and other advanced AI capabilities into the tools that people use every day. Employees can experience new, more effective ways of working and customers can enjoy unprecedented levels of personalized service.
If you’re a business leader who has already embraced—or plans to embrace—AI-powered CRM and ERP systems in 2024, you’ll help your organization drive business transformation, innovation, and efficiency in three key ways:
Streamline operations: Transform CRM and ERP systems from siloed applications into a unified, automated ecosystem, enhancing team collaboration and data sharing.
Empower insightful decisions: Provide all employees with AI-powered natural language analysis, allowing them to quickly generate insights needed to inform decisions and identify new market opportunities.
Elevate customer and employee experiences: Personalize customer engagements using 360-degree customer profiles. Also, boost productivity with AI-powered chatbots and automated workflows that free employees to focus on more strategic, high-value work.
The time has come to think about AI as something much more than a technological tool. It’s a strategic imperative for 2024 and beyond. In this new year, adopting CRM AI for marketing, sales, and service and ERP AI for finance, supply chain, and operations is crucial to competing and getting ahead.
2023: A transformative year for AI in CRM and ERP systems
Looking back, 2023 was a breakthrough year for CRM AI and ERP AI. Microsoft rolled out new AI-powered tools and features in its CRM and ERP applications, and other solution providers soon followed. Among other accomplishments, Microsoft launched—and continues to enhance—Microsoft Copilot for Dynamics 365, the world’s first copilot natively built for CRM and ERP systems.
Evolving AI technologies to this point was years, even decades, in the making. However, as leaders watched AI in business gradually gain momentum, many took steps to prepare. Some applied new, innovative AI tools and features in isolated pilot projects to better understand the business case for AI, including return on investment (ROI) and time to value. Others forged ahead and broadly adopted it. All wrestled with the challenges associated with AI adoption, such as issues around security, privacy, and compliance.
In one example, Avanade, a Microsoft solutions provider with more than 5,000 clients, accelerated sales productivity by empowering its consultants with Microsoft Copilot for Sales. Consultants used to manually update client records in their Microsoft Dynamics 365 CRM system and search across disconnected productivity apps for insights needed to qualify leads and better understand accounts. Now, with AI assistance at their fingertips, they can quickly update Dynamics 365 records, summarize emails and meetings, and prepare sales information for client outreach.
In another example, Domino’s Pizza UK & Ireland Ltd. helped ensure exceptional customer experiences—and optimized inventory and deliveries—with AI-powered predictive analytics in Microsoft Dynamics 365 Supply Chain Management. Previously, planners at Domino’s relied on time-consuming, error-prone spreadsheets to forecast demand at more than 1,300 stores. By using intelligent demand-planning capabilities, they improved their forecasting accuracy by 72%. They can also now quickly generate the insights needed to ensure each store receives the right resources at the right times to fill customer orders.
Trends and insights for CRM AI and ERP AI in 2024
All signs indicate that in the years to come organizations will continue to find new, innovative ways to use CRM AI and ERP AI—and that their employees will embrace the shift.
In recent research that looks at how AI is transforming work, Microsoft surveyed hundreds of early users of generative AI. Key findings showed that 70% of users said generative AI helped them to be more productive, and 68% said it improved the quality of their work. Also, 64% of salespeople surveyed said generative AI helped them to better personalize customer engagements and 67% said it freed them to spend more time with customers.1
Looking forward, the momentum that AI in business built in 2023 is expected to only grow in 2024. In fact, IDC predicts that global spending on AI solutions will reach more than USD 500 billion by 2027.2
Some of the specific AI trends to watch for in 2024 include:
Expansion of data-driven strategies and tactics. User-friendly interfaces with copilot capabilities and customizable dashboards with data visualizations will allow employees in every department to access AI-generated insights and put them in context. With the information they need right at their fingertips, employees will make faster, smarter decisions.
Prioritization of personalization and user experiences. Predictive sales and marketing strategies will mature with assistance from AI in forecasting customer behaviors and preferences and mapping customer journeys, helping marketers be more creative and sellers better engage with customers. Also, AI-powered CRM platforms will be increasingly enriched with social media and other data, providing deeper insights into brand perception and customer behavior.
Greater efficiencies using AI and cloud technologies. Combining the capabilities of AI-powered CRM and ERP tools with scalable, flexible cloud platforms that can store huge amounts of data will drive new efficiencies. Organizations will also increasingly identify new use cases for automation, then quickly build and deploy them in a cloud environment. This will further boost workforce productivity and process accuracy.
Increased scrutiny of AI ethics. Responsible innovation requires organizations to adhere to ethical AI principles, which may require adjustments to business operations and growth strategies. To guide ethical AI development and use, Microsoft has defined responsible AI principles. It also helps advance AI policy, research, and engineering.
AI innovations on the horizon for CRM and ERP systems
Keep an eye on technological and other innovations in the works across the larger AI ecosystem. For example, watch for continued advancements in low-code/no-code development platforms. With low-code/no-code tools, nontechnical and technical users alike can create AI-enhanced processes and apps that allow them to work with each other and engage with customers in fresh, new ways.
Innovations in AI will also give rise to new professions, such as AI ethicists, AI integrators, AI trainers, and AI compliance managers. These emerging roles—and ongoing AI skills development—will become increasingly important as you transform your workforce and cultivate AI maturity.
To drive transformation with AI in CRM and ERP systems, you should carefully plan and implement an approach that works best for your organization. The following best practices for AI adoption, which continue to evolve, can help guide you:
Strategic implementation: Formulate a long-term AI implementation strategy to empower employees and optimize business processes, emphasizing data-driven culture, relevant skills development, and scalable, user-friendly AI tools in CRM and ERP systems.
Ethical adoption: Adhere to evolving ethical guidelines, starting with AI-enhanced process automation and progressing toward innovative value creation, while ensuring your organization is hyperconnected.
Data quality and security: Maintain high data integrity and security standards, regularly auditing AI training data to avoid biases and ensure trustworthiness.
Alignment with business goals: Align AI initiatives with strategic objectives, measuring their impact on business outcomes, and proactively managing any potential negative effects on stakeholders.
As you and your organization learn more about AI and discover what you can do with it, don’t lose sight of the importance of human and AI collaboration. Strongly advocate for using AI to augment—rather than replace—human expertise and decision-making across your organization. Remember, although employees will appreciate automated workflows and AI-generated insights and recommendations, AI is not infallible. Successful business still depends on people making intelligent, strategic decisions.
The importance of embracing AI in business
Immense opportunities exist for organizations across industries to use AI-powered CRM and ERP systems to accelerate business transformation, innovation, and efficiency. According to Forrester Research, businesses that invest in enterprise AI initiatives will boost productivity and creative problem solving by 50% in 2024.4 Yet, without leaders who are fully engaged in AI planning and implementation, many organizations will struggle to realize AI’s full potential.
Be a leader who prioritizes and champions AI in your business strategies for 2024. Your leadership must be visionary, calling for changes that span across roles and functions and even your entire industry. It must be practical, grounded in purposeful investments and actions. It must be adaptable, remaining open and flexible to shifting organizational strategies and tactics as AI technologies evolve.
Team up with a leader in AI innovation
Wherever your organization is in its AI adoption journey, take the next step by learning more about how AI works with Microsoft Dynamics 365, a comprehensive and customizable suite of intelligent CRM and ERP applications.
With copilot and other AI-powered capabilities in Dynamics 365, your organization can create unified ecosystems, accelerate growth, and deliver exceptional customer experiences. It can also continually improve operational agility while realizing greater productivity and efficiency. Get started today to make 2024 a transformative year for your organization.
What is your role and title? What are your responsibilities associated with your position?
I am an Integration Developer, and my key responsibilities consist of working with my team and alongside clients, making the transition and integration of their products and services smoother.
Can you provide some insights into your day-to-day activities and what a typical day in your role looks like?
Sure. Combining a portion of my activities, a typical day looks like this: I start by checking for any issues in our clients’ production environments to ensure everything’s running smoothly, and then my main work is implementing cloud integration solutions with Azure Integration Services. Occasionally, I also help the team on on-premises projects using BizTalk Server.
Also, one of my big activities is going deep into Enterprise Integration features and crafting new ways to achieve specific tasks. I build proofs of concept for new features, explore existing or new services, test those solutions, and find alternatives — for example, creating Azure Functions as an alternative to the Integration Account and testing those Azure Functions inside Logic App flows.
I’m always on the hunt for new solutions to any problems we face, and in doing so, there’s a lot of documenting everything we do. This documentation is more than just busy work; it really helps by streamlining our processes and guiding our team and community through troubleshooting. To underscore the importance of knowledge sharing, I actively produce informative content for our blog and YouTube channel. This includes writing posts and creating videos that share our experiences, solutions, and insights with a broader audience.
I also contribute to enhancing our team’s productivity by creating tools tailored to address specific issues or streamline processes that are later shared with the community.
What motivates and inspires you to be an active member of the Aviators/Microsoft community?
What really drives me to engage with the Aviators/Microsoft community is my passion for tackling challenges and finding solutions. There’s something incredibly rewarding about cracking a tough problem and then being able to pass on that knowledge to others. I believe we’ve all had that moment of gratitude towards someone who’s taken the time to document and share a solution to the exact issue we were facing. That cycle of giving and receiving is what motivates me the most. It’s about contributing to a community that has been so important in my own learning and problem-solving journey, and I’m inspired to give back and assist others in the same way.
Looking back, what advice do you wish you had been given earlier that you would now give to individuals looking to become involved in STEM/technology?
I could say something about always having a passion for new technologies and staying up to date with what you are pursuing. There would be nothing wrong with that, but those sound like ready-made phrases, exchanged without considering each individual’s current situation.
At a moment when mental health is so important, let me share a simple tale that resonates with anyone at a crossroads in their career, whether they are new and confused about what to do, just starting out, or contemplating a shift in direction. It’s a gentle reminder that venturing into new territories can be daunting but immensely rewarding, and that, at times, we may not even realize that our current paths could be detrimental to our well-being, professional growth, and personal relationships.
“There was once a man who went into the wilds of Africa, believing himself to be a hunter for many years. Despite his efforts, he found himself unable to catch any game. Overwhelmed by frustration and feeling lost, he sought the guidance of a Shaman from a nearby tribe.
Confessing to the Shaman, he said, “Hunting is what I live for, but I’m hitting a wall. There’s simply nothing out there for me to hunt, and I don’t know what to do.”
The Shaman, who had seen many seasons and had a kind of wisdom you don’t come across every day, simply put his arm on the hunter’s shoulder, looked him in the eyes and said, “Really? Nothing to hunt for? This land has fed us for generations. There is plenty to hunt out there, and yet you cannot see it? Maybe the problem isn’t the land…allow me to ask you something very important: do you genuinely desire to be a hunter?”
This narrative goes much deeper than the act of hunting. It’s a reflection on our passions, how we confront our challenges, and the realization that our perspective might need a shift.
If our passions no longer ignite us, or if our efforts to chase them lead nowhere, it might be a sign to let go, not in defeat, but in liberation, because, in the end, I want everyone to be happy with the career path they have chosen. So that would be my advice: read this simple tale, apply it to your current situation, and ask yourself, “Do I really want to keep doing what I am doing right now?” And if you find that your current path is not worth pursuing, if your mental health is not in shape, or if you are hitting a wall, then yes, it is time to take the step!
Imagine you had a magic wand that could create a feature in Logic Apps. What would this feature be and why?
In a world where AI moves at such a fast pace, one feature that I would personally like to have in Logic Apps is prompted AI-generated Logic App flows. That would mean you give the designer a prompt describing what you intend, and it would generate the most efficient flow for what you have described. Of course, you would still need to configure some things, but I think AI-generated flows could outline and cover many scenarios, making our processes faster and more efficient.
AI is here to stay, whether we like it or not; it just doesn’t go away, so we could take advantage of it to create better, faster, and more efficient products or stay behind while we see others do it.
What are some of the most important lessons you’ve learned throughout your career that surprised you?
One of the most surprising yet vital lessons from my career is the central role of relationships in keeping the ship sailing smoothly. Having positive communication and nurturing a positive work environment are crucial elements that empower a team to deliver top-notch results, remain driven, and maximize their daily potential. A car has four tires, and you need them all to get home safely.
Check out this customer success story on how Microsoft is helping to keep Slovenia’s lights on by improving and modernizing ELES’ operations. In 2012, ELES turned to Microsoft when they needed a new enterprise resource planning (ERP) solution. Today, ELES uses Azure Logic Apps to connect their ERP with other systems, improving collaboration between departments, streamlining operations, and better managing their energy resources and infrastructure.
For those using the IBM MQ Built-in (In-App) connector available in Logic Apps Standard, check out this article explaining more about Handles and how to calculate the maximum value to set on your IBM MQ server.
Learn about various issue scenarios related to the Azure Automation connector in both Logic Apps Consumption and Standard, along with their causes and resolutions.
V1 Actions/Triggers of the SQL Connector for Logic Apps will be deprecated by the end of March 2024. In this article, learn how to use a PowerShell Script to identify the Logic Apps still using the deprecated SQL Connectors so that you can change them to the V2 equivalent.
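The same kind of inventory can be sketched outside PowerShell as well. Below is a hedged Python sketch that scans exported workflow-definition JSON files for SQL ApiConnection actions that still look like V1 operations; the `/datasets/` vs. `/v2/datasets/` path pattern and the file layout are illustrative assumptions, not an official detection rule.

```python
import json
from pathlib import Path

# Assumption (illustrative, not an official rule): V1 SQL connector actions
# are ApiConnection actions whose operation path starts with "/datasets/",
# while V2 operations use "/v2/datasets/".
def find_v1_sql_actions(definition: dict) -> list:
    """Return names of actions that look like deprecated V1 SQL operations."""
    hits = []
    for name, action in definition.get("actions", {}).items():
        if action.get("type") != "ApiConnection":
            continue
        inputs = action.get("inputs", {})
        connection = str(inputs.get("host", {}).get("connection", {}).get("name", ""))
        path = str(inputs.get("path", ""))
        if "sql" in connection and path.startswith("/datasets/"):
            hits.append(name)
    return hits

def scan_folder(folder: str) -> dict:
    """Scan exported workflow.json files and report suspect actions per file."""
    report = {}
    for file in Path(folder).glob("**/workflow.json"):
        definition = json.loads(file.read_text()).get("definition", {})
        hits = find_v1_sql_actions(definition)
        if hits:
            report[str(file)] = hits
    return report
```

Point `scan_folder` at a folder of exported Logic App definitions to get a per-file list of suspect action names to review against the article’s script.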
ISE’s retirement date is August 31st, 2024, so make sure you migrate any Logic Apps running on ISE to Logic Apps Standard. Check out this guide video from our FastTrack team that walks you through the whole process!
Check out this recording from the February 2024 meetup of the Houston Azure User Group, where Azure customers dive into their journey from on-premises BizTalk to Azure Logic Apps hosted in an Integration Service Environment (ISE).
Watch this recording from a webinar hosted by Derek and Tim as they talk about the benefits of Azure’s ecosystem and a step-by-step strategy for a smooth transition from MuleSoft to AIS.
Performance evaluation has been revolutionized by technology, extending its reach to the individual level. Consider health apps on your smartphone. They gather data breadcrumbs from your daily activities, providing analysis of your movement patterns. This isn’t a generic data compilation, but a near-accurate reflection of your physical activity during a specific period.
In the future, it’s conceivable that these apps might be equipped with an AI companion, or Copilot, to guide your next steps based on your past activities. It could suggest rest days or additional exercise to help you achieve your personal health goals.
This concept of performance evaluation based on collected data is the bedrock of process mining and process comparison. Our Copilot functionality adds a layer of assistance, enabling you to make informed decisions about your warehouse operations.
In this context, Copilot can help you optimize warehouse processes. It can identify bottlenecks in certain processes or compare different methods to achieve the same goal, empowering you to choose the most optimal method for your specific case.
In this blog, we’ll explore the essence of this feature, its intended audience, and how and why you should leverage it for your warehousing operations.
Process Mining Insights:
At first glance, using Process Advisor for material movement analysis is easy. The setup process is straightforward:
1. Go to Warehouse Management > Setup > Process Mining > Warehouse material movement configuration. In the taskbar, select Deploy New Process.
2. The configuration wizard opens. Select Next, then enter the name of the process in the Process Name field, choose the company, choose the number of months to load (12 months = data from the latest 12 months), and choose the appropriate Activity. Select Next.
3. The process is deployed.
The configuration wizard looks like this:
Image: Configuration wizard screenshot.
The easy part is now complete. We have set up a process, named it, and loaded 12 months of data to prepare for our analysis. The difficult part is making sense of our data and using it to make decisions to improve our warehouse output.
Therefore, we will provide you with some real-life examples on how to use the data analysis functionality to understand your processes, and a scenario where we evaluate two different methods and use the Process Advisor to figure out which method would be preferred for our business operations.
Analysis of data
There are multiple ways to analyze your process data to understand and compare your processes.
1. Open Power Automate
Start by opening Power Automate and going to the Process Mining tab. The report is accessible on the main page.
2. View the Report
When the report is loaded, it can look like this:
Image: Process Mining Case Summary.
3. Select Map
Select the Map tab to display the process map:
Image: Process Mining Map.
This is a screenshot of the process map from our example. On the map, there are separate locations at which actions (tasks) have taken place, as well as the time spent at each location and between locations. You can change the time unit to, say, Mean duration, to see how long each activity at a particular location takes on average.
4. Use the Copilot to get started.
We provide you with suggestions for frequent prompts, but you can of course choose to enter whatever you want. In this case, we will use the suggested “provide the top insights” prompt.
Image: Process Mining map with Copilot.
5. Copilot Generates
The Copilot generates a response based on the data in your process map. In the example, we can see that the Copilot has identified “BULK” as the longest-running activity and provided us with a list of the activities with the greatest number of repetitions:
Image: Process Mining map and Copilot generated answer.
6. Copilot Follow Up
We can also ask the Copilot follow-up questions. In this case, we will follow up with the suggested “How to identify my bottleneck?” and “Find my Bottleneck” prompts. The Copilot generates a message explaining what the bottleneck is and its mean duration. In this instance, since we have selected the Mean duration metric, the answer reflects this metric.
Image: Process Mining map with Copilot generated answer.
The message we receive tells us that the Variant with the highest duration is “Variant 2” with a mean duration of 2 minutes and 47 seconds. It also tells us that the activity with the highest mean duration is “BULK” with a mean duration of 15 minutes.
From this, we can draw the conclusion that “Variant 2” is the variant that takes the longest to complete, and that the most time is spent in the “BULK” location.
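The same variant-level conclusion can be sketched from raw case data. In this hypothetical example (the cases and durations are invented), a variant is the ordered sequence of activities in a case, and the slowest variant is the one with the highest mean duration:

```python
from collections import defaultdict

# Hypothetical cases: case_id -> ordered (activity, duration_seconds) steps.
cases = {
    "W-101": [("Pick", 60), ("Put", 30)],
    "W-102": [("Pick", 80), ("Put", 40)],
    "W-103": [("Pick", 50), ("Pick", 70), ("Put", 30)],
}

# Cases with the same ordered activity sequence belong to the same variant.
variant_durations = defaultdict(list)
for steps in cases.values():
    variant = tuple(activity for activity, _ in steps)
    variant_durations[variant].append(sum(seconds for _, seconds in steps))

means = {variant: sum(d) / len(d) for variant, d in variant_durations.items()}
slowest = max(means, key=means.get)  # the variant worth investigating first
```

Here the two-step cases group into one variant (mean 105 seconds), while the lone three-step case forms its own, slower variant.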
By using the Process Advisor for warehouse material movement analysis, we can streamline warehouse operations and ensure we don’t spend more time than we need on a particular task or operation. Another way the Process Advisor can be utilized to enhance operational fluidity in your warehouse is by comparing different methods of achieving a similar goal, to understand which method is more effective for reaching your desired outcome. We will try to explain how to conduct such a comparison with a test case.
In our test case, we will compare two different methods of picking goods in the warehouse to figure out which picking method takes less time, so we can increase warehouse output.
Test Case: “Single order picking” vs “Cluster picking”
In this test case, the user wants to know which method of picking is faster: “Single order picking” or “Cluster picking”. To compare the two, the user goes through the following steps. First, the user creates a hypothesis for the purpose of this test case; here, the user wants to determine which picking method is faster.
Second, the user decides the scope of the test. For both methods, the user will have 5 sales orders with one to five different items per order, in different quantities. Both methods will use identical sales orders for test purposes. In the Work Details screen, we can see the details for the work that has been created. The Variants are the different variants of work; in this instance, for work ID USMF-002327 with order number 002375 (displayed in the picture), the worker will “Pick” 1 piece of item LB0001 in 5 different variations (in this case, colors), then “Put” these 5 items away in the packing area (location “PACK”).
Image: Work details screenshot.Image: Work details label(s).
With the “Single order picking” method, the worker picks one order at a time and puts it in the packing location. To clarify, the warehouse worker will go to each location where the item is located, pick and scan that item, repeat the process for each item in that order, take the order to the packing location, and then repeat with the next order.
The worker goes to 5 different locations to pick items, then proceeds to the “PACK” location to put the items away for packing. The worker then repeats the process for the other orders.
Image: Picking locations
After we have constructed our hypothesis and determined the scope, we can go ahead and prepare for the analysis.
First, we have to deploy our process comparison. We head into Warehouse Management > Setup > Process Mining > Warehouse material movement configuration, and in the taskbar we select Deploy New Process. We enter a fitting description as the Process Name, then select the company and the number of months to load. In this test case, we will only load one month of data, since we don’t need more for this test’s purposes.
Usually, you would want as much correct, relevant data as possible to get a high-quality analysis: correct meaning not corrupted or faulty (faulty data will skew the analysis), and relevant meaning the scope determines how much data, and which data, is necessary. When our process has been deployed, we can move on to the analysis and evaluate this process.
We load our process map into Power Automate, and in the beginning, it will look something like this:
Image: Process Map Starting view.
We can press the Play Animation button to get a representation of the process.
Image: Process Map Starting view.
In the Statistics tab, we can see basic information of the process.
Image: Process mining statistics tab overview.
In the Variants tab, we can view the different work variants. By selecting one, we get an in-depth view of, in this case, “Variant 3”. We can see that in this variant, 6 cases occurred, the total duration was 8 minutes and 15 seconds, and the total active time was 8 minutes and 14 seconds. In this case, the selected attribute is Zone. If we look closely at the Variants, we can see that “Variant 2” has 2 cases and the others have 1.
This means that two pieces of “work” that were scheduled were so similar that they could be grouped, because from a warehouse management perspective the operation is identical: the worker goes to one location, picks item(s) 1, goes to another location and picks item(s) 2, then puts them away in “PACK”. Thus, it is two “Pick” operations and one “Put”, and therefore they are grouped in this view.
Image: Process mining variants tab zone overview.
We can also change the Variants view by changing the selected Attribute. In this case, we will change the attribute from Zone to Order number. This changes our view so that we see different Variants based on order number. It now shows us 5 variants, which can seem confusing at first. A new variant is displayed with these settings because the view now groups Variants by order number instead of zone, so we get one variant for each sales order we created, since all of them differed from each other.
Image: Process mining variants tab order number overview.
In this instance, we can see the order numbers in the legend on the right side. This view tells us that we have 5 different order numbers, and the boxes below Variants Overview represent the number of work operations performed per order number. As for the case count per order number: for “Variant 2”, a total of 6 operations were performed (pick, pick, pick, pick, pick, put, as mentioned previously), and for Variants 4 and 5, the case count was 3 (pick, pick, put).
For this scenario, it can be helpful to see how much work we are performing per event. If we want such a view, we can switch the Attribute to Work Quantity, which in this instance lets us see the quantity of work to be performed for each event. In the example of “Variant 2”, the interface tells us that 6 events have taken place: in 5 of the events the quantity was 1, and in one event the quantity was 5. To put this into a warehouse perspective, we performed 5 events 1 time each, which for Variant 2 is “Pick item 1, Pick item 2, Pick item 3, Pick item 4, Pick item 5”, and one event where we “Put” away these items 5 times. That single put operation is performed 5 times but counts as one event, because it is the same event occurring multiple times; the picks, even though they are all “Pick” events, count as individual events because they pick different products, which are all in different locations. When we put items away in the “PACK” location, we don’t put them in different locations, so it counts as one event.
Image: Process mining variants tab work quantity overview.
If we select Attribute by Work type, this becomes clear:
Image: Process mining variants tab work type overview.
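The event-counting behavior just described (five single-quantity picks counted as individual events, and one put of quantity 5 counted as a single event) can be sketched as collapsing consecutive rows that share an activity and location. The row layout below is a hypothetical illustration, not the product’s data model:

```python
# Hypothetical raw rows: (activity, location, qty). Consecutive rows with the
# same activity and location collapse into one event with a summed quantity.
rows = [
    ("Pick", "LOC-A", 1),
    ("Pick", "LOC-B", 1),
    ("Pick", "LOC-C", 1),
    ("Pick", "LOC-D", 1),
    ("Pick", "LOC-E", 1),
    ("Put",  "PACK",  1),
    ("Put",  "PACK",  1),
    ("Put",  "PACK",  1),
    ("Put",  "PACK",  1),
    ("Put",  "PACK",  1),
]

def collapse(rows):
    """Group consecutive rows with identical (activity, location) into events."""
    events = []
    for activity, location, qty in rows:
        if events and events[-1][0] == (activity, location):
            events[-1][1] += qty
        else:
            events.append([(activity, location), qty])
    return events

events = collapse(rows)  # 5 pick events of quantity 1, then 1 put event of quantity 5
```

The five picks target different locations, so they never merge, while the five puts into “PACK” fold into a single event with quantity 5, matching the 6-event view described above.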
We might want to see the location where the events took place. To do that, we can set Attribute to Location, and the view will show us the locations of the events below the header Variants overview.
Image: Process mining variants tab work location overview.
In this image, we can see the variants based on location. To put this into context, “Variant 6” tells us 6 events have taken place, all in different parts of the warehouse. For “Variant 10”, we can see that one event took place in “LEGOLOC301” and one in “PACK”.
Now, after we have made ourselves comfortable within the report, we can start analyzing our process. To do that, press the Process Compare button below Variants.
A view similar to this one will appear:
Image: Process compare variants tab location map overview.
In the process map displayed on the screen, we have set the Mining attribute to Location, and the Metric to Total duration. This will allow us to see the total amount of time spent in each location.
By changing the Metric to Total count, we can see the number of times an event took place in each location, as the picture below displays:
Image: Process compare variants tab location map overview.
The total amount of time spent in one location and number of cases per location might be valuable, but a more telling metric could be how much time we spent on average per location.
By switching the Metric to Mean duration, we can see the average time spent per location. This gives us yet another hint as to which part of the process takes the most time to manage. If we want to see this from a proportional perspective, toggling the percentage sign next to the Metric drop-down menu achieves exactly that.
Image: Process compare variants tab location and mean duration map overview.
As we can see from the image above, LEGOLOC 201 is the location in which we spend the largest percentage of our time. If we want to further examine what is going on in that location, we can do so by pressing the bar. This will change the view slightly, and a card with detailed information will appear on the right of the screen.
Image: Process compare variants tab location map detailed view.
In the highlighted red box, we can see detailed performance data to further assess the performance in this location.
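The Metric options used in this walkthrough (Total count, Total duration, Mean duration, and the percentage toggle) all reduce to simple aggregates over per-location event durations. A hedged sketch with made-up numbers:

```python
# Hypothetical per-event durations (seconds) keyed by location.
durations = {
    "LEGOLOC201": [120, 140, 160],
    "LEGOLOC301": [30, 50],
    "PACK":       [60, 60],
}

metrics = {}
grand_total = sum(sum(d) for d in durations.values())
for location, d in durations.items():
    total = sum(d)
    metrics[location] = {
        "total_count": len(d),          # Metric: Total count
        "total_duration": total,        # Metric: Total duration
        "mean_duration": total / len(d),  # Metric: Mean duration
        "share_pct": round(100 * total / grand_total, 1),  # percentage toggle
    }
```

With this sample, LEGOLOC201 accounts for roughly two-thirds of all time spent, mirroring how the percentage view exposes a “time-thief” location.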
Now, we have enough information to draw some conclusions on our own. We have identified zone LEGOLOC 201 as our “time-thief”, and we know that more than 1/3 of the time was spent on picking items in this zone. To make the analysis process easier, Microsoft’s Copilot has been built into this feature. By pressing the Copilot icon in the top-right corner, you open the dialogue box where you can create a prompt and ask the Copilot about your process. The Copilot will suggest some common prompts, but you can of course create your own. In this case, we will ask the Copilot to summarize our process.
Image: Process compare map and Copilot dialogue.Image: Process compare map and Copilot generated answer.
As displayed in the picture, the Copilot will give us a summary of the process. Because we have selected to compare our first part of the test vs our default value (the red locations), it also summarizes the default value’s process.
We do get some information on how many events took place etc., but we did not get the total case time, which was the value we wanted to find to confirm or deny our hypothesis. By asking the Copilot what the average case duration and the total case duration was, we received the answer that mean case duration was 4 minutes and 18 seconds, and total duration was 21 minutes and 31 seconds.
So, our answer in this case is that the Single order picking took 21 minutes and 31 seconds to complete.
Image: Process compare map and Copilot generated answer.
Now, we will compare the result to the cluster picking method, to see how they compare.
For context, cluster picking differs from single order picking in the sense that in cluster picking, workers pick multiple orders simultaneously and not one at a time. In this case, it means the worker will pick all 5 sales orders, then put them all away in the packing station at the same time, rather than picking an order, putting them away in the packing station, and repeating for next orders.
Image: Work clusters screenshot.
In this image, we can see the main difference between these picking methods. For cluster picking, we can see that the warehouse worker is tasked with picking 8 pieces of red Lego blocks (left image), and in the second screenshot (right) we can see how many and from which specific positions items should be picked.
Image: Work clusters screenshot with illustrations.
When all items have been picked, the Work status will be updated so all Cluster positions are “In process”.
Image: Work Cluster in progress.
The next task is to put all items in the packing station. When we have done that, all Cluster position Work statuses change to Closed.
Image: Cluster Put screenshot.
As we can see in the image below, work status has been changed to Closed across the board.
Image: Work Clusters status closed.
Now, let’s jump back to the analysis. Start by creating a new process in the same way we did for single order picking and open the process map in Power Automate. In our test case, this is what we are shown on our screen.
Image: Process Compare map.
As we have already covered how choosing different metrics affects the process map and the information on display, we will not repeat that for this part of the test; we know we need Location as the Mining attribute and Total duration as the Metric.
We will again use the help of the Copilot to evaluate the process map. Once again, we ask for a summary of the process.
Image: Process Compare map and Copilot generated insight.
Test Case Results
The summary from the Copilot tells us that this process started November 6th and ended after 8 minutes and 45 seconds.
This means we have successfully confirmed our hypothesis by using process mining and the Process Advisor. Now we know that, for one picker with 5 sales orders constructed in this manner, cluster picking is a much more efficient picking method than single order picking, since an identical amount of work took significantly less time to complete. Therefore, we can draw the conclusion that for all work with similar characteristics, we should prefer cluster picking over single order picking, at least if we want to increase warehouse output.
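Quantifying the difference is simple duration arithmetic; the two totals come straight from the test runs above:

```python
from datetime import timedelta

# Totals reported for the two runs in this test case.
single_order = timedelta(minutes=21, seconds=31)
cluster = timedelta(minutes=8, seconds=45)

saved = single_order - cluster
pct_faster = 100 * saved / single_order  # share of time saved by cluster picking
speedup = single_order / cluster         # how many times faster cluster picking was
```

Cluster picking saved 12 minutes and 46 seconds here, roughly 59% of the single-order time, i.e. about a 2.5x speedup for this particular workload.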
Keep in mind, harnessing the power of Process Advisor requires an analytical mindset and a structured approach. The sheer volume of headers, variants, locations, and numbers can be overwhelming. To navigate this complexity, emulate the structured methodology illustrated in this example. By having a clear understanding of your comparison and measurement objectives, and a strategy to achieve them, you’ll significantly enhance the outcomes derived from Process Advisor.
Essential skills for effective process mining:
Use a fact-based approach with warehouse data as the base.
Use a strategic and tactical approach throughout the analysis.
A great way of using process mining is continuous analysis, where you monitor a process over time, rather than the one-time analysis demonstrated in this example (which it can also be used for).
Use quick data for immediate insights, and big data for continuous and conclusive analysis.
Master filtering to gain valuable insights and sort out what you believe is important.
Wealth of achievements made possible through process mining:
Identify areas in which processes can be improved.
Validate conformance of processes.
Do process simulation and predictive analysis.
Discover the most optimal paths for automation.
Conclusion:
The power of Process Advisor extends far beyond what we’ve explored in this blog. It’s a versatile tool that can be adapted to a myriad of scenarios, and this guide merely scratches the surface of its potential. We’ve used it here to streamline warehouse operations, but the possibilities are truly limitless.
We encourage you to dive in and experiment with Process Advisor. Use the scenario we’ve outlined as a starting point, but don’t stop there. Input your own warehouse data and see firsthand how Process Advisor can illuminate opportunities for efficiency and growth. The journey towards optimizing your warehouse output begins with the Process Advisor.