Using advanced hunting to secure OAuth apps


The use of SaaS applications has become widespread in businesses of all sizes. With more SaaS apps in use, there are more potential targets for attackers. Attackers frequently exploit centralized user authentication systems, targeting unaware users with phishing attacks. They can take advantage of this lack of awareness to trick users into authorizing malicious apps, steal credentials, and gain access to multiple services. Attack techniques are getting more sophisticated, and exploits of poorly designed SaaS applications are on the rise.


 


In this blog, we’ll demonstrate how SOC teams can benefit from App governance and its integration with Advanced Hunting to better secure SaaS apps.


 


Why use advanced hunting?


Advanced hunting uses a powerful query language called Kusto Query Language (KQL). KQL allows security analysts to create complex queries that can filter, aggregate, and analyze large volumes of data collected from endpoints, such as security events, process data, network activity, and more. However, this can be challenging for new security analysts who may not be familiar with writing queries in KQL. By using the pre-defined KQL queries and app signals collected in Microsoft 365 Defender, security analysts can immediately benefit from hunting capabilities to investigate app alert insights without having to write any KQL.
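
For readers new to KQL, here is a minimal illustrative query against the CloudAppEvents table (the same advanced hunting table used later in this post); it simply counts cloud app activities by action type over the past week:

// Count cloud app activities by action type over the last 7 days
CloudAppEvents
| where Timestamp > ago(7d)
| summarize Events = count() by ActionType
| top 10 by Events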


 


A real-life example of threat investigation


Let’s investigate a real-life incident triggered by a built-in threat detection policy in App governance. In our case, the “App impersonating a Microsoft logo” alert was triggered. Using our unified XDR platform, Microsoft 365 Defender, a SOC analyst can access all Defender alerts in one place via the incidents view. The SOC analyst can filter on status, severity, incident assignment, service sources, and other categories. In Figure 1, the filter Service source = App governance, Status = New, Severity = High was applied to help with incident detection and prioritization.


 


Note: To learn more about App governance built-in policies, check out our documentation.


 




Figure 1. Selecting incidents.


 


The incident (Figure 1) consists of four alerts that the SOC analyst can review to verify whether they are true positives (TP) or false positives (FP) and act accordingly. The SOC analyst can click on the incident and access the attack story (Figure 2), where the alerts can be reviewed in chronological order. They can also view additional information in the “What happened” and “Recommended actions” sections, which gives the analyst a much better understanding of why the alert was triggered in the first place, along with a path forward to remediate.


 




Figure 2. Reviewing the attack story.


 


Let’s learn more about the application by selecting View app details (Figure 3).




Figure 3. Selecting View app details.


 


 


Usually, malicious apps lack certification and publisher verification; given the nature of such apps, community verification would also be rare. The combination of all those attributes (highlighted in Figure 4) raises red flags.


 


Because the app is registered in Azure AD, the SOC team can easily access additional information in the Azure portal, which may provide further context to help with incident resolution.


 




Figure 4. The malicious O365 Outlook application card, with highlighted red flags and links to Azure AD and app activities in hunting.


 


In Figure 5, we can see why the machine learning algorithm flagged the app as malicious: the logo impersonates the original Outlook logo, but the publisher domain does not match the Microsoft domain. The SOC analyst can now follow their company guidelines to disable the app (this can be completed directly in Azure AD or in the App governance app details window shown in Figure 4).


 




Figure 5. View of app details in Azure Portal.


 


Using advanced hunting as part of incident investigation


After disabling the malicious app, the SOC analyst should further investigate the app activity by selecting View app activities (option highlighted in Figure 4), which generates Query 1, also visible in Figure 6. The results, visible in Figures 7 and 8, include all Graph API activities the app performed on the SharePoint Online, Exchange Online, OneDrive for Business, and Teams workloads.


 




 


Figure 6. Advanced hunting query.  


 


Query 1:


// Find all the activities involving the cloud app in the last 30 days
let appid = (i: dynamic)
{
    case
    (
        i.Workload == "SharePoint", i.ApplicationId,
        i.Workload == "Exchange", iff(isempty(i.ClientAppId), i.AppId, i.ClientAppId),
        i.Workload == "OneDrive", i.ApplicationId,
        i.Workload == "MicrosoftTeams", i.AppAccessContext.ClientAppId,
        "Unknown"
    )
};
CloudAppEvents
| where ((RawEventData.Workload == "SharePoint" or RawEventData.Workload == "OneDrive") and (ActionType == "FileUploaded" or ActionType == "FileDownloaded")) or (RawEventData.Workload == "Exchange" and (ActionType == "Send" or ActionType == "MailItemsAccessed")) or (RawEventData.Workload == "MicrosoftTeams" and (ActionType == "MessagesListed" or ActionType == "MessageRead" or ActionType == "MessagesExported" or ActionType == "MessageSent"))
| extend AppId = appid(RawEventData)
| where AppId == "Paste your app Id" // replace with the OAuth app ID under investigation
| where Timestamp between (datetime("2023-08-08 00:00:00Z") .. 30d)
| extend RawEventData_Id = tostring(RawEventData.Id) // used to deduplicate audit records below
| summarize arg_max(Timestamp, *) by RawEventData_Id
| sort by Timestamp desc
| project Timestamp, OAuthApplicationId = AppId, ReportId, AccountId, AccountObjectId, AccountDisplayName, IPAddress, UserAgent, Workload = tostring(RawEventData.Workload), ActionType, SensitivityLabel = tostring(RawEventData.SensitivityLabelId), tostring(RawEventData)
| limit 1000


 


In the query results, the analyst can see the IP address, which could be an indicator of malicious activity; attackers frequently use IP addresses with a bad reputation, blacklisted addresses, or Tor exit nodes. Analyzing historical data can reveal patterns of malicious behavior associated with specific IP addresses, which is useful for threat intelligence and proactive threat hunting. The analyst can also see impacted workloads and action types, which are crucial for understanding the attacker's actions.
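
As an illustrative follow-up (a sketch, not part of the built-in experience), the analyst could pivot the app's activity by source IP address. This simplified query reuses the ClientAppId field and the app ID placeholder from Query 1, so it only matches events that carry ClientAppId in RawEventData:

// Pivot the app's activity by source IP to spot concentrations of activity
// from unfamiliar networks
CloudAppEvents
| where Timestamp > ago(30d)
| extend AppId = tostring(RawEventData.ClientAppId)
| where AppId == "Paste your app Id"
| summarize Events = count(), Workloads = make_set(tostring(RawEventData.Workload)) by IPAddress
| sort by Events desc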


 


By analyzing these actions, security analysts can trace the steps of the attacker to determine the scope of the breach, how the attacker gained access, and what data or systems may have been compromised. The MailItemsAccessed action suggests that an unauthorized user has accessed the contents of one or more email messages within an email account, and UpdateInboxRules can be a sign of an attacker attempting to manipulate email traffic by diverting, filtering, or forwarding messages to their advantage.


 




Figure 7. Advanced hunting query results.


 


The analyst may want to create a detection rule (option visible in Figure 6) to proactively identify and alert on similar suspicious activities in the future. Custom detections are essential for enhancing an organization’s ability to detect and respond to security threats effectively, automate alerts, reduce false positives, and stay ahead of evolving cyber threats. Learn more about custom detection rules and how to create them here.
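
As a hedged starting point, a custom detection can reuse the pattern from Query 1. The sketch below flags any OAuth app that accesses an unusually high volume of mail items; the one-hour window and the 100-event threshold are illustrative assumptions to tune per tenant, and the query keeps the Timestamp and ReportId columns that custom detection rules require:

// Illustrative custom detection: flag OAuth apps accessing mail items at high volume
CloudAppEvents
| where Timestamp > ago(1h)
| where RawEventData.Workload == "Exchange" and ActionType == "MailItemsAccessed"
| extend AppId = tostring(iff(isempty(RawEventData.ClientAppId), RawEventData.AppId, RawEventData.ClientAppId))
| where isnotempty(AppId)
| summarize AccessCount = count(), Timestamp = max(Timestamp), ReportId = take_any(ReportId) by AppId, AccountObjectId
| where AccessCount > 100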


 


By selecting one of the records (Figure 8), the SOC analyst can get more information about the impacted user to act accordingly and “stop the bleeding.” They can take immediate action to halt or mitigate the security breach and prevent further access (changing passwords, revoking access privileges, or even disabling the compromised account), all of which minimizes the damage. After the bleeding has stopped, the data helps security teams conduct a thorough investigation to determine the root cause of the incident. Understanding how the breach occurred is essential for preventing similar incidents in the future.


 




Figure 8. Advanced hunting inspected record details.


 


The app impersonation security incident shows the benefits of App governance machine learning in detecting malicious applications, which offers an additional layer of protection for your users and organization. The integration of App governance with advanced hunting capabilities provides SOC teams with the tools and insights needed to proactively detect, respond to, and mitigate security threats in SaaS OAuth applications. It allows for a more comprehensive and data-driven approach to SaaS app security, helping organizations protect their critical data and assets.

Azure Container Apps Eligible for Azure Savings Plan for Compute


Azure Container Apps is now eligible for Azure savings plan for compute! With Azure Container Apps, you can build and deploy fully managed, cloud-native apps and microservices using serverless containers. All Azure Container Apps regions and plans are eligible to receive 15% savings (1 year) and 17% savings (3 years) compared to pay-as-you-go when you commit to an Azure savings plan.


  


Learn about Azure Savings Plan for Compute


The Azure savings plan for compute unlocks lower prices on select compute services when you commit to spend a fixed hourly amount for 1 or 3 years. You choose whether to pay all upfront or monthly at no extra cost. As you use select compute services across the world, your usage is covered by the plan at reduced prices, helping you get more value from your cloud budget. During the times when your usage is above your hourly commitment, you’ll simply be billed at your regular pay-as-you-go prices. With savings automatically applying across compute usage globally, you’ll continue saving even as your usage needs change over time. 


 


Here is an example of how Azure savings plan for compute works. If you buy a 1-year savings plan and commit to $5 USD of spend per hour, Azure automatically applies the savings plan to compute usage globally on an hourly basis, up to the example $5 hourly commitment. Hourly Consumption plan vCPU usage for Azure Container Apps in West US would be billed at the lower savings plan price of $0.07344 instead of $0.0864 for active usage, as follows:



  • Usage at or below $5 USD for the hour is billed at lower savings plan prices and covered by the savings plan hourly commitment. Note that you would pay the $5 USD amount every hour, even if usage is less.

  • For usage above $5 USD for any given hour, the first $5 USD of usage is billed at lower savings plan prices and covered by the savings plan hourly commitment. The amount above $5 USD is billed at pay-as-you-go prices and will be added to the invoice separately.

  • Azure savings plan for compute is first applied to the product that has the greatest savings plan discount when compared to the equivalent pay-as-you-go rate (see your price list for savings plan pricing). The application prioritization is done to ensure that you receive the maximum benefit from your savings plan investment. 
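
To make the hourly mechanics concrete, here is a small illustrative TypeScript sketch (not billing code; the rates are the example West US Consumption plan vCPU prices quoted above, and real invoices have more dimensions):

// Split one hour of compute usage between savings plan and pay-as-you-go billing.
// commitment: hourly commitment in USD; usageUnits: billable units consumed this hour.
function hourlyBill(commitment: number, usageUnits: number, paygRate: number, planRate: number): { planCharge: number; paygCharge: number } {
  // Usage is covered by the plan at the discounted rate, up to the commitment.
  const coveredUnits = Math.min(usageUnits, commitment / planRate);
  const overflowUnits = usageUnits - coveredUnits;
  return {
    planCharge: commitment,               // the full commitment is paid every hour
    paygCharge: overflowUnits * paygRate, // anything above it bills at pay-as-you-go
  };
}

// Example: $5/hour commitment, 80 vCPU-hours of usage in one hour
console.log(hourlyBill(5, 80, 0.0864, 0.07344));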


Source: Azure savings plan for compute


 


Get Started Today


Start saving now and do more with less. Learn more about Azure Container Apps with these resources:



 


Learn more about Azure savings plan for compute with these resources:



 


Already taking advantage of Azure Container Apps and the Azure savings plan for compute? Tell us what you think so far in the comments.

Enhance Your Online Security: A Step-by-Step Guide to Implementing Two-Factor Authentication (2FA)


As you embark on your journey back to college or university, it’s essential to prioritize the security of your digital assets, especially when dealing with platforms like Microsoft Azure. One of the most effective ways to fortify your online defenses is by implementing Two-Factor Authentication (2FA). In this comprehensive guide, we’ll walk you through, step by step, how to set up 2FA and how to recover your authenticator settings if you lose or misplace your device.


The Step-by-Step Process to Safeguard Your Azure Account
By following these steps, you’ll have successfully added Two-Factor Authentication to your Microsoft Azure account, significantly enhancing your online security. Remember that you can also easily remove your account’s connection to a previous Microsoft Authenticator app.



Thank you for prioritizing the security of your online accounts, and we hope this guide has been helpful. If you have any questions or encounter issues along the way, feel free to reach out for assistance. Your digital safety is paramount, and taking these steps will go a long way in safeguarding your valuable information.


 


The Step-by-Step Process


Step 1


Go to My Sign-Ins | Security Info | Microsoft.com


 


Step 2


Sign into your account


Make sure that, on the left-hand side of your screen, you are on Security info.


 


Note: This page will look different for you; I hid my email address and the device name that the previous Authenticator app is connected to.




 


Step 3


Click on Add sign-in method 


 




 


Step 4


Choose a method by clicking on the drop down icon


 




 


 




 


Step 5


Here, I will be using Microsoft Authenticator


Select Microsoft Authenticator


 




 




 


Step 6


Click on Add


 




 


Step 7 


Download the Microsoft Authenticator app if you do not already have it on your phone or tablet.


 


Step 8


After completing Step 7, click Next.


 




 


Step 9


On the Set up your account screen, click on Next 


 




 




 


Step 10


Scan the barcode by 



  • Opening the Authenticator app on your device

  • Clicking on the + sign at the top right of your screen

  • Selecting the type of account you are adding


These are the options:


Personal account 


Work or school account 


Other (Google, Facebook, etc.)


 


Step 11


Choose the option that matches your account; I clicked Work or school account.


There is a new option for adding a work or school account that lets you:


Sign in 


Scan QR code 


Cancel


 


Click on Scan QR code


 


If this does not work for you, that is fine. For me, it showed an error message that reads:


You’ve already used this QR code to add an account. Generate a new QR code and try again.


 


Let’s resolve this together. You can use either of these steps:



  • Start the process again and scan the image (this worked for me)

  • Or choose to enter the code manually on your device and add the code and URL


Adding the code and URL 



Step 12


Click on Can’t scan image


 




 


Step 13


Whether you are continuing from Step 11 or Step 12:


Click on Next 


 




 


Step 14


After this step, a number is shown for you to enter in your Authenticator app.


Enter the number and click on Next 


 




Note: You can easily delete the connection between your account and the previous Microsoft Authenticator app, so if you lose or change your device, simply follow the steps above to add a new device.



Thank you very much for reading. Keep secure and stay safe.


 


 


 


 

Lesson Learned #438: Sync failed with state: SyncSucceededWithWarnings, Syncgroup: XXX


Working on a service request, our customer reported the following error message: Sync failed with state: SyncSucceededWithWarnings, Syncgroup: XXX – Error #1: SqlException ID: XXXX-NNN-ZZZZ-YYYY-FFFFFFFF, Error Code: -2146232060 – SqlError Number:547, Message: The INSERT statement conflicted with the FOREIGN KEY constraint “FK_Table1”. The conflict occurred in database “DB1”, table “dbo.Table1”, column ‘Id’. SqlError Number:3621, Message: SQL error with code 3621


 


Decoding the Error Message:


The SyncSucceededWithWarnings state signifies that while the synchronization ran to completion, some errors, like the SqlException indicated above, occurred during the process.


 


Navigating Through the Errors:


 



  1. Foreign Key Constraint Error Analysis:


The error message indicates a failed attempt to insert a record into a table due to a foreign key constraint error. The conflict arises when the inserted record’s foreign key doesn’t align with any existing primary key in the referenced table.


 



  2. Error Resolution Steps:




  • Identify and Analyze: Utilize tools like SSDT or SSMS to compare the databases and pinpoint the conflicting records causing the constraint error (see the query sketch after this list).




  • Data Correction: Engage in a meticulous process of amending the data by adding the absent records in the referenced table, or adjusting the foreign key values in the records to be inserted, ensuring they correspond to existing primary keys in the referenced table.




  • Sync Group Recreation: In moments of low traffic or scheduled downtime, recreate the sync group to implement the rectified records seamlessly.




  • Reinitiate Synchronization: After addressing the conflicting records, reinitiate the synchronization process, which should now proceed without constraints or warnings.
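
To make the Identify and Analyze step concrete, here is a minimal T-SQL sketch. The parent table and column (dbo.Table1, Id) come from the error message; the child table and foreign key column are hypothetical placeholders to adapt to the actual FK_Table1 constraint definition. It lists child rows whose foreign key value has no matching parent row:

-- Find child rows whose foreign key value has no matching parent row.
-- "dbo.ChildTable" and "Table1Id" are placeholders; use the table and
-- column named in your FK_Table1 constraint definition.
SELECT c.*
FROM dbo.ChildTable AS c
LEFT JOIN dbo.Table1 AS p
    ON p.Id = c.Table1Id
WHERE c.Table1Id IS NOT NULL
  AND p.Id IS NULL;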




 


 

An introduction to GitHub Advanced Security, now native in Azure DevOps


According to Verizon's 2020 Data Breach Investigations Report, 80% of security breaches in web applications are related to stolen credentials, and 83% of applications today have at least one security vulnerability. These gaps are prime opportunities for malicious actors to exploit your applications and cause serious damage.


Organizations that adopt DevSecOps practices (security checks, daily vulnerability scanning, and so on) reduce recovery time for security issues by 72% compared to organizations that run these validations only occasionally.


To address these gaps, in a world where software development is constantly and rapidly transforming the communities around us, Microsoft brings GitHub Advanced Security for Azure DevOps (GHAzDO) into the Azure DevOps suite of services.


 


What is GHAzDO?


As mentioned, GHAzDO is a service that provides security capabilities for a shift-left approach (the practice of starting application testing earlier and continuing it throughout the software development lifecycle), making it simpler to diagnose and prevent security gaps in your application at earlier stages of development.


GHAzDO is divided into three areas: Secure Dependencies, Secure Code, and Secure Secrets.


 


Secure Dependencies


Attacks on open-source tooling are increasingly frequent. With Dependency Scanning, you can identify vulnerabilities in packages present in your code and receive a set of guidelines on how to mitigate those openings.




 


Secure Code


With Code Scanning, GHAzDO includes a static analysis tool capable of detecting hundreds of security vulnerabilities in code, such as SQL injection, XPath injection, and authorization bypass, across a wide variety of languages (C/C++, C#, Go, Java/Kotlin, JavaScript/TypeScript, Python, and so on). All of this runs within the context of Azure Pipelines, over the code in Azure Repos. In other words, it is a native tool designed to feel completely natural to Azure DevOps users.




 


Secure Secrets


Half of all application security breaches are related to exposed credentials. With Secret Scanning, you can list every secret exposed in the repository and the files that contain them. Beyond that, with a single click you can block pushes that contain secrets, preventing security gaps before they happen.


Once a secret is exposed in the repository, it becomes part of the commit history. In that situation, you must revoke the secret and update every resource that might use it to a new one. If any resource is overlooked, the application can become unavailable. On top of that, you will also need to reset the repository history to the commit prior to the moment the secret was exposed. If the secret was exposed a considerable time ago, this can seriously impact the work developed since then, causing significant delays for the development team.


Therefore, making sure that secrets, credentials, or any other sensitive information is never exposed in the repository (push protection) is extremely important for the health and security of the application.




To learn more about GitHub Advanced Security for Azure DevOps, see: https://aka.ms/advanced-security


 

Unlocking customer value with Microsoft Dynamics 365 Field Service through connected services


This post was co-authored by Lax Gopisetty, Vice President, Global Practice Head, Microsoft Business Applications & Digital Workplace Services, Infosys Ltd.

In an age defined by single-click purchases, instant deliveries, and personalized experiences, customer expectations continue to rise, and frontline technicians are expected to meet these ever-changing demands. When customers face a problem, they want it fixed fast and right the first time. For many organizations, customer experience is both a challenge and an opportunity to differentiate from the competition.

It is no longer acceptable for technicians to operate on disparate technologies that individually are good enough to execute work orders, manage assets, and dispatch resources with real-time support. Timely resolution is key in field service, and arming frontline technicians with intuitive solutions that combine workflow automation, scheduling algorithms, and mobility can significantly enhance the customer experience. Tools that empower field technicians with timely inputs to focus on their core responsibilities, and that enable processes to track each work order through closure and billing, are now essential.

For example, solutions that unlock efficiencies for telecommunications providers with field service automation, empower medical device service technicians with reduced downtime, maintain safe and highly automated facility management operations, and manage smart elevator service with Internet of Things (IoT)-driven field service are all recipes for greater customer satisfaction.


Microsoft Dynamics 365 Field Service integration supports positive customer experiences

Dynamics 365 Field Service integrates with Outlook, Microsoft Teams, and Microsoft Viva Connections so that frontline workers and managers can create, view, and manage work orders within Outlook and Teams. This integration enhances collaboration between dispatchers, frontline technicians, and managers by enabling work order data to sync automatically between Dynamics 365 and Microsoft 365. Additionally, frontline technicians can quickly start their day with access to key workday information at a glance, with work orders visible as Tasks from the Viva Connections homepage in Teams. Dynamics 365 and Microsoft 365 empower technicians with the right information to resolve issues the first time, which adds a great deal to creating a positive customer experience.

For example, a leading energy supplier based out of the UK partnered with Infosys to establish itself as a leader in the energy-as-a-service market by offering best-in-class customer experience. The connected field service-based solution unified the capabilities of Dynamics 365 and Microsoft 365 to unlock a leaner, more flexible business model that also enabled future scalability to ensure:

  • Better workforce management through flexible scheduling, route optimization, and quota management.
  • Field job execution via remote supervision, site awareness/recording, and offline mode.
  • Customer intimacy powered by service history management, technician visibility, voice of customer, and closed loop tracking.
  • Superior job outcomes powered by higher first-time resolution rates and reduced job aborts.

Connected field service helped redefine the leading energy supplier’s customer engagement model with a seamless work order management process. From streamlining work order creation and scheduling the best-suited frontline technician to receiving remote expert assistance and integrating asset management, Dynamics 365 enabled the customer to transform their field operations. Additional engagement highlights include:

  • Seamless migration from more than 20 legacy disparate business apps onto Dynamics 365.
  • Implemented core business functionalities with over 75 percent out-of-the-box feature fitment.
  • Six phased incremental rollouts to enable more than 1,500 field technicians and more than 600 internal users.
  • On track to reduce overall cost of IT operations by over 25 percent.

The leaner, AI-powered, and truly automated business has unleashed novel revenue streams with infinite potential for the client:

Growth segment: Smart new connections
Value delivered: Manage the smart new connections—such as customer management (property developers), lead management, opportunity management, quote management, billing, consolidated billing, and disputes.

Growth segment: Smart field connections
Value delivered: Provide onsite service for smart field connections—work order management, skills management, scheduling management, capacity management, and resource management.

Growth segment: Electric vehicles (e-mobility)
Value delivered: Manage electric vehicle (EV) meter installation services—managing the sales processes for business-to-business (B2B) customers, including installation.

This UK-based leading energy supplier is now well-positioned to drive its future growth. The organization is supported by a skilled and engaged workforce that works seamlessly with connected and leaner processes that together offer a sustainable competitive advantage.

Standardizing and automating processes through connected field service

Field Service continues to break ground into unexplored industries. Capabilities like GPS and routing, which enable timely visits and quicker resolution, are saving the day for thousands of field service professionals. They are now able to summarize completed tasks with inline Microsoft Power Apps component framework (PCF) capability.

Field service solutions must always be driven by an organization’s unique priorities, pain points, and process nuances. Partners like Infosys are co-innovating with clients to address these challenges with Microsoft Power Platform and its extensibility components. They are enabling nontechnical business users to build applications that cater to their unique requirements without the aid of IT experts.

The emergence of AI-embedded innovations like Copilot in Dynamics 365 Field Service will enhance service further. From creating work orders with the right information and assigning them to the right technicians, to equipping technicians with sufficient support to successfully complete jobs, Copilot will help streamline critical frontline tasks. These advanced functionalities will help companies genuinely standardize and automate field service processes.

Organizations competing in a market with high turnover are using mixed reality-based Microsoft Dynamics 365 Guides for remote support and collaboration. This results in accelerated training with context and seamless transfer of information, insights, and skills, which help in lowering overall costs.

Technology is key to building a scalable and efficient field service operation. However, a significant portion of success still rides on the technician who is delivering the service. So, it is imperative for service organizations to unify field operations, frontline technicians, and customers with connected digital platforms, to unlock value—because service is no longer a cost center for organizations.

Learn more about Dynamics 365 Field Service

Learn how Dynamics 365 Field Service can help you transform your service operations and deliver exceptional service. And read how Copilot in Dynamics 365 Field Service can accelerate service delivery, boost technician productivity, and streamline work order management with next-generation AI. Watch the video below to see it in action.



Microsoft Learn: Four key features to help expand your knowledge and advance your career


As a Microsoft Most Valuable Professional (MVP) and a Microsoft Certified Trainer (MCT), I can say from experience that if you want to improve your skills, expand your knowledge, and advance your career, Microsoft Learn can be an essential resource for you. This family of skill-building offerings brings together all Microsoft technical content, learning tools, and resources, providing practical learning materials both for professionals and beginners. Among the many features that Microsoft Learn offers, four of my favourites are collections, career path training, Practice Assessments, and exam prep videos.


 


1. Collections


Collections let you customise your own learning journey. Often you come across something on Microsoft Learn that’s interesting, and you want to save it for later. This is where collections come in handy. Collections let you organise and group content on Microsoft Learn—whether it’s a module about a particular topic, a learning path, or an article with technical documentation. You can even share your collections via a link with others.


 


I frequently create collections to keep track of all the content that will be useful in preparing for a Microsoft Certification exam. This might include the official learning path, along with any extra documentation that could help during exam prep. To place a module or learning path into a collection, from the Training tab, on the content of interest, select Add. You can revisit collections from your Microsoft Learn profile.


 




The Add button on a Microsoft Learn training module.


 


2. Career path training


As you may have already discovered, one of the challenges to learning new technologies is finding the right resources for your skill-building needs. Perhaps you’re not sure where to begin your learning journey. I’ve found that a good starting point is to explore learning content based on your career path or on one that interests you. You can find this option on the Microsoft Learn Training tab, and it points you to a collection of modules, learning paths, and certifications that are relevant and tailored to your chosen job role. Whether you want to become a business user, a data scientist, a solutions architect, a security engineer, or a functional consultant, you can find the appropriate learning content for your role and level of expertise. Plus, with career path training, you can learn at your own pace, gain practical experience, and validate your skills with Microsoft Certifications.


 




Career path collection options on Microsoft Learn.


 


3. Practice Assessments


If you’re preparing to earn a Microsoft Certification, you can get an idea of what to expect before you take the associated exam by trying a Practice Assessment. This option is available for some certifications and is a great way to gauge the topics you’re strong in and the ones for which you could use more practice. Practice Assessments help you build confidence by giving you a feel for the types of questions, style of wording, and level of difficulty you might encounter during the actual exam.


 




Sample Practice Assessment questions.


 


If your certification exam has a Practice Assessment available, it’s listed on the Microsoft Learn exam page, under Schedule exam. Just select Take a free practice assessment.


 


4. Exam prep videos


Other valuable Microsoft Learn resources to help you get ready for earning a Microsoft Certification are exam prep videos, available for some certifications. These videos are designed to help you review the key concepts and skills that are covered on the exam and to provide tips and tricks on how to approach the questions. They offer an engaging way to absorb essential knowledge and skills, making it easier to grasp technical concepts and their practical applications. The videos, hosted by industry experts, provide a structured, guided approach to the exam topics.


 


These exam prep videos complement your other Microsoft Learn study materials. Even if you consider yourself an expert on a topic, the videos are a good way to refresh your memory before exam day. To browse through available exam prep videos, check out the Microsoft Learn Exam Readiness Zone and search for your topic of interest or exam number, or even filter by product.


 


Share your favourite Microsoft Learn features


Creating your own collections of content, exploring new career paths, or preparing to earn Microsoft Certifications by taking Practice Assessments or watching exam prep videos are just some of the ways that Microsoft Learn can help you achieve your skill-building and certification goals, and they’re some of my favourite features in Microsoft Learn. What are your favourites? Share your top picks with us, and help others on their learning journeys.


 


Meet Rishona Elijah, Microsoft Learn expert 


Rishona Elijah is a Microsoft Most Valuable Professional (MVP) for Business Applications and a Microsoft Certified Trainer (MCT). She works as a Trainer & Evangelist at Microsoft Partner Barhead Solutions, based in Australia. She is also a LinkedIn Learning instructor for Microsoft Power Platform certifications. Rishona has trained thousands of individuals on Microsoft Power Platform and Dynamics 365, delivering impactful training sessions that empower them to use the no-code/low-code technology to build their own apps, chatbots, workflows, and dashboards. She enjoys sharing her knowledge and ideas on her blog, Rishona Elijah, in addition to speaking at community conferences and user groups.


 


“Power Platform with Rishona Elijah” is a Microsoft learning room that provides a supportive and encouraging environment for people starting their Microsoft Power Platform journey. The room offers assistance and guidance on Microsoft Power Platform topics, including certifications, Power Apps, Power Virtual Agents, Power Automate, and AI Builder. It’s also a great space to network with like-minded peers and to celebrate your success along the way. Sign up for the “Power Platform with Rishona Elijah” learning room.


 


Learn more about Rishona Elijah


 

Working With Field Service Mobile Customizations 


Field Service Mobile is a Dynamics 365 Power Platform model-driven application. This offers several advantages to the mobile application, including re-use of forms and views and a consistent user experience across web, mobile, and tablet.

The Power Platform also offers significant customization opportunities, whether customizing forms, adding business logic, or integrating with other Power Platform capabilities like Power Automate, canvas apps, or PCF controls. These capabilities make the Field Service Mobile application uniquely positioned to streamline your workflows, improve data quality, and enhance your user experience.

Customization Best Practices 

Customizing the Field Service Mobile application is a balance of enabling an ideal workflow for your business and providing the best possible user experience for your Frontline Workers. This balance must consider data availability of the mobile workforce, along with application performance and the overall user experience.  

In this blog post we’ll share some of the key best practices when evaluating and implementing customizations. 

  1. Use the default Field Service Mobile app module. The out-of-the-box Field Service Mobile app module has all the basic features and functionality your frontline workers require to get started with Field Service. Custom app modules can be used with the Field Service Mobile application but will not include some of the internal business logic such as Travel Calculations. Another advantage of using the default app module is that it will automatically receive product updates over time, while additional effort would be required to merge the same enhancements into a custom app module.  
  2. Avoid using HTML Web Resources.  Web Resources have many limitations on a mobile application when working with offline mode. It is highly recommended to use PowerApps Component Framework (PCF) controls, which are a better option for a more consistent cross-platform experience without the same limitations. 
    • Tip: If your situation necessitates the use of custom web resources, use code splitting and check code coverage in a browser to ensure only the minimum amount of code is loaded. Package shared code in its own shared web resource library instead of duplicating it in each consuming resource. 
    • Tip: Be sure to define a unique namespace for custom JavaScript libraries to avoid having functions overwritten by other functions in another library. Learn how to write your first client script in model-driven apps
    • Tip: If using Offline mode, be sure to test your customizations on the mobile device in Airplane mode and variable cellular network conditions.  
  3. Handle errors properly and present the right message to end users. When implementing customizations, it is very important to handle edge-cases and errors in a way that provides a positive experience for your end users. This is especially true for async calls and network errors, where the Frontline Worker may have different results depending on devices network state.  
  4. Use Xrm Web APIs instead of XHR/fetch calls directly to the server. Xrm Web APIs route correctly to the local offline database or to the server based on the offline configuration and the network state of the app (see the sketch after this list). 
    • Making direct server calls from the mobile application is not recommended, as they can be unreliable and fail unexpectedly in poor network conditions. Instead, ensure that all dependencies are in the offline data store by configuring the mobile offline profile with the data necessary for your user scenarios.   
    • If server calls are necessary, build an appropriate user experience to handle cases where the call fails or the response is slow to return from the server. Triggering network calls from explicit user actions, with an interface giving visual cues that a network call is happening and a response is pending, provides a better experience for the Frontline Worker.  
    • If using onload/onchange/command handlers and fetching data with Xrm Web APIs, make sure you test the impact of those calls on application performance in various network conditions. 
  5. Optimize resources for bandwidth. If adding custom JavaScript or images, be sure to optimize files that are downloaded to the device. We recommend always trimming and compressing your JavaScript files and using SVG images instead of PNG to save bandwidth.  
  6. Declare solution dependencies between commands, web resources, and strings. Dependencies must be used to make a web resource available offline. For example, when an entity/form is enabled for offline usage, the JavaScript that is attached to the form for onload/onsave would also be available offline. Other files, such as localization XML files, need to be added as a dependency of your JavaScript so that they will also be available offline. Learn more about web resource dependencies.
  7. Be aware of timing issues or race conditions. This is especially relevant when dealing with async calls. Test by adding network latency and CPU throttling to ensure a positive experience in real-world conditions. 
  8. Use Business Rules as a first choice over custom client-side JavaScript. Business rules provide a mechanism to implement business logic with some guardrails to avoid some of the complexity that comes with custom JavaScript code. Please be aware there are some limitations with business rules, such as cases when OnChange events are required. Evaluate your business scenarios and choose the best path for your organization. 
    • Tip: If using JavaScript-based business logic, make sure you fetch minimal data and avoid joins/sorting if not needed.  
  9. Leverage out of box controls. As much as possible use out of the box controls, such as the Booking Calendar Control, which will be easier to support and receive product enhancements over time. 
  10. When enabling offline mode, make sure forms and views are aligned with configuration of the mobile offline profile. The individual configuring the forms and views should work closely with the person who will configure the mobile offline profile to ensure tables which are enabled on views will be available while running in offline mode. Be sure to include error handling if there are instances when an entity will not be available while offline.  
  11. Leverage tools to debug customizations. Debugging is important when introducing JavaScript customizations to your experience. Debugging a mobile app has unique challenges versus a web browser. This is especially true with capabilities like Offline mode are enabled on the mobile app. To meet this need, leverage debugging tools shipped with the Android and Windows model driven apps. Detailed steps to debug are found in Power Apps documentation.  
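
As a minimal sketch of best practice 4, the following JavaScript uses Xrm.WebApi (the standard model-driven apps client API) instead of a direct fetch call; the entity, column, and function names are illustrative:

// Illustrative helper: load an account name through Xrm.WebApi, which routes
// to the offline store or the server depending on the app's network state.
async function loadAccountName(accountId) {
    try {
        const account = await Xrm.WebApi.retrieveRecord("account", accountId, "?$select=name");
        return account.name;
    } catch (error) {
        // Surface failures explicitly so the Frontline Worker sees a clear
        // message instead of a silent failure while offline.
        await Xrm.Navigation.openAlertDialog({ text: `Could not load the account: ${error.message}` });
        return null;
    }
}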

Customizing a model driven application can be a powerful way to enhance the user experience and functionality of your solution. However, it also requires careful planning and testing to ensure optimal performance, usability, and compatibility. In this blog post, we have shared some best practices and tips on how to customize your model driven application effectively. We hope you have found this information useful and that you will apply it to your own projects. Thank you for reading and happy customizing! 

Additional Resources 


Azure Functions: Node.js v4 programming model is Generally Available


The Azure Functions team is thrilled to announce General Availability of version 4 of the Node.js programming model! This programming model is part of Azure Functions’ larger effort to provide a more flexible and intuitive experience for all supported languages. You may be aware that we announced General Availability of the new Python programming model for Azure Functions at MS Build this year. The new Node.js experience we ship today is a result of the valuable feedback we received from JavaScript and TypeScript developers through GitHub, surveys, user studies, as well as suggestions from internal Node.js experts working closely with customers. 


 


This blog post aims to highlight the key features of the v4 model and also shed light on the improvements we’ve made since announcing public preview last spring. 


 


What’s improved in the V4 model? 


 


In this section, we highlight several key improvements made in the V4 programming model. 


 


Flexible folder structure  


 


The existing V3 model requires that each trigger be in its own directory, with its own function.json file. This strict structure can make it hard to manage if an app has many triggers. And if you’re a Durable Functions user, having your orchestration, activity, and client functions in different directories decreases code readability, because you must switch between directories to look at the components of one logical unit. The V4 model removes the strict directory structure and gives users the flexibility to organize triggers in ways that makes sense to their Function app. For example, you can have multiple related triggers in one file or have triggers in separate files that are grouped in one directory. 


 


Furthermore, you no longer need to keep a function.json file for each trigger in the V4 model, as bindings are configured in code! See the HTTP example in the next section and the Durable Functions example in the “More Examples” section. 
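
For example, using the v4 APIs shown in detail in the next section, a single file can register an HTTP trigger and a related queue trigger side by side; the function names and queue name below are hypothetical:

const { app } = require('@azure/functions');

// Hypothetical example: an HTTP trigger and the queue trigger that processes
// its work, registered side by side in one file as one logical unit.
app.http('enqueueOrder', {
    methods: ['POST'],
    handler: async (request, context) => {
        context.log('Order received');
        return { status: 202 };
    },
});

app.storageQueue('processOrder', {
    queueName: 'orders',               // hypothetical queue name
    connection: 'AzureWebJobsStorage', // app setting holding the storage connection
    handler: async (message, context) => {
        context.log('Processing order:', message);
    },
});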


 


Define function in code 


 


The V4 model uses an app object as the entry point for registering functions instead of function.json files. For example, to register an HTTP trigger responding to a GET request, you can call app.http() or app.get() which was modeled after other Node.js frameworks like Express.js that also support app.get(). The following shows what has changed when writing an HTTP trigger in the V4 model: 


 





















V3 (JavaScript):

module.exports = async function (context, req) {
  context.log('HTTP function processed a request');

  const name = req.query.name
    || req.body
    || 'world';

  context.res = {
    body: `Hello, ${name}!`
  };
};

V3 (function.json):

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

V4 (JavaScript):

const { app } = require("@azure/functions");

app.http('helloWorld1', {
  methods: ['GET', 'POST'],
  handler: async (request, context) => {
    context.log('Http function processed request');

    const name = request.query.get('name')
      || await request.text()
      || 'world';

    return { body: `Hello, ${name}!` };
  }
});

V4 (function.json): not needed — bindings are configured in code.

 


 



 


Trigger configuration such as methods and authLevel, which was specified in a function.json file before, moves into the code itself in V4. We also set several defaults for you, which is why you don’t see authLevel or an output binding in the V4 example. 


 


New HTTP Types 


 


In the V4 model, we’ve adjusted the HTTP request and response types to be a subset of the fetch standard instead of types unique to Azure Functions. We use Node.js’s undici package, which follows the fetch standard and is currently being integrated into Node.js core. 


 


HttpRequest – body 














V3:

// returns a string, object, or Buffer
const body = request.body;
// returns a string
const body = request.rawBody;
// returns a Buffer
const body = request.bufferBody;
// returns an object representing a form
const body = await request.parseFormBody();

V4:

const body = await request.text();
const body = await request.json();
const body = await request.formData();
const body = await request.arrayBuffer();
const body = await request.blob();


 


HttpResponse – status 














V3:

context.res.status(200);
context.res = { status: 200 };
context.res = { statusCode: 200 };
return { status: 200 };
return { statusCode: 200 };

V4:

return { status: 200 };


 


To see how other properties like headers, query parameters, etc. have changed, see our developer guide. 


 


Better IntelliSense 


 


If you’re not familiar with IntelliSense, it covers the features in your editor like autocomplete and documentation directly while you code. We’re big fans of IntelliSense and we hope you are too because it was a priority for us from the initial design stages. The V4 model supports IntelliSense for JavaScript for the first time, and improves on the IntelliSense for TypeScript that already existed in V3. Here are a few examples: 


 


Screenshots: IntelliSense autocomplete and inline documentation for V4 model code in JavaScript and TypeScript.


 


More Examples 


 


NOTE: One of the priorities of the V4 programming model is to ensure parity between JavaScript and TypeScript support. You can use either language to write all the examples in this article, but we only show one language for the sake of article length. 


 


Timer (TypeScript) 


 


A timer trigger that runs every 5 minutes: 


 


 

import { app, InvocationContext, Timer } from '@azure/functions'; 

export async function timerTrigger1(myTimer: Timer, context: InvocationContext): Promise<void> { 
    context.log('Timer function processed request.'); 
} 

app.timer('timerTrigger1', { 
    schedule: '0 */5 * * * *', 
    handler: timerTrigger1, 
}); 

 


 


Durable Functions (TypeScript) 


 


Like in the V3 model, you need the durable-functions package in addition to @azure/functions to write Durable Functions in the V4 model. The example below shows one of the common patterns Durable Functions is useful for – function chaining. In this case, we’re executing a sequence of (simple) functions in a particular order. 


 


 

import { app, HttpHandler, HttpRequest, HttpResponse, InvocationContext } from '@azure/functions'; 
import * as df from 'durable-functions'; 
import { ActivityHandler, OrchestrationContext, OrchestrationHandler } from 'durable-functions'; 

// Replace with the name of your Durable Functions Activity 
const activityName = 'hello'; 

const orchestrator: OrchestrationHandler = function* (context: OrchestrationContext) { 
    const outputs = []; 
    outputs.push(yield context.df.callActivity(activityName, 'Tokyo')); 
    outputs.push(yield context.df.callActivity(activityName, 'Seattle')); 
    outputs.push(yield context.df.callActivity(activityName, 'Cairo')); 

    return outputs; 
}; 
df.app.orchestration('durableOrchestrator1', orchestrator); 

const helloActivity: ActivityHandler = (input: string): string => { 
    return `Hello, ${input}`; 
}; 
df.app.activity(activityName, { handler: helloActivity }); 

const httpStart: HttpHandler = async (request: HttpRequest, context: InvocationContext): Promise<HttpResponse> => { 
    const client = df.getClient(context); 
    const body: unknown = await request.text(); 
    const instanceId: string = await client.startNew(request.params.orchestratorName, { input: body }); 

    context.log(`Started orchestration with ID = '${instanceId}'.`); 

    return client.createCheckStatusResponse(request, instanceId); 
}; 

app.http('durableOrchestrationStart1', { 
    route: 'orchestrators/{orchestratorName}', 
    extraInputs: [df.input.durableClient()], 
    handler: httpStart, 
}); 

 


 


In the code above, we first set up and register an orchestration function. In the V4 model, instead of registering the orchestration trigger in function.json, you simply do it through the app object on the durable-functions module (here df). Similar logic applies to the activity, client, and entity functions. This means you no longer have to manage multiple function.json files just to get a simple Durable Functions app working!  


 


The final block sets up and registers a client function to start the orchestration. To do that, we pass an input object from the durable-functions module to the extraInputs array when registering the function. Like in the V3 model, we obtain the Durable Client using df.getClient() to execute orchestration management operations like starting a new orchestration. We use an HTTP trigger in this example, but you could use any trigger supported by Azure Functions, such as a timer trigger or Service Bus trigger. 


 


Refer to this example to see how to write a Durable Entity with the V4 model. 


 


What’s new for GA?  


 


We made the following improvements to the v4 programming model since the announcement of Public Preview last spring. Most of these improvements were made to ensure full feature parity between the existing v3 and the new v4 programming model. 


 



  1. AzureWebJobsFeatureFlags no longer needs to be set 
    During preview, you needed to set the application setting “AzureWebJobsFeatureFlags” to “EnableWorkerIndexing” to get a v4 model app working. We removed this requirement as part of the General Availability update. This also allows you to use Azure Static Web Apps with the v4 model. You must be on runtime v4.25+ in Azure or core tools v4.0.5382+ if running locally to benefit from this change.
      

  2. Model v4 is now the default

    We’re confident v4 is ready for you to use everywhere, and it’s now the default version on npm, in documentation, and when creating new apps in Azure Functions Core Tools or VS Code.


      

  3. Entry point errors are now exposed via Application Insights
    In the v3 model and in the preview version of the v4 model, errors in entry point files were ignored and weren’t logged in Application Insights. We changed the behavior to make entry point errors more obvious. It’s a breaking change for model v4 as some errors that were previously ignored will now block your app from running. You can use the app setting “FUNCTIONS_NODE_BLOCK_ON_ENTRY_POINT_ERROR” to configure this behavior. We highly recommend setting it to “true” for all v4 apps. For more information, see the App Setting reference documentation.


  4. Support for retry policy 

    We added support for configuring a retry policy when registering a function in the v4 model. The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached. A retry policy is evaluated when a Timer, Kafka, Cosmos DB, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Learn more about Azure Functions retry policy (see the sketch after this list).
     



  5. Support for Application Insights npm package
    Add the Application Insights npm package (v2.8.0+) to your app to discover and rapidly diagnose performance and other issues. This package tracks the following out-of-the-box: incoming and outgoing HTTP requests, important system metrics such as CPU usage, unhandled exceptions, and events from many popular third-party libraries.


  6. Support for more binding types
    We added support for SQL and Table input and output bindings. We also added Cosmos DB extension v4 types. A highlight of the latest Cosmos DB extension is that it allows you to use managed identities instead of secrets. Learn how to upgrade your Cosmos DB extension here and how to configure an app to use identities here 


  7. Support for hot reload

    Hot reload ensures your app will automatically restart when a file in your app is changed. This was not working for model v4 when we announced preview, but has been fixed for GA. 




 


How to get started?


Check out our Quickstarts to get started: 



See our Developer Guide to learn more about the V4 model. We’ve also created an upgrade guide to help migrate existing V3 apps to V4. 


 


Please give the V4 model a try and let us know your thoughts! 


 


If you have questions and/or suggestions, please feel free to drop an issue in our GitHub repo. As this is an open-source project, we welcome any PR contributions from the community. 

Try the new outbound dialing experience in Dynamics 365 Customer Service 


In the fast-paced world of customer service, efficient outbound calling communication is the cornerstone of success. Dynamics 365 Customer Service has long been a trusted platform for managing customer interactions. With the upcoming October release, we’ve listened to your feedback and delivered a significant enhancement that is set to transform outbound dialing.  

Currently, modifying the dialed number proves to be cumbersome given the inability to edit individual digits. Additionally, the absence of number validation increases the risk of agents dialing incorrect numbers. This is especially true when country codes are missing.

In the October release, you will find a more intuitive, streamlined, and efficient outbound dialing experience.

Editing flexibility

In the new outbound dialing experience, agents can continue to initiate calls from customer records. What’s changed? Now, modifying the dialed number is a breeze. The enhanced interface empowers agents to effortlessly edit the number before placing the call. This new experience also introduces auto-formatting, automatically structuring the number as agents type it. This functionality not only reduces errors but also highlights incomplete or invalid numbers. This newfound flexibility ensures accurate and effective outbound calling experiences.

Smart use of screen real estate

The improved interface is designed to optimize the available screen space. By default, the keypad is hidden, given most agents prefer to use the keyboard, which also allows for a clearer view of essential information. However, should agents need to utilize the keypad, it’s just a click away.

Recall recent numbers

Agents now have the power to swiftly call back recent numbers. With the ability to access the last 20 numbers dialed or received calls, agents can easily reconnect with customers. This feature is a time-saver and helps maintain a seamless communication flow.

Country and region support for outbound dialing

A significant advancement for administrators and agents alike is the support for specific countries and regions. Administrators can customize outbound profiles to allow calls only to selected countries or regions. This prevents accidental calls to unintended destinations, reinforcing precision in customer communication.

Intuitive profile selection and profile matching

Agents with multiple outbound profiles will appreciate the intuitive profile selection process. The dropdown menu displays the collective list of supported countries and regions from all profiles. Simplifying the process even further, agents need only enter the number they wish to dial. The system intelligently identifies the outbound profile supporting the dialed number’s country or region. This feature is coming as a fast follow in October.

The October release of Dynamics 365 Customer Service brings an outbound dialing experience with enhanced editing capabilities, smarter interface design, call history, number auto-formatting and validation, and refined country and region support. Agents can confidently and efficiently connect with customers, bolstering the delivery of exceptional customer service.

Learn more about outbound dialing

Watch a quick video demonstration.

To preview this feature, administrators should update the Settings definition for Enhanced outbound dialer experience to set the environment value to Yes. To learn more, see Call a customer in the voice channel | Microsoft Learn.
