Lesson Learned #356: Transaction log full in Azure SQL due to CDC job.

This article is contributed. See the original author and article here.

Today, we worked on a service request where our customer got the following error: Msg 9002, Level 17, State 2, Line 8
The transaction log for database '2d7c3f5a-XXXX-XZY-ZZZ-XXX' is full due to 'REPLICATION' and the holdup lsn is (194XXX:24X:1). In this article, I would like to share the lesson we learned.


 


We need to pay attention to the phrase "is full due to". In this case the reason is REPLICATION, which means the issue could be related to transactional replication or Change Data Capture (CDC).


 


To determine which one it is, if we are not using transactional replication, the next step is to check whether CDC is enabled by running the following query (see sys.databases (Transact-SQL) – SQL Server | Microsoft Learn):

select name, recovery_model, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled, * from sys.databases where database_id = db_id()


 


If the value of the is_cdc_enabled column is 1 and you are not using CDC, use the stored procedure sys.sp_cdc_disable_db to disable CDC for the database. sys.sp_cdc_disable_db (Transact-SQL) – SQL Server | Microsoft Learn
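For reference, a minimal example of the call (the procedure takes no parameters; run it while connected to the affected database, since Azure SQL Database does not support USE):

-- Disable Change Data Capture for the current database
EXEC sys.sp_cdc_disable_db;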


 


During the troubleshooting process, the execution of sys.sp_cdc_disable_db returned another error: Msg 22831, Level 16, State 1, Procedure sys.sp_cdc_disable_db_internal, Line 338 [Batch Start Line 6]
Could not update the metadata that indicates database XYZ is not enabled for Change Data Capture. The failure occurred when executing the command '(null)'. The error returned was 9002: 'The transaction log for database 'xxx-XXX-43bffef44d0c' is full due to 'REPLICATION' and the holdup lsn is (51XYZ:219:1).'. Use the action and error to determine the cause of the failure and resubmit the request.


 


In this situation, we need to add more space to the transaction log file, because there is no room left in the log to record the operation that disables CDC.
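The article does not show the exact command used to add space. As a hedged sketch, in Azure SQL Database one option is to raise the database maximum size, assuming the transaction log ceiling scales with it in your service tier. The database name and size below are placeholders:

-- Check current log usage first
SELECT total_log_size_in_bytes, used_log_space_in_bytes, used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Raise the database max size so the managed transaction log has room to grow (placeholder values)
ALTER DATABASE [YourDatabase] MODIFY (MAXSIZE = 500 GB);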


 


Once we had more space in the transaction log, we were able to disable CDC, and after disabling CDC, Azure SQL Database was able to back up the transaction log and mark it as reusable.


 


Finally, to try to speed up the reduction of this transaction log, we executed DBCC SHRINKFILE several times (see DBCC SHRINKFILE (Transact-SQL) – SQL Server | Microsoft Learn) and were able to reduce the size of the transaction log file.
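A minimal sketch of that command, assuming the transaction log has file_id 2 (confirm with sys.database_files first) and using a placeholder target size of 1024 MB:

-- Identify the log file id first
SELECT file_id, type_desc, name FROM sys.database_files;

-- Shrink the log file to roughly 1 GB
DBCC SHRINKFILE (2, 1024);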


 


Also, during the troubleshooting, we used the following DMVs to see how many VLFs we have and the space usage: sys.dm_db_log_info (Transact-SQL) – SQL Server | Microsoft Learn and sys.database_recovery_status (Transact-SQL) – SQL Server | Microsoft Learn


 

SELECT * FROM sys.dm_db_log_info(db_id()) AS l
select * from sys.database_recovery_status where database_id=db_id()
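If a summary is easier to read, this hedged variation counts the VLFs and adds sys.dm_db_log_space_usage (not mentioned in the article) to report overall log space usage:

SELECT COUNT(*) AS vlf_count FROM sys.dm_db_log_info(DB_ID());

SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb
FROM sys.dm_db_log_space_usage;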

 


 

Improve Labelling processes with new enhanced capabilities


This article is contributed. See the original author and article here.

Introduction

Effective labelling processes and configuration play a crucial role in optimizing warehouse operations. There are several reasons why accurate labelling and configuration are important.

Firstly, proper labelling and configuration enhance efficiency in a warehouse. When items are labelled and organized accurately, warehouse staff can quickly locate and identify products, reducing the time spent searching for items and ultimately boosting productivity.

Furthermore, clear and accurate labelling also reduces the likelihood of picking or shipping errors, which can lead to improved customer satisfaction and decreased costs associated with returns and corrections.

Lastly, proper labelling and configuration contribute to safety and compliance in a warehouse. By adhering to regulations and ensuring that hazardous materials or items with specific storage requirements are handled and stored correctly, the risk of accidents can be reduced.

As technology continues to advance, so do the tools available to improve labelling and configuration processes in warehouses. In Wave 1 2023, Microsoft Dynamics 365 SCM released several enhancements to support more advanced scenarios and bring extra capabilities to the labelling process.

License plate label layout

In 10.0.31 Microsoft Dynamics 365 SCM, a new License plate label layout was introduced for designing license plate labels. This feature lets you build more advanced license plate label layouts. Now LP layouts can have repeating structures and include header, body, and footer elements (for example, if you want to print item labels out of receiving or shipping work (similar to how wave labels currently work)). You can set up custom data sources with joined tables to print information from the related tables and define custom date, time, and number formats. This capability provides more flexibility in designing labels and removes some of the customization work needed to add data to the labels.

Custom label layouts

In 10.0.33, Microsoft Dynamics 365 SCM released a new Custom label layout feature.

This feature introduces a new Custom label layout type that allows you to build layouts from any data source. A new Print button is displayed automatically when a layout exists for the corresponding source. Users can print labels for any data, including but not limited to product labels, location labels, and customer labels.

It gives you the tools you need to create your own labels based on your business requirements, as well as to configure and print any labels from any source.

Print labels using an external service

In 10.0.34, Microsoft Dynamics 365 SCM provides a quick and simple method for linking Dynamics 365 to many of the most popular enterprise labeling platforms. Its seamless integration and flexible configuration options make for a pain-free, rapid implementation and let you create a smooth flow of communication and transactions to optimize your printing workflow.

It allows you to configure the HTTP(S) request that is sent, enabling integration with cloud-native and on-premises (if the firewall is opened or an Azure API is created) label printing services, including Zebra's cloud printing service (https://developer.zebra.com/apis/sendfiletoprinter-model), Loftware NiceLabel Cloud, or Seagull Scientific BarTender configured with REST APIs.

Conclusion

In conclusion, the continued evolution of technology is providing ever more sophisticated tools for improving labelling processes and configuration in warehouses. The enhancements released in Wave 1 2023 are just the latest example of how Microsoft Dynamics 365 SCM is staying at the forefront of this evolution and providing users with the tools they need to optimize their warehouse operations.


Would you like to learn more?

Print labels using an external service – Supply Chain Management | Dynamics 365 | Microsoft Learn

Print labels using the Loftware NiceLabel label service solution – Supply Chain Management | Dynamics 365 | Microsoft Learn

Print labels using the Seagull Scientific BarTender® label service solution – Supply Chain Management | Dynamics 365 | Microsoft Learn

License plate label layouts and printing – Supply Chain Management | Dynamics 365 | Microsoft Learn

Custom label layouts and printing – Supply Chain Management | Dynamics 365 | Microsoft Learn


Not yet a Supply Chain Management customer? 

Take a guided tour.

The post Improve Labelling processes with new enhanced capabilities appeared first on Microsoft Dynamics 365 Blog.


Empowering Accessibility: Language and Audio Document Translation Made Simple with Low-Code/No-Code


This article is contributed. See the original author and article here.

This solution architecture proposal outlines how to effectively utilize OpenAI's language model alongside Azure Cognitive Services to create a user-friendly and inclusive solution for document translation. By leveraging OpenAI's advanced language capabilities and integrating them with Azure Cognitive Services, we can accommodate diverse language preferences and provide audio translations, thereby meeting accessibility standards and reaching a global audience. This solution aims to enhance accessibility, ensure inclusivity, and gain valuable insights through the combined power of OpenAI, Azure Cognitive Services, and Power Platform.



Dataflow


Here is the process:




  1. Ingest: PDF documents, text files, and images can be ingested from multiple sources, such as Azure Blob storage, Outlook, OneDrive, SharePoint, or a 3rd party vendor.




  2. Move: Power Automate triggers and moves the file to Azure Blob storage. Blob triggers then get the original file and call an Azure Function.




  3. Extract Text and Translate: The Azure Function calls the Azure Computer Vision Read API to read multiple pages of a PDF document in natural formatting order, extract text from images, and generate the text with lines and spaces, which is then stored in Azure Blob storage. Azure Translator then translates the file and stores it in a blob container. Azure Speech generates a WAV or MP3 file from the original-language and translated-language text files, which is also stored in a blob container (see the sketch after this list).




  4. Notify: Power Automate triggers, moves the file to the original source location, and notifies users in Outlook and Microsoft Teams with an output audio file.
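To make step 3 above a bit more concrete, here is a rough, hypothetical sketch of the text-translation piece using the Azure Translator REST API (v3.0) from PowerShell. The key, region, target language, and input text are all placeholders, and the real solution would run this logic inside the Azure Function rather than interactively:

# Placeholders: replace with your Translator resource key and region
$translatorKey    = "<translator-key>"
$translatorRegion = "<translator-region>"

$headers = @{
  "Ocp-Apim-Subscription-Key"    = $translatorKey
  "Ocp-Apim-Subscription-Region" = $translatorRegion
}

# Text previously extracted by the Computer Vision Read API (placeholder value)
$body = '[{"Text": "Text extracted from the scanned document"}]'

# Translate to Spanish (to=es); the response contains the translated text
$result = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
  -Uri "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es"

$result[0].translations[0].text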




Without OpenAI


With OpenAI




For more on OpenAI, refer to: Transform your business with automated insights & optimized workflows using Azure OpenAI GPT-3 – Microsoft Community Hub


Alternatives


The Azure architecture utilizes Azure Blob storage as the default option for file storage during the entire process. However, it's also possible to use alternative storage solutions such as SharePoint, ADLS, or third-party storage options. For processing a high volume of documents, consider using Azure Logic Apps as an alternative to Power Automate. Azure Logic Apps can prevent you from exceeding consumption limits within your tenant and is a more cost-effective solution. To learn more, please refer to the Azure Logic Apps documentation.


 


Components


These are the key technologies used for this technical content review and research:



Scenario details


This solution uses multiple Cognitive Services from Azure to automate the business process of translating PDF documents and creating audio files in WAV/MP3 format for accessibility and a global audience. It's a great way to streamline the translation process and make content more accessible to people who may speak different languages or have different accessibility needs.


Potential use cases


By leveraging this cloud-based solution idea that can provide comprehensive translation services on demand, organizations can easily reach out to a wider audience without worrying about language barriers. This can help to break down communication barriers and ensure that services are easily accessible for people of all cultures, languages, locations, and abilities.


In addition, by embracing digital transformation, organizations can improve their efficiency, reduce costs, and enhance the overall customer experience. Digital transformation involves adopting new technologies and processes to streamline operations and provide a more seamless experience for customers.


It is particularly relevant to industries that have a large customer base or client base, such as e-commerce, tourism, hospitality, healthcare, and government services.

Introducing Azure App Spaces: Getting your code into the cloud as fast as possible


This article is contributed. See the original author and article here.

We are excited to announce Azure App Spaces (preview), one of the fastest and easiest ways to deploy and manage your web apps on Azure. Azure App Spaces is a portal-based experience that takes an app-first approach to building, deploying, and running your apps. App Spaces makes it easier for developers to get started using Azure, without needing to be an expert on the hundreds of different cloud services.


 


Detect the right Azure services from your repository


 




 


App Spaces lets you connect your GitHub repositories to Azure, and through analysis of the code inside your GitHub repository, suggests the correct Azure services you should use. Once you deploy, GitHub Actions is used to create a continuous deployment pipeline between your repositories and your newly provisioned cloud services. Once you’ve deployed your app via App Spaces, changes to your code will immediately be pushed to your connected Azure services. 


 


Bring your own repository or start from a template


 




 


App Spaces also provides sample templates, powered by Azure Developer CLI, that provide a helpful blueprint for getting started with Azure. You can use these templates to immediately create a GitHub repository, connect it to Azure, and provision a distinct set of services for the template scenario. Our templates include sample static websites, web apps, and APIs, in a variety of different languages.


 


Manage your app in a consolidated view


 




 


 


In addition to making it easier and faster to get started developing, App Spaces also provides a simplified, app-centric management experience. An “App Space” is a loose collection of cloud services that, collectively, comprise the app you are building. You can manage your compute, database, caching, and other key services all within the same, easy-to-use management experience.


 


To get started immediately, you can check out App Spaces here. You can also read our documentation to get a better look at what App Spaces can do for you.


 

Empowering every developer with plugins for Microsoft 365 Copilot


This article is contributed. See the original author and article here.

Generative AI models are ushering in the next frontier in interactions between humans and computers. Just like graphical user interfaces brought computing within reach of hundreds of millions of people three decades ago, next-generation AI will take it even further, making technology more accessible through the most universal interface—natural language.

The post Empowering every developer with plugins for Microsoft 365 Copilot appeared first on Microsoft 365 Blog.


Revolutionize your SAP Security with Microsoft Sentinel’s SOAR Capabilities


This article is contributed. See the original author and article here.

First, big kudos to Martin for crafting this amazing playbook and co-authoring this blog post.
Be sure to check out his SAP-focused blog for more in-depth insights!


 


The purpose of this blog post is to demonstrate how the SOAR capabilities of Sentinel can be utilized in conjunction with SAP by leveraging Microsoft Sentinel Playbooks/Azure Logic Apps to automate remedial actions in SAP systems or SAP Business Technology Platform (BTP).


 


Before we dive into the details of the SOAR capabilities in the Sentinel SAP solution, let's take a step back and do a very quick run-through of the Sentinel SAP solution.
The Microsoft Sentinel SAP solution empowers organizations to secure their SAP environments by providing threat monitoring capabilities. By seamlessly collecting and correlating both business and application logs from SAP systems, this solution enables proactive detection and response to potential threats. At its core, the solution features a specialized SAP data-connector that efficiently handles data ingestion, ensuring a smooth flow of information. In addition, an extensive selection of content, comprising analytic rules, watchlists, parsers, and workbooks, empowers security teams with the essential resources to assess and address potential risks.
In a nutshell: With the Microsoft Sentinel SAP solution, organizations can confidently fortify their SAP systems, proactively safeguarding critical assets and maintaining a vigilant security posture.


For a complete (and detailed) overview of what is included in the Sentinel SAP solution content, see Microsoft Docs for Microsoft Sentinel SAP solution


Now back to the SOAR capabilities! About a year ago, we published a blog post titled “How to use Microsoft Sentinel’s SOAR capabilities with SAP“, which discussed utilizing playbooks to react to threats in your SAP systems.


The breakthrough which the blogpost talked about was the use of Sentinel’s SOAR (Security Orchestration and Automated Response) capabilities on top of the Sentinel SAP Solution.
This means that we can not only monitor and analyze security events in real-time, we can also automate SAP incident response workflows to improve the efficiency and effectiveness of security operations.


In the previous blog post, we discussed blocking suspicious users using a gateway component, SAP RFC interface, and GitHub hosted sources.


In this post, we showcase the same end-to-end scenario using a playbook that is part of the OOB content of the SAP Sentinel Solution.


And rest assured, no development is needed – it’s all about configuration! This approach significantly reduces the integration effort, making it a smooth and efficient process!


 


Overview & Use case 


Let me set the scene: you’re the defender of your company’s precious SAP systems, tasked with keeping them safe. Suddenly Sentinel warns you that someone is behaving suspiciously on one of the SAP systems. A user is trying to execute a highly sensitive transaction in your system. Thanks to your customization of the OOB “Sensitive Transactions” watchlist and enablement of the OOB rule “SAP – Execution of a Sensitive Transaction Code”, you’re in the loop whenever the sensitive transaction SE80 is being executed. You get an instant warning, and now it’s time to investigate the suspicious behavior.


Sensitive Transactions watchlist with an entry for SE80


 


As part of the security signal triage process, it might be decided to take action against this problematic user and to (temporarily) kick them out of ERP, SAP Business Technology Platform, or even Azure AD. To accomplish this, you can use the automatic remediation steps outlined in the OOB playbook "SAP Incident handler- Block User from Teams or Email".


Screenshot for the OOB SAP playbook


By leveraging an automation rule and the out-of-the-box playbook, you can effectively respond to potential threats and ensure the safety and security of your systems. Specifically, in this blog post, we will use the playbook to promptly react to the execution of the sensitive transaction SE80, employing automation to mitigate any risks that may arise.


 


Now, it’s time to dive deeper into this OOB playbook! Let’s examine it closely to better understand how it works and how it can be used in your environment.


 


Deep dive into the playbook


To start off, we’ll break down the scenario into a step-by-step flow. 


Overview of the SAP user block scenario


The core of this playbook revolves around adaptive cards in Teams (see step 5 in the overview diagram), and relies on waiting for a response from engineers. As we covered earlier, Sentinel detects a suspicious transaction being executed (steps 1-4), and an automation rule is set up as a response to the “SAP – Execution of a Sensitive Transaction Code” analytic rule. This sets everything in motion, and the adaptive cards in Teams play a crucial role in facilitating communication between the system and the engineers.


Adaptive card for a SAP incident offering to block the suspicious user


As demonstrated in the figure above (which corresponds to step 5 in the step-by-step flow), engineers are presented with the option to block the suspicious user (Nestor in this case!) on SAP ERP, SAP BTP, or Azure AD.


Let's dive into this part of the playbook design to see how it works behind the scenes:


Screenshot for block user action in the playbook


In the screenshot, you'll notice three distinct paths for the "block user" action, each influenced by the response received in Teams. Of particular interest in this blog is the scenario where blocking a user on SAP ERP is required. This task is achieved through SOAP, providing an efficient means to programmatically lock a backend user using RFC (specifically BAPI_USER_LOCK).
When it comes to sending SOAP requests to SAP, there are various options available. Martin’s blog post provides a comprehensive explanation of these options, offering detailed technical insights and considerations. To avoid duplicating information, I encourage you to head over there for valuable insights on sending the SOAP requests.


 


When reacting to the adaptive cards, we recommend providing a clear and meaningful comment when blocking a user. This comment will be shared back to Sentinel for auditing and helping security operations understand your decision. The same applies when flagging false positives, as it helps Sentinel learn and differentiate between real threats and harmless incidents in the future. 


Screenshot of updated close reason on Sentinel fed with comment from Teams


And there you have it, a lightning-fast rundown of how (parts of) this amazing playbook works! 


 


Final words


And that’s a wrap for this blog post!


But hold on, don’t leave just yet, we’ve got some important closing statements for you:



  • Remember that you have the flexibility to customize this playbook to fit your specific needs. Feel free to delete, add, or modify steps as necessary. We encourage you to try it out on your own and see how it works in your environment!

  • For those who want to dive even deeper into the technical details (especially regarding SAP), be sure to check out Martin’s blog post. As the expert who designed this playbook, he provides an in-depth explanation of how to configure SAP SOAP interfaces, the authorizations for the target Web Service and RFC and much more! Trust me, it’s a fascinating read and you’re sure to learn a lot!

  • On a related note, Martin has also created another playbook that automatically re-enables the audit trail to prevent accidental turn-offs. This playbook is now accessible through the content hub as well.

  • And finally, for those who made it all the way to the end, we hope you enjoyed reading this blog post as much as we enjoyed writing it. Now go forth and automate your security like a boss!


 

How to force filtering on at least one criterion when querying ADX data in direct query mode


This article is contributed. See the original author and article here.

How to force filtering on at least one criterion


Another case for using dynamic M parameters and functions


Scenario


 


Recently I encountered the following issue working with a customer:



  • There are two main ways to slice the data before visualizing it in a PBI report.

  • The user can filter by one column or two columns coming from two different dimension tables.

  • If there is no selection on any of the two columns, the queries fail on lack of resources to perform all the joins and summaries.

  • While moving from filtering on one column to filtering on the other column, it is very natural to move through a state in which both filters are open, and the queries are very expensive and eventually fail.

  • The goal was to prevent these cases and not to attempt a query with no filtering.


We need to allow multiple selections and also allow selecting all values in any of the slicers, but there is no built-in way to require a selection in at least one of the two.


We could create a measure that will return blank if no selection is applied but this will not prevent the query from being executed to calculate the list of items on the visuals.


Solution


Using data in the help cluster, we’ll create a PBI report that will demonstrate a solution to our problem.


The data volume is not that big, so the report would return values even if no selection is applied, but we want to prevent this kind of query and force a selection on at least one filter.


The two columns used are cities and colors in the two dimension tables Customers and Products respectively.


We start by creating a function with two parameters, one with a list of cities and one with a list of colors.


The function returns all rows that fit the criteria.


A special value will be sent in the parameters if no selection was made, or all values are selected.


 


 


 


The function


 


.create-or-alter function FilterColorsCities(ColorsList:dynamic, CitiesList:dynamic) {
  let Catchall="__SelectAll__";
  SalesFact
  | where not(Catchall in (ColorsList) and Catchall in (CitiesList))
  | lookup kind=inner
      (Customers | where CityName in (CitiesList) or Catchall in (CitiesList))
      on CustomerKey
  | lookup kind=inner
      (Products | where ColorName in (ColorsList) or Catchall in (ColorsList))
      on ProductKey
}


 


The function applies a filter on the main table that will return 0 rows if both lists include the special value “__SelectAll__”.


At this point, the query will apply the lookups but will terminate immediately and will not use any resources.


Each one of the joined tables is filtered by the list of values, and the special value returns all values.


You can see the function in the help cluster.
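To illustrate how the sentinel value works, here is a hypothetical test call of the function in KQL. The city names are made up; the first argument (the colors list) passes the special value, so only the city filter applies:

FilterColorsCities(dynamic(["__SelectAll__"]), dynamic(["Berlin", "London"]))
| count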


In Power BI


We will navigate to the function and provide the special value "__SelectAll__" as the default value for both parameters:




 


 


 


 


 


We create two parameters to replace the default values in the step that invokes the function.




 


 


 


 




 


 


 


 


 


We use the Customers table and the Products table to create lists of cities and of colors by removing all other columns and removing duplicate rows.


It is recommended to use these tables in Dual mode.


Each column in these two tables is bound to one of the two parameters.




 


 


We need to allow multiple selections and allow selecting all values.


The default special value representing all values is the same as the default value if no selection is done.


Final report


 


Any kind of visuals can use the data returned by the function.


A measure is created to notify the user that a selection is needed.


Empty = if(countrows(FilterColorsCities)=0, "No selection, please select either cities or colors", "")


 


A button is added to the page that will apply all filters after a selection.




 


 


Summary


 


Using KQL functions in conjunction with dynamic M parameters allows more control over the order of operations in the query and, in some cases, can block runaway queries that would otherwise drain resources and affect other users.


 


 


 

Mastering AKS Troubleshooting #1: Resolving Connectivity and DNS Failures


This article is contributed. See the original author and article here.

Introduction


AKS, or Azure Kubernetes Service, is a fully managed Kubernetes container orchestration service that enables you to deploy, scale, and manage containerized applications easily. However, even with the most robust systems, issues can arise that require troubleshooting.


 


This blog post marks the beginning of a three-part series that originated from an intensive one-day bootcamp focused on advanced AKS networking triage and troubleshooting scenarios. It offers a practical approach to diagnosing and resolving common AKS networking issues, aiming to equip readers with quick troubleshooting skills for their AKS environment.


 


Each post walks through a set of scenarios that simulate typical issues. Detailed setup instructions will be provided to build a functional environment. Faults will then be introduced that cause the setup to malfunction. Hints will be provided on how to triage and troubleshoot these issues using common tools such as kubectl, nslookup, and tcpdump. Each scenario concludes with fixes for the issues faced and an explanation of the steps taken to resolve the problem.


 


Prerequisites


Before setting up AKS, ensure that you have an Azure account and subscription with permissions that allow you to create resource groups and deploy AKS clusters. PowerShell needs to be available, as PowerShell scripts will be used. Follow the instructions provided in this GitHub link to set up AKS and run the scenarios. It is also recommended that you read up on troubleshooting inbound and outbound networking scenarios that may arise in your AKS environment.


 


For inbound scenarios, troubleshooting connectivity issues pertains to applications hosted on the AKS cluster. The link describes issues related to firewall rules, network security groups, or load balancers, and provides guidance on verifying network connectivity, checking application logs, and examining network traffic to identify potential bottlenecks.


 


For outbound access, troubleshooting scenarios are related to traffic leaving the AKS cluster, such as connectivity issues to external resources like databases, APIs, or other services hosted outside of the AKS cluster.      


 


The figure below shows the AKS environment, which uses a custom VNet with its own NSG attached to the custom subnet. The AKS setup uses the custom subnet and will have its own NSG created and attached to the network interface of the node pool. Any changes to the AKS networking are automatically added to its NSG. However, to apply AKS NSG changes to the custom subnet NSG, they must be explicitly added.


 




 


Scenario 1: Connectivity resolution between pods or services in same cluster


Objective: The goal of this exercise is to troubleshoot and resolve connectivity between pods and services within the same Kubernetes cluster.


Layout: AKS cluster layout with 2 Pods created by their respective deployments and exposed using Cluster IP Service.




 


Step 1: Set up the environment



  1. Set up AKS as outlined in this script.

  2. Create namespace student and set context to this namespace


kubectl create ns student
kubectl config set-context --current --namespace=student

# Verify current namespace
kubectl config view --minify --output 'jsonpath={..namespace}'


  3. Clone the solutions GitHub link and change directory to Lab1, i.e., cd Lab1.


 


Step 2: Create two deployments and respective services



  1. Create a deployment nginx-1 with a simple nginx image:


kubectl create deployment nginx-1 --image=nginx


  2. Expose the deployment as a ClusterIP service:


kubectl expose deployment nginx-1 --name nginx-1-svc --port=80 --target-port=80 --type=ClusterIP


  3. Repeat the above steps to create the nginx-2 deployment and service:


kubectl create deployment nginx-2 --image=nginx
kubectl expose deployment nginx-2 --name nginx-2-svc --port=80 --target-port=80 --type=ClusterIP

Confirm the deployment and service are functional. Pods should be running and services listening on port 80.


kubectl get all

 


Step 3: Verify that you can access both services from within the cluster by using Cluster IP addresses


# Services returned: nginx-1-svc for pod/nginx-1, nginx-2-svc for pod/nginx-2
kubectl get svc

# Get the names of <nginx-1-pod> and <nginx-2-pod>
kubectl get pods

# below should present the HTML page from nginx-2
kubectl exec -it <nginx-1-pod> -- curl nginx-2-svc:80

# below should present the HTML page from nginx-1
kubectl exec -it <nginx-2-pod> -- curl nginx-1-svc:80

# check endpoints for the services
kubectl get ep

 


Step 4: Backup existing deployments



  1. Back up the nginx-2 deployment:


kubectl get deployment.apps/nginx-2 -o yaml > nginx-2-dep.yaml


  2. Back up the nginx-2-svc service:


kubectl get service/nginx-2-svc -o yaml > nginx-2-svc.yaml

 


Step 5: Simulate service down



  1. Delete nginx-2 deployment


kubectl delete -f nginx-2-dep.yaml


  2. Apply the broken.yaml deployment file found in the Lab1 folder


kubectl apply -f broken.yaml


  3. Confirm all pods are running


kubectl get all

 


Step 6: Troubleshoot the issue


Below is the inbound flow. Confirm every step from top down.




 



  1. Check the health of the nodes in the cluster to see if there is a node issue


kubectl get nodes


  2. Verify that you can no longer access nginx-2-svc from within the cluster


kubectl exec -it <pod-name> -- curl nginx-2-svc:80
# msg Failed to connect to nginx-2-svc port 80: Connection refused


  3. Verify that you can access nginx-1-svc from within the cluster


kubectl exec -it <pod-name> -- curl nginx-1-svc:80
# displays HTML page


  4. Verify that you can access nginx-2 locally. This confirms there is no issue with the nginx-2 application.


kubectl exec -it <nginx-2-pod> -- curl localhost:80
# displays HTML page


  5. Check the Endpoints using the command below and verify that the right Endpoints line up with their Services. There should be at least one Pod associated with each service, but none seem to exist for the nginx-2 service, while the nginx-1 service/pod association is fine.


 kubectl get ep

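The original post shows a screenshot of the endpoint list here; illustrative output (the addresses and ages are made up) would look roughly like this, with no endpoints behind nginx-2-svc:

NAME          ENDPOINTS         AGE
nginx-1-svc   10.244.0.12:80    12m
nginx-2-svc   <none>            12m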


 



  6. Check the label selector used by the Service experiencing the issue, using the command below:


kubectl describe service <service-name>

Ensure that it matches the label selector used by its corresponding Deployment, using the describe command:


kubectl describe deployment <deployment-name>

Use 'k get svc' and 'k get deployment' to get the service and deployment names.


Do you notice any discrepancies?


 



  7. Using the Service label selector from the previous step, check that the Pods selected by the Service match the Pods created by the Deployment, using the following command:


kubectl get pods --selector=<label-selector>

If no results are returned, then there must be a label selector mismatch.


Here, the selector used by the deployment returns pods, but the selector used by the corresponding service does not.




 



  8. Check the service and pod logs and ensure HTTP traffic is seen. Compare the nginx-1 pod and service logs with nginx-2. The latter does not show GET requests, suggesting no incoming traffic.


k logs pod/<nginx-2-pod> # no incoming traffic
k logs pod/<nginx-1-pod> # shows HTTP GET requests

k logs svc/nginx-2-svc
k logs svc/nginx-1-svc



 


Step 7: Restore connectivity



  1. Check the label selector the Service is associated with and get associated pods:


# Get the service's label selector
kubectl describe service nginx-2-svc

# When attempting to obtain pods using the service label, this results in "no resources found" or "no pods available"
kubectl describe pods -l app=nginx-2


  2. Update the deployment and apply the changes.


kubectl delete -f nginx-2-dep.yaml

In broken.yaml, update the labels 'app: nginx-02' to 'app: nginx-2', as shown below




kubectl apply -f broken.yaml # or apply nginx-2-dep.yaml

k describe pod <nginx-2-pod>
k get ep # nginx-2-svc should now have endpoints, unlike before


  3. Verify that you can now access the newly created service from within the cluster:


# Should return the HTML page from nginx-2-svc
kubectl exec -it <pod-name> -- curl nginx-2-svc:80

# Confirm the above from the logs
k logs pod/<nginx-2-pod>

 


Step 8: Using Custom Domain Names


Currently, Services in your namespace 'student' will resolve using <service-name>.<namespace>.svc.cluster.local.

The command below should return the web page.


k exec -it <pod-name> -- curl nginx-2-svc.student.svc.cluster.local

 



  1. Apply broken2.yaml in Lab1 folder and restart CoreDNS


kubectl apply -f broken2.yaml
kubectl delete pods -l=k8s-app=kube-dns -n kube-system

# Monitor to ensure pods are running
kubectl get pods -l=k8s-app=kube-dns -n kube-system


  2. Validate DNS resolution; it should now fail with 'curl: (6) Could not resolve host:'


k exec -it <pod-name> -- curl nginx-2-svc.student.svc.cluster.local
k exec -it <pod-name> -- curl nginx-2-svc


  3. Check the DNS configuration files in kube-system, which show the ConfigMaps, as below.


k get cm -n kube-system | grep dns


  4. Describe each of the ConfigMaps found above and look for inconsistencies


k describe cm coredns -n kube-system
k describe cm coredns-autoscaler -n kube-system
k describe cm coredns-custom -n kube-system


  5. Since the custom DNS file holds the breaking changes, either edit coredns-custom and remove the data section OR delete the ConfigMap 'coredns-custom'. Deleting the kube-dns pods should re-create the deleted ConfigMap 'coredns-custom'.


kubectl delete cm coredns-custom -n kube-system
kubectl delete pods -l=k8s-app=kube-dns -n kube-system

# Monitor to ensure pods are running
kubectl get pods -l=k8s-app=kube-dns -n kube-system


  6. Confirm DNS resolution now works as before.


kubectl exec -it <pod-name> -- curl nginx-2-svc.student.svc.cluster.local


# Challenge lab: Resolve using FQDN aks.com #


# Run below command to get successful DNS resolution
k exec -it <pod-name> -- curl nginx-2-svc.aks.com

# Solution #
k apply -f working2.yaml
kubectl delete pods -l=k8s-app=kube-dns -n kube-system

# Monitor to ensure pods are running
kubectl get pods -l=k8s-app=kube-dns -n kube-system

# Confirm working using below cmd
k exec -it <pod-name> -- curl nginx-2-svc.aks.com

# Bring back to default
k delete cm coredns-custom -n kube-system
kubectl delete pods -l=k8s-app=kube-dns -n kube-system

# Monitor to ensure pods are running
kubectl get pods -l=k8s-app=kube-dns -n kube-system

 


Step 9: What was in the broken files


In broken.yaml, the deployment's pod labels (app: nginx-02) did not match the service's selector, i.e., they should have been app: nginx-2.


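The original post shows a screenshot of the mismatch here. As a hedged YAML sketch of the relevant fragments (field values other than the labels are illustrative, not the actual file), the problem looks like this:

# Deployment pod template in broken.yaml (labels the pods actually receive)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-2
spec:
  selector:
    matchLabels:
      app: nginx-02
  template:
    metadata:
      labels:
        app: nginx-02        # <- should have been app: nginx-2
    spec:
      containers:
        - name: nginx
          image: nginx
---
# Service selector (unchanged), which finds no pods labelled app: nginx-2
apiVersion: v1
kind: Service
metadata:
  name: nginx-2-svc
spec:
  selector:
    app: nginx-2
  ports:
    - port: 80
      targetPort: 80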


 


In broken2.yaml, breaking changes were made that rewrote 'student.svc.cluster.local' to 'bad.cluster.local', which broke DNS resolution.


$kubectl_apply=@"
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  internal-custom.override: | # any name with the .override extension
    rewrite stop {
      name regex (.*).svc.cluster.local {1}.bad.cluster.local.
      answer name (.*).bad.cluster.local {1}.svc.cluster.local.
    }
"@
$kubectl_apply | kubectl apply -f -

 


Step 10: Cleanup


k delete deployment/nginx-1 deployment/nginx-2 service/nginx-1-svc service/nginx-2-svc
or just delete namespace >  k delete ns student

 


 


Scenario 2: DNS and External access failure resolution


Objective: The goal of this exercise is to troubleshoot and resolve Pod DNS lookups and DNS resolution failures.


Layout: Cluster layout as shown below has NSG applied to AKS subnet, with Network Policies in effect.




 


Step 1: Set up the environment



  1. Set up AKS as outlined in this script.

  2. Create and switch to the newly created namespace


kubectl create ns student
kubectl config set-context --current --namespace=student

# Verify current namespace
kubectl config view --minify --output 'jsonpath={..namespace}'


  3. Clone the solutions GitHub link and change directory to Lab2, i.e., cd Lab2.


 


Step 2: Verify DNS Resolution works within cluster



  1. Create pod for DNS validation within Pod


kubectl run dns-pod --image=nginx --port=80 --restart=Never
kubectl exec -it dns-pod -- bash

# Run these commands at the bash prompt
apt-get update -y
apt-get install dnsutils -y
exit


  2. Test and confirm DNS resolution resolves to the correct IP address.


kubectl exec -it dns-pod -- nslookup kubernetes.default.svc.cluster.local

 


Step 3: Break DNS resolution



  1. From Lab2 folder apply broken1.yaml


kubectl apply -f broken1.yaml


  2. Confirm that running the command below results in 'connection timed out; no servers could be reached'


kubectl exec -it dns-pod -- nslookup kubernetes.default.svc.cluster.local

 


Step 4: Troubleshoot DNS Resolution Failures



  1. Verify DNS resolution works within the AKS cluster


kubectl exec -it dns-pod -- nslookup kubernetes.default.svc.cluster.local
# If the response is 'connection timed out; no servers could be reached', then proceed below with troubleshooting


  2. Validate the DNS service, which should show port 53 in use


kubectl get svc kube-dns -n kube-system


  3. Check the logs for the pods associated with kube-dns


$coredns_pod=$(kubectl get pods -n kube-system -l k8s-app=kube-dns -o=jsonpath='{.items[0].metadata.name}')
kubectl logs -n kube-system $coredns_pod


  4. If a custom ConfigMap is present, verify that the configuration is correct.


kubectl describe cm coredns-custom -n kube-system


  5. Check for network policies currently in effect. If any are DNS-related, describe them and confirm they are not blocking traffic. If a network policy is a blocker, have it removed.


kubectl get networkpolicy -A
NAMESPACE     NAME              POD-SELECTOR
kube-system   block-dns-ingress  k8s-app=kube-dns

kubectl describe networkpolicy block-dns-ingress -n kube-system
# should show an Ingress rule that does not allow DNS traffic to UDP 53


  6. Remove the offending policy


kubectl delete networkpolicy block-dns-ingress -n kube-system


  7. Verify DNS resolution works within the AKS cluster. Below is another way to create a Pod that executes a task such as nslookup and is deleted on completion


kubectl run -it --rm --restart=Never test-dns --image=busybox --command -- nslookup kubernetes.default.svc.cluster.local
# If the DNS resolution is working correctly, you should see the correct IP address associated with the domain name


  8. Check whether the NSG has any DENY rules that might block port 80. If one exists, have it removed


# Below CLI steps can also be performed as a lookup on Azure portal under NSG

 


Step 5: Create external access via Loadbalancer



  1. Expose dns-pod with service type Load Balancer.


kubectl expose pod dns-pod --name=dns-svc --port=80 --target-port=80 --type LoadBalancer


  2. Confirm allocation of the External-IP.


kubectl get svc


  3. Confirm External-IP access works from within the cluster.


kubectl exec -it dns-pod -- curl <EXTERNAL-IP>


  4. Confirm from a browser that External-IP access fails from the internet to the cluster.


curl <EXTERNAL-IP>

 


Step 6: Troubleshoot broken external access via Loadbalancer



  1. Check if AKS NSG applied on the VM Scale Set has an Inbound HTTP Allow rule.

  2. Check if AKS Custom NSG applied on the Subnet has an ALLOW rule and if none then apply as below.


$custom_aks_nsg = "custom_aks_nsg" # <- verify
$nsg_list=az network nsg list --query "[?contains(name,'$custom_aks_nsg')].{Name:name, ResourceGroup:resourceGroup}" --output json

# Extract Custom AKS Subnet NSG name, NSG Resource Group
$nsg_name=$(echo $nsg_list | jq -r '.[].Name')

$resource_group=$(echo $nsg_list | jq -r '.[].ResourceGroup')
echo $nsg_list, $nsg_name, $resource_group

$EXTERNAL_IP="<EXTERNAL-IP>"
az network nsg rule create --name AllowHTTPInbound `
--resource-group $resource_group --nsg-name $nsg_name `
--destination-port-range 80 --destination-address-prefix $EXTERNAL_IP `
--source-address-prefixes Internet --protocol tcp `
--priority 100 --access allow


  3. After ~60s, confirm from a browser that External-IP access succeeds from the internet to the cluster.


curl <EXTERNAL-IP>

 


Step 7: What was in the broken files


Broken1.yaml is a Network Policy that blocks UDP ingress requests on port 53 to all Pods


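The original post shows the policy in a screenshot. As a hedged reconstruction (names and structure are assumptions based on the block-dns-ingress policy seen earlier), broken1.yaml plausibly looks something like this: it selects the kube-dns pods and allows ingress only on TCP 53, so DNS queries over UDP 53 are dropped.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-dns-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kube-dns
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 53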


 


Step 8: Cleanup


k delete pod/dns-pod 
or
k delete ns student

az network nsg rule delete --name AllowHTTPInbound `
--resource-group $resource_group --nsg-name $nsg_name

 


Conclusion


This post demonstrates common connectivity and DNS issues that can arise when working with AKS. The first scenario focuses on resolving connectivity problems between pods and services within the Kubernetes cluster. We encountered issues where the assigned labels of a deployment did not match the corresponding pod labels, resulting in non-functional endpoints. Additionally, we identified and rectified issues with CoreDNS configuration and custom domain names. The second scenario addresses troubleshooting DNS and external access failures. We explored how improperly configured network policies can negatively impact DNS traffic flow. In the next article, second of the three-part series, we will delve into troubleshooting scenarios related to endpoint connectivity across virtual networks and tackle port configuration issues involving services and their corresponding pods.


 


Disclaimer


The sample scripts are not supported by any Microsoft standard support program or service. The sample scripts are provided AS IS without a warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Account structure activation performance enhancement 


This article is contributed. See the original author and article here.

Introduction  

Account structures in Dynamics 365 Finance use a main account and financial dimensions to create a set of rules that determine the order and allowed values when entering account numbers in transactions. Once an account structure is defined, it must be activated. Historically, the account structure activation process has been time-consuming. It was also difficult to view the activation progress or to view any errors with the new configuration. If an account structure configuration change caused an error, a user could not find the root error message on the account structure page, but rather needed to dig through batch job logs to find the error message and understand the problem with the new account structure configuration.

Feature details  

In order to solve these problems, we have recently released an enhancement to the account structure activation process in application release 10.0.31. This performance enhancement lets you activate account structures more quickly by allowing multiple transaction updates to happen at the same time. An added benefit of this new feature enhancement is allowing the structure to be marked as active immediately after it is validated and before the remaining unposted transactions are updated to the new structure configuration. This allows transaction processing to continue while the existing unposted transactions are updated to the new structure.  

To view the status of the activation, select View activation status above the grid on the Account structures page. You can also view the activation status by selecting View on the Action Pane and then selecting Activation status on the drop-down menu. 

Enable the feature 

In order to use this new functionality, enable the feature “Account structure activation performance enhancement” from within feature management.  

Learn more

More information about this feature can be found at this location: Account structure activation performance enhancement – Finance | Dynamics 365 | Microsoft Learn 

The post Account structure activation performance enhancement  appeared first on Microsoft Dynamics 365 Blog.
