Agentless Container Posture Management in Multicloud


 


Container security is an integral part of Microsoft Defender for Cloud, a Cloud-Native Application Protection Platform (CNAPP), as it addresses the unique challenges presented by containerized environments, providing a holistic approach to securing applications and infrastructure in the cloud-native landscape. As organizations embrace multicloud, the silos between cloud environments can become barriers to a holistic approach to container security. Defender for Cloud continues to adapt, offering new capabilities that match the fluidity of multicloud architecture. Our latest additions for AWS and GCP traverse cloud silos and provide a comprehensive, unified view of container security posture. 


 


Container image scanning for AWS and GCP managed repositories  


Container vulnerability assessment scanning powered by Microsoft Defender Vulnerability Management (MDVM) is now extended to AWS and GCP, including Elastic Container Registry (ECR), Google Artifact Registry (GAR) and Google Container Registry (GCR). Using Defender Cloud Security Posture Management and Defender for Containers, organizations are now able to view vulnerabilities detected on their AWS and GCP container images at both registry and runtime, all within a single pane of glass. 


 


With this in-house scanner, we provide the following key benefits for container image scanning:


 



  • Agentless vulnerability assessment for containers: MDVM scans container images in your Azure Container Registry (ACR), Elastic Container Registry (ECR) and Google Artifact Registry (GAR) without the need to deploy an agent. After enabling this capability, you authorize Defender for Cloud to scan your container images.  

  • Zero configuration for onboarding: Once enabled, all images stored in ACR, ECR and GAR are automatically scanned for vulnerabilities without extra configuration or user input. 

  • Near real-time scan of new images: Defender for Cloud backend receives a notification when a new image is pushed to the registry; they are added to the queue to be scanned immediately.  

  • Daily refresh of vulnerability reports: Vulnerability reports are refreshed every 24hrs for images previously scanned that were pulled in the last 30 days (Azure only), pushed to the registry in the last 90 days or currently running on the Azure Kubernetes Service (AKS) cluster, Elastic Kubernetes Service (EKS) cluster or Google Kubernetes Engine (GKE).  

  • Coverage for both ship and runtime: Container image scanning powered by MDVM shows vulnerability reports for both images stored in the registry and images running on the cluster.  

  • Support for OS and language packages: MDVM scans both packages installed by the OS package manager in Linux and language specific packages and files, and their dependencies. 

  • Real-world exploitability insights: based on CISA KEV, Exploit DB and more.  

  • Support for ACR private links: MDVM scans images in container registries that are accessible via Azure Private Link if “allow access by trusted services” is enabled. 


 


The use of a single, in-house scanner provides a unified experience across all three clouds for detecting and identifying vulnerabilities on your container images. By enabling “Agentless Container Vulnerability Assessment” in Defender for Containers or Defender CSPM, your container registries in AWS and GCP are automatically identified and scanned, at no additional cost and without deploying additional resources in either cloud environment. This SaaS solution for container image scanning streamlines vulnerability discovery in your multicloud environment and ensures quick integration into your multicloud infrastructure without causing operational friction.  
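For reference, the plan itself can also be enabled from PowerShell. The following is a minimal sketch using the Az.Security module; "Containers" is the Defender for Containers plan name, and the individual agentless toggles within the plan are still configured in the portal settings, so treat this only as a starting point.

#Enable the Defender for Containers plan on the current subscription and verify the tier.
Connect-AzAccount
Set-AzSecurityPricing -Name "Containers" -PricingTier "Standard"
Get-AzSecurityPricing -Name "Containers" | Select-Object Name, PricingTier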


 


Through both Defender CSPM and Defender for Containers, results from container image scanning powered by MDVM are added to the security graph for enhanced risk hunting. Through Defender CSPM, they are also used to calculate attack paths that identify possible lateral movements an attacker could take to exploit your containerized environment.  


 


Discover vulnerable images in Elastic Container Registries


 


Discover vulnerable images in Google Artifact Registry and Google Container Registry


 


 


Unified Vulnerability Assessment solution across workloads and clouds 


Microsoft Defender Vulnerability Management (MDVM) is now the unified vulnerability scanner for container security across Azure, AWS and GCP. In Defender for Cloud, unified Vulnerability Assessment powered by Defender Vulnerability Management, we shared more insights about the decision to use MDVM, with the goal being to enable organizations to have a single, consistent vulnerability assessment solution across all cloud environments. 


 


Vulnerability assessment scanning powered by Microsoft Defender Vulnerability Management for Azure Container Registry images is already generally available. Support for AWS and GCP is now public preview and provides a consistent experience across all three clouds.  


With the general availability of container vulnerability assessment scanning powered by Microsoft Defender Vulnerability Management, we also announced the retirement of Qualys container image scanning in Defender for Cloud, set for March 1, 2024. 


 


To prepare for the retirement of Qualys container image scanning, consider the following resources: 



 


Agentless Inventory Capabilities & Risk-Hunting with Cloud Security Explorer 


Agentless discovery for Kubernetes leaves zero footprint, performing API-based discovery of your Google Kubernetes Engine (GKE) and Elastic Kubernetes Service (EKS) clusters, their configurations, and deployments. It is a less intrusive approach to Kubernetes discovery, minimizing impact on the cluster by avoiding agent installation and extra resource consumption. 


 


Through the agentless discovery of Kubernetes and integration with the Cloud Security Explorer, organizations can explore the Kubernetes data plane, services, images, configurations of their container environments and more to easily monitor and manage their assets. 


Discover your multicloud Kubernetes clusters in a single view.


 


 




 


 


View Kubernetes data plane inventory


 


 


Using the Cloud Security Explorer, organizations can also hunt for risks to their Kubernetes environments using Kubernetes-specific security insights such as pod- and node-level internet exposure, running vulnerable images, and privileged containers. 


 




 


 


Hunt for risks such as privileged containers


 


 


 


Defender Cloud Security Posture Management now complete with multicloud Kubernetes Attack Paths 


 


Multicloud organizations using Defender CSPM can now leverage Attack path analysis to visualize risks and threats to their Kubernetes environments, giving them a complete view of potential threats across all three cloud environments. Attack path analysis uses environment context, including insights from agentless discovery of Kubernetes and agentless container vulnerability scanning, to expose exploitable paths that attackers may use to breach your environment. Reported attack paths help you prioritize the posture issues that matter most in your environment and get ahead of threats to your Kubernetes environment.  


 




 


 



 


Next Steps 



 


Reviewers:


Maya Herskovic, Senior PM Manager, Defender for Cloud


Tomer Spivak, Senior Product Manager, Defender for Cloud


Mona Thaker, Senior Product Marketing Manager, Defender for Cloud

Collect Event IDs using Azure Monitor Agent


Introduction


Hello everyone, I am Bindusar (CSA) working with Intune. I have received multiple requests from customers asking to collect specific event IDs from internet-based client machines that are either Microsoft Entra joined or hybrid joined, and upload them to a Log Analytics workspace for further use cases. There are several options available, such as:



  • Running a local script on client machines and collecting logs. Then using “Send-OMSAPIIngestionFile” to upload required information to Log Analytics Workspace.


The biggest challenge with this API is to allow client machines to authenticate directly in Log Analytics Workspace. If needed, Brad Watts already published a techcommunity blog here.


Extending OMS with SCCM Information – Microsoft Community Hub



  • Using the Log Analytics agent. However, it is designed to collect event logs from Azure Virtual Machines.


Collect Windows event log data sources with Log Analytics agent in Azure Monitor – Azure Monitor | Microsoft Learn



  • Using the Monitoring Agent to collect certain types of events (Warning, Error, Information, etc.) and upload them to a Log Analytics workspace. However, the monitoring agent was difficult to customize to collect only specific event IDs. Also, it will be deprecated soon.


Log Analytics agent overview – Azure Monitor | Microsoft Learn


 


In this blog, I am trying to extend this solution to the Azure Monitor Agent instead. Let’s take a scenario where I am trying to collect Security event ID 4624 and upload it to the Event table of a Log Analytics workspace.


 


Event ID 4624 is generated when a logon session is created, that is, on a successful logon. It is one of the most important security events to monitor, as it records which account logged on, from where, and with which logon type. Monitoring event ID 4624 can help you detect and respond to potential security incidents, such as unauthorized access, brute-force attacks, or lateral movement.


 


In following steps, we will collect event ID 4624 from Windows client machines using Azure Monitor Agent and store this information in Log Analytics workspace. Azure Monitor Agent is a service that collects data from various sources and sends it to Azure Monitor, where you can analyse and visualize it. Log Analytics workspace is a container that stores data collected by Azure Monitor Agent and other sources. You can use Log Analytics workspace to query, alert, and report on the data.


 


Prerequisites


Before you start, you will need the following:



  • A Windows client machine that you want to monitor. The machine should be hybrid joined or Microsoft Entra joined.

  • An Azure subscription.

  • An Azure Log Analytics workspace.

  • An Azure Monitor Agent.


 


Steps


To collect event ID 4624 using Azure Monitor Agent, follow these steps:


If you already have a Log Analytics workspace where you want to collect the events, you can move to step #2 where we create a DCR. A table named “Event” (not a custom table) will be used to collect all the specified events.


1.    Steps to create Log Analytics Workspace


1.1            Log in to the Azure portal and search for Log Analytics workspace




 


1.2            Select and Create after providing all required information.
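If you prefer scripting this step, here is a minimal sketch using the Az.OperationalInsights module; the resource group name matches the one used later in this post, and the workspace name is a placeholder you should adjust.

#Create the resource group (if needed) and the Log Analytics workspace.
Connect-AzAccount
New-AzResourceGroup -Name "Custom_Inventory" -Location "eastus" -Force
New-AzOperationalInsightsWorkspace -ResourceGroupName "Custom_Inventory" `
    -Name "EventCollectionLAW" -Location "eastus" -Sku "PerGB2018"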




 


2.    Creating a Data Collection Rule (DCR)


Detailed information about data collection rules can be found at the following link. However, to keep this blog focused, we will extract only the information required to achieve our goal.


Data collection rules in Azure Monitor – Azure Monitor | Microsoft Learn


2.1            Permissions


“Monitoring Contributor” on the Subscription, Resource Group and DCR is required.


Reference: Create and edit data collection rules (DCRs) in Azure Monitor – Azure Monitor | Microsoft Learn


2.2            Steps to create DCR.


For PowerShell lovers, the following reference covers the same steps, and a scripted sketch follows below.


Create and edit data collection rules (DCRs) in Azure Monitor – Azure Monitor | Microsoft Learn
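For completeness, here is a hedged sketch of creating an equivalent DCR through the ARM REST API with Invoke-AzRestMethod instead of the portal steps that follow. The subscription and workspace values are placeholders; the DCR name matches the one used later in this post.

#Sketch: create a DCR that collects Security event 4624 into the Event table.
$sub = ""        #Your subscription ID
$rg  = "Custom_Inventory"
$dcrName = "Test1_Agent"
$workspaceId = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.OperationalInsights/workspaces/EventCollectionLAW"

$dcrBody = @"
{
  "location": "eastus",
  "kind": "Windows",
  "properties": {
    "dataSources": {
      "windowsEventLogs": [
        {
          "name": "Security4624",
          "streams": [ "Microsoft-Event" ],
          "xPathQueries": [ "Security!*[System[EventID=4624]]" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        { "name": "laDestination", "workspaceResourceId": "$workspaceId" }
      ]
    },
    "dataFlows": [
      { "streams": [ "Microsoft-Event" ], "destinations": [ "laDestination" ] }
    ]
  }
}
"@

Invoke-AzRestMethod -Method PUT -Payload $dcrBody `
    -Path "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Insights/dataCollectionRules/$dcrName`?api-version=2021-09-01-preview"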



  • Login to Azure portal and navigate to Monitor.




 



  • Locate Data Collection Rules on the left blade.




 



  • Create a new Data Collection Rule and provide the required details. Here we are demonstrating the Windows platform type.




 



  • The Resources tab offers the Azure Monitor Agent client installer, which we need to install on client machines. Select the “Download the client installer” link and save the installer for future steps.




 



  • Under Collect and deliver, “collect” defines what needs to be collected and “deliver” defines where the collected data will be saved. Click Add data source and select Windows Event Logs for this scenario.




 




 



  • In this scenario, we are planning to collect event ID 4624 from the Security log. By default, under Basic, there is no such option, so we will use Custom.




 


Custom uses the XPath format. XPath entries are written in the form LogName!XPathQuery. For example, in our case, we want to return only events from the Security event log with an event ID of 4624. The XPathQuery for these events would be *[System[EventID=4624]]. Because you want to retrieve the events from the Security event log, the XPath is Security!*[System[EventID=4624]]. To get more information about how to consume event logs, please refer to the following doc.


Consuming Events (Windows Event Log) – Win32 apps | Microsoft Learn
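Before saving the XPath in the DCR, you can sanity-check it locally with Get-WinEvent, which accepts the same query without the log-name prefix. A quick sketch:

#Test the XPath query against the local Security log (run elevated).
#Get-WinEvent takes the query without the "Security!" prefix.
Get-WinEvent -LogName "Security" -FilterXPath "*[System[EventID=4624]]" -MaxEvents 5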


 




 



  • Next, select the destination where logs will be stored. Here we are selecting the Log Analytics workspace which we created in step 1.2.




 



  • Once done, Review and Create the rule.


2.3            Creating the Monitored Object and associating it with the DCR.


You need to create a ‘Monitored Object’ (MO) that creates a representation for the Microsoft Entra tenant within Azure Resource Manager (ARM). This ARM entity is what Data Collection Rules are then associated with. This Monitored Object needs to be created only once for any number of machines in a single Microsoft Entra tenant. Currently this association is only limited to the Microsoft Entra tenant scope, which means configuration applied to the Microsoft Entra tenant will be applied to all devices that are part of the tenant and running the agent installed via the client installer.




 


Here, we are using a PowerShell script to create the Monitored Object and map it to the DCR.


Reference: Set up the Azure Monitor agent on Windows client devices – Azure Monitor | Microsoft Learn


Keep the following things in mind:



  1. Data Collection Rules can only target the Microsoft Entra tenant scope. That is, all DCRs associated to the tenant (via the Monitored Object) will apply to all Windows client machines within that tenant that have the agent installed using this client installer. Granular targeting using DCRs is not supported for Windows client devices yet.

  2. The agent installed using the Windows client installer is designed for Windows desktops or workstations that are always connected. While the agent can be installed via this method on client machines, it is not optimized for battery consumption and network limitations.

  3. This action should be performed by a tenant admin as a one-time activity. The steps below give the Microsoft Entra admin ‘owner’ permissions at the root scope.


 


 


#Make sure execution policy is allowing to run the script.
Set-ExecutionPolicy unrestricted

#Define the following information
$TenantID = ""  #Your Tenant ID
$SubscriptionID = "" #Your Subscription ID where Log analytics workspace was created.
$ResourceGroup = "Custom_Inventory" #Your resource group name where Log Analytics workspace was created.


$Location = "eastus" #Use your own location. The "location" property value under the "body" section should be the Azure region where the Monitored Object will be stored. It should be the same region where you created the Data Collection Rule; this is the region from which agent communications happen.


$associationName = "EventTOTest1_Agent" #Your custom association name; it must be unique if you want to associate multiple DCRs with the monitored object.

$DCRName = "Test1_Agent" #Your Data collection rule name.


#Just to ensure that we have all modules required.

If ($null -eq (Get-Module -ListAvailable -Name Az.Accounts))
{
    Install-Module Az
    Install-Module Az.Resources
}
Import-Module Az.Accounts

#Connecting to Azure Tenant using Global Admin ID
Connect-AzAccount -Tenant $TenantID

#Select the subscription
Select-AzSubscription -SubscriptionId $SubscriptionID

#Grant Access to User at root scope "/"
$user = Get-AzADUser -UserPrincipalName (Get-AzContext).Account

New-AzRoleAssignment -Scope '/' -RoleDefinitionName 'Owner' -ObjectId $user.Id

#Create Auth Token
$auth = Get-AzAccessToken

$AuthenticationHeader = @{
    "Content-Type" = "application/json"
    "Authorization" = "Bearer " + $auth.Token
    }



#1. Assign ‘Monitored Object Contributor’ Role to the operator.
$newguid = (New-Guid).Guid
$UserObjectID = $user.Id

$body = @"
{
            "properties": {
                "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
                "principalId": `"$UserObjectID`"
        }
}
"@

$requestURL = "https://management.azure.com/providers/microsoft.insights/providers/microsoft.authorization/roleassignments/$newguid`?api-version=2020-10-01-preview"


Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body


##

#2. Create Monitored Object


$requestURL = "https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/$TenantID`?api-version=2021-09-01-preview"
$body = @"
{
    "properties":{
        "location":`"$Location`"
    }
}
"@

$Respond = Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body -Verbose
$RespondID = $Respond.id

##

#3. Associate DCR to Monitored Object
#See reference documentation https://learn.microsoft.com/en-us/rest/api/monitor/data-collection-rule-associations/create?tabs=HTTP

$requestURL = "https://management.azure.com$RespondId/providers/microsoft.insights/datacollectionruleassociations/$associationName`?api-version=2021-09-01-preview"
$body = @"
        {
            "properties": {
                "dataCollectionRuleId": "/subscriptions/$SubscriptionID/resourceGroups/$ResourceGroup/providers/Microsoft.Insights/dataCollectionRules/$DCRName"
            }
        }

"@

Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body


#In case you want to associate more than one DCR, repeat step 3 with a unique association name.


#Following step is to query the created objects.
#4. (Optional) Get all the associations.
$requestURL = "https://management.azure.com$RespondId/providers/microsoft.insights/datacollectionruleassociations?api-version=2021-09-01-preview"
(Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method get).value 

 


 


 


3. Client-side activity


3.1            Prerequisites:


Reference: Set up the Azure Monitor agent on Windows client devices – Azure Monitor | Microsoft Learn



  1. The machine must be running Windows client OS version 10 RS4 or higher.

  2. To download the installer, the machine should have the Microsoft C++ Redistributable version 2015 or higher.

  3. The machine must be joined to a Microsoft Entra tenant (Entra joined or hybrid joined machines), which enables the agent to fetch Microsoft Entra device tokens used to authenticate and fetch data collection rules from Azure.

  4. The device must have access to the following HTTPS endpoints (a connectivity sketch follows this list):

    • global.handler.control.monitor.azure.com

    • <region>.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)

    • <log-analytics-workspace-id>.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com) (If using private links on the agent, you must also add the data collection endpoints)
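A quick way to sanity-check these endpoints from a client machine is Test-NetConnection; in this sketch, replace the placeholder region and workspace ID with your own values.

#Check outbound HTTPS reachability to the Azure Monitor endpoints.
$endpoints = @(
    "global.handler.control.monitor.azure.com",
    "<region>.handler.control.monitor.azure.com",
    "<workspace-id>.ods.opinsights.azure.com"
)
foreach ($ep in $endpoints) {
    Test-NetConnection -ComputerName $ep -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}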




3.2            Installing the Azure Monitoring Agent Manually



  • Use the Windows MSI installer for the agent which we downloaded in section 2.2 while creating the DCR.

  • Navigate to the downloaded file and run it as administrator. Follow the steps, such as configuring a proxy, as per your needs and finish the setup.

  • The following screenshots walk through a manual installation on selected client machines for testing; a silent-install sketch follows.
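For unattended test installs, the MSI can also be run silently. This is a hedged sketch; the installer file name below is assumed from the download in section 2.2, so adjust it to match your actual file.

#Silent install of the Azure Monitor Agent client installer (run from an elevated prompt).
msiexec /i AzureMonitorAgentClientSetup.msi /qn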




 


This needs Admin permissions on local machine.




 



  • Verify successful installation:

    • Open Services and confirm ‘Azure Monitor Agent’ is listed and shows as Running.




 




 



  • Open Control Panel -> Programs and Features OR Settings -> Apps -> Apps & Features and ensure you see ‘Azure Monitor Agent’ listed.


 




 


 


 


3.3            Installation of Azure Monitor Agent using Intune.



  • Login to Intune Portal and navigate to Apps.




 



  • Click on +Add to create a new app. Select Line-of-business app.




 



  • Locate the Agent file which was downloaded in section 2.2 during DCR creation.


 




 



  • Provide the required details like scope tags and groups to deploy.




 



  • Assign and Create.

  • Ensure that machines already have the Microsoft C++ Redistributable version 2015 or higher installed. If not, please create another package as a dependency of this application. If you do not, the Azure Monitor Agent will be stuck in the Install Pending state.


 


 


4.    Verification of configuration.


It’s time to validate the configuration and the data collected.


4.1            Ensure that the Monitored Object is mapped to the data collection rule.


To do this, navigate to Azure Portal > Monitor > Data collection rule > Resources. A new custom monitored object should be created.




 


4.2            Ensure that Azure Monitor Agents are Connected.


To do this, navigate to Azure Portal > Log Analytics Workspaces > the workspace created at the beginning > Agents, and check Windows computers connected via the Azure Monitor Windows agent.




 


4.3            Ensure that the client machines can send required data.


To check this, navigate to Azure Portal > Log Analytics workspaces > the workspace created at the beginning > Tables. The Event table must be present.




 


4.4            Ensure that required data is captured.


To access the event logs captured, navigate to Azure Portal > Log Analytics workspaces > Your workspace which was created at the beginning > Logs and run KQL query.


Event


| where EventID == 4624


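You can also run the same query from PowerShell with the Az.OperationalInsights module, which is handy for quick checks or automation. A minimal sketch, assuming you substitute your workspace ID (the GUID shown on the workspace Overview blade):

#Query the Event table for logon events (4624) and show a few columns.
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" `
    -Query "Event | where EventID == 4624 | take 10"
$results.Results | Format-Table TimeGenerated, Computer, RenderedDescription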


 


Conclusion


Collecting event IDs, like event ID 4624, from Windows clients is a useful way to track user logon activity and identify suspicious or unauthorized actions. By using the Azure Monitor Agent and a Log Analytics workspace, you can easily configure, collect, store, and analyse this data in a scalable way. You can also leverage the powerful features of the Log Analytics query language (KQL) and portal to create custom queries, filters, charts, and dashboards to visualize and monitor the logon events. You can further use this data in Power BI reports as well.


We would like to thank you for reading this article and hope you found it useful and informative.


If you want to learn more about Azure Monitor and Log Analytics, you can visit our official documentation page and follow our blog for the latest updates and news.

Troubleshooting Azure Stack HCI 23H2 Preview Deployments


Troubleshooting Azure Stack HCI 23H2 Preview Deployments


With Azure Stack HCI release 23H2 preview, there are significant changes to how clusters are deployed, enabling low touch deployments in edge sites. Running these deployments in customer sites or lab environments may require some troubleshooting as kinks in the process are ironed out. This post aims to give guidance on this troubleshooting.


 


The following is written using a rapidly changing preview release, based on field and lab experience. We’re focused on how to start troubleshooting, rather than digging into specific issues you may encounter.


Understanding the deployment process


Deployment is completed in two steps: first, the target environment and configuration are validated, then the validated configuration is applied to the cluster nodes by a deployment. While ideally any issues with the configuration will be caught in validation, this is not always the case. Consequently, you may find yourself working through issues in validation only to have more issues to troubleshoot during deployment. We’ll start with tips on working through validation issues and then move to deployment issues.


When the validation step completes, a ‘deploymentSettings’ sub-resource is created on your HCI cluster Azure resource.


Logs Everywhere!


When you run into errors in validation or deployment the error passed through to the Portal may not have enough information or context to understand exactly what is going on. To get to the details, we frequently need to dig into the log files on the HCI nodes. The validation and deployment processes pull in components used in Azure Stack Hub, resulting in log files in various locations, but most logs are on the seed node (the first node sorted by name).


Viewing Logs on Nodes


When connected to your HCI nodes with Remote Desktop, Notepad is available for opening log files and checking contents. Another useful trick is to use the PowerShell Get-Content command with the -Wait parameter to follow a log and the -Last parameter to show only recent lines. This is especially helpful to watch the CloudDeployment log progress. For example:


Get-Content C:\CloudDeployment\Logs\CloudDeployment.2024-01-20.14-29-13.0.log -Wait -Last 150

Log File Locations


The table below describes important log locations and when to look in each:



  • C:\CloudDeployment\Logs\CloudDeployment* (output of the deployment operation): this is the primary log to monitor and troubleshoot deployment activity. Look here when a deployment fails or stalls.

  • C:\CloudDeployment\Logs\EnvironmentValidatorFull* (output of the validation run): use when your configuration fails a validation step.

  • C:\ECEStore\LCMECELiteLogs\InitializeDeploymentService* (logs related to the Life Cycle Manager (LCM) initial configuration): use when you can’t start validation; the LCM service may not have been fully configured.

  • C:\ECEStore\MASLogs (PowerShell script transcripts for ECE activity): shows more detail on scripts executed by ECE; this is a good place to look if CloudDeployment shows an error but not enough detail.

  • C:\CloudDeployment\Logs\cluster* and C:\Windows\Temp\StorageClusterValidationReport* (cluster validation report): cluster validation runs when the cluster is created; when validation fails, these logs tell you why.



 


Retrying Validations and Deployments


Retrying Validation


In the Portal, you can usually retry validation with the “Try Again…” button. If you are using an ARM template, you can redeploy the template.


In the Validation stage, your node is running a series of scripts and checks to ensure it is ready for deployment. Most of these scripts are part of the modules found here:

C:\Program Files\WindowsPowerShell\Modules\AzStackHci.EnvironmentChecker


 


Sometimes it can be insightful to run the modules individually, with verbose or debug output enabled. For example:
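A hedged sketch of what that can look like; cmdlet names vary by module version, so list them first and treat the connectivity validator below as one example.

#Discover and run individual environment validators with verbose output.
Import-Module AzStackHci.EnvironmentChecker
Get-Command -Module AzStackHci.EnvironmentChecker
Invoke-AzStackHciConnectivityValidation -PassThru -Verbose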


Retrying Deployment


The ‘deploymentSettings’ resource under your cluster contains the configuration to deploy and is used to track the status of your deployment. Sometimes it can be helpful to view this resource; an easy way to do this is to navigate to your Azure Stack HCI cluster in the Portal and append ‘deploymentsettings/default’ after your cluster name in the browser address bar.


 




Image 1 – the deploymentSettings Resource in the Portal
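If you prefer PowerShell to the address-bar trick, a hedged sketch with Invoke-AzRestMethod follows; the resource IDs are placeholders, and the API version is the 23H2-era preview version, which may change.

#Fetch the deploymentSettings sub-resource of the HCI cluster.
$clusterId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.AzureStackHCI/clusters/<cluster-name>"
Invoke-AzRestMethod -Method GET `
    -Path "$clusterId/deploymentSettings/default?api-version=2023-08-01-preview" |
    Select-Object -ExpandProperty Content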


From the Portal


In the Portal, if your Deployment stage fails part-way through, you can usually restart the deployment by clicking the ‘Rerun deployment’ button under Deployments on the cluster resource.


 



Image 2 – access the deployment in the Portal so you can retry


Alternatively, you can navigate to the cluster resource group deployments. Find the deployment matching the name of your cluster and initiate a redeploy using the Redeploy option.


 



Image 3 – the ‘Redeploy’ button on the deployment view in the Portal


If Azure/the Portal show your deployment as still in progress, you won’t be able to start it again until you cancel it or it fails.


From an ARM Template


To retry a deployment when you used the ARM template approach, just resubmit the deployment. With the ARM template approach, you submit the same template twice: once with deploymentMode: “Validate” and again with deploymentMode: “Deploy”. To retry validation, use “Validate”; to retry deployment, use “Deploy”.




Image 4 – ARM template showing deploymentMode setting


Locally on the Seed Node


In most cases, you’ll want to initiate deployment, validation, and retries from Azure. This ensures that your deploymentSettings resource is at the same stage as the local deployment.


 


However, in some instances, the deployment status as Azure understands it becomes out of sync with what is going on at the node level, leaving you unable to retry a stuck deployment. For example, Azure has your deploymentSettings status as “Provisioning” but the logs in CloudDeployment show the activity has stopped and/or the ‘LCMAzureStackDeploy’ scheduled task on the seed node is stopped. In this case, you may be able to rerun the deployment by restarting the ‘LCMAzureStackDeploy’ scheduled task on the seed node:

Start-ScheduledTask -TaskName LCMAzureStackDeploy

If this does not work, you may need to delete the deploymentSettings resource and start again. See: The big hammer: full reset.


Advanced Troubleshooting


Invoking Deployment from PowerShell


Although deployment activity has lots of logging, sometimes either you can’t find the right log file or seem to be missing what is causing the failure. In this case, it is sometimes helpful to retry the deployment directly in PowerShell, executing the script which is normally called by the Scheduled Task mentioned above. For example:

C:\CloudDeployment\Setup\Invoke-CloudDeployment.ps1 -Rerun


Local Group Membership


In a few cases, we’ve found that the local Administrators group membership on the cluster nodes does not get populated with the necessary domain and virtual service account users. The issues this has caused have been difficult to track down through logs, and likely has a root cause which will soon be addressed.

Check group membership with: Get-LocalGroupMember Administrators


Add group membership with: Add-LocalGroupMember Administrators -Member <account>[, <account>…]

Here’s what we expect on a fully deployed cluster:


  • Domain users: DOMAIN\<deployment account>. This is the domain account created during AD Prep and specified during deployment.

  • Local users: AzBuiltInAdmin (renamed from Administrator), ECEAgentService, HCIOrchestrator. These accounts don’t exist initially but are created at various stages during deployment. Try adding them; if they are not provisioned yet, you’ll get a message that they don’t exist.

  • Virtual service accounts:

    S-1-5-80-1219988713-3914384637-3737594822-3995804564-465921127
    S-1-5-80-949177806-3234840615-1909846931-1246049756-1561060998
    S-1-5-80-2317009167-4205082801-2802610810-1010696306-420449937
    S-1-5-80-3388941609-3075472797-4147901968-645516609-2569184705
    S-1-5-80-463755303-3006593990-2503049856-378038131-1830149429
    S-1-5-80-649204155-2641226149-2469442942-1383527670-4182027938
    S-1-5-80-1010727596-2478584333-3586378539-2366980476-4222230103
    S-1-5-80-3588018000-3537420344-1342950521-2910154123-3958137386

    These are the SIDs of the various virtual service accounts used to run services related to deployment and continued lifecycle management. The SIDs seem to be hard coded, so they can be added at any time. When these accounts are missing, there are issues as early as the JEA deployment step.



 


ECEStore


The files in the ECEStore directory show state and status information of the ECE service, which handles some lifecycle and configuration management. The JSON files in this directory may be helpful to troubleshoot stuck states, but most events also seem to be reported in standard logs. The MASLogs directory in the ECEStore directory shows PowerShell transcripts, which can be helpful as well.


NuGet Packages


During initialization, several NuGet packages are downloaded and extracted on the seed node. We’ve seen issues where these packages are incomplete or corrupted, usually noted in the MASLogs directory. In this case, the “The big hammer: full reset” option seems to be required.


The Big Hammer: Full Reset


If you’ve pulled the last of your hair out, the following steps usually perform a full reset of the environment while avoiding the need to reinstall the OS and reconfigure networking, etc. (the biggest hammer). This is not usually necessary, and you don’t want to go through this only to run into the same problem, so spend some time with the other troubleshooting options first.



  1. Uninstall the Arc agents on all nodes with the Remove-AzStackHciArcInitialization command

  2. Delete the deploymentSettings resource in Azure

  3. Delete the cluster resource in Azure

  4. Reboot the seed node

  5. Delete the following directories on the seed node:




    1. C:\CloudContent

    2. C:\CloudDeployment

    3. C:\Deployment

    4. C:\DeploymentPackage

    5. C:\EceStore

    6. C:\NugetStore




  6. Remove the LCMAzureStackStampInformation registry key on the seed node (preview with -WhatIf first, then run again without it to actually delete):
    Get-Item -Path HKLM:\SOFTWARE\Microsoft\LCMAzureStackStampInformation | Remove-Item -WhatIf

  7. Reinitialize Arc on each node with Invoke-AzStackHciArcInitialization and retry the complete deployment (a consolidated cleanup sketch follows)
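For convenience, the seed-node cleanup above can be scripted. A hedged sketch; run it elevated, only after the Azure resources are deleted, and preview with -WhatIf before removing it.

#Consolidated seed-node cleanup for a full reset (preview first with -WhatIf).
$dirs = "C:\CloudContent", "C:\CloudDeployment", "C:\Deployment",
        "C:\DeploymentPackage", "C:\EceStore", "C:\NugetStore"
Remove-Item $dirs -Recurse -Force -WhatIf
Remove-Item "HKLM:\SOFTWARE\Microsoft\LCMAzureStackStampInformation" -Recurse -WhatIf
Restart-Computer   #reboot the seed node before reinitializing Arc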


Conclusion


Hopefully this guide has helped you troubleshoot issues with your deployment. Please feel free to comment with additional suggestions or questions and we’ll try to get those incorporated in this post.


 


If you’re still having issues, a Support Case is your next step!

Logic Apps Mission Critical Series: “We Speak: IBM i: COBOL and RPG Applications”


In this session, we continue the “We Speak” Mission Critical Series with an episode on how Azure Logic Apps can unlock scenarios where it is required to integrate with IBM i (iSeries, formerly AS/400) applications.


 


The IBM i In-App Connector


 


The IBM i In-App connector enables connections from Logic App workflows to IBM i applications running on IBM Power Systems. 


 




 


 


Background:


 


More than 50 years ago, IBM released the first midrange systems. IBM advertised them as “Small in size, small in price and Big in performance. It is a system for now and for the future”. Over the years, the midranges evolved and became pervasive in medium-size businesses and in large enterprises to extend Mainframe environments. Midranges running IBM i (typically Power Systems) support TCP/IP and SNA. Host Integration Server supports connecting with midranges using both.


 


IBM i includes the Distributed Program Calls (DPC) server feature that allows most IBM System i applications to interact with clients such as Azure Logic Apps in request-reply fashion (client-initiated only) with minimum modifications. DPC is a documented protocol that supports program to program integration on an IBM System i, which can be accessed easily from client applications using the TCP/IP networking protocol.


 


IBM i applications were typically built using the Report Program Generator (RPG) or COBOL languages. The Azure Logic Apps connector for IBM i supports integrating with both types of programs. The following is a simple RPG program called CDRBANKRPG.


 


[Screenshot: the CDRBANKRPG RPG program]


 


 


As with many of our other IBM Mainframe connectors, it is required to prepare an artifact with the metadata of the IBM i programs to call, using the HIS Designer for Logic Apps tool. The HIS Designer will help you create a Host Integration Designer XML (HIDX) file for use with the IBM i connector. The following is a view of the HIDX file generated for the program above.


 


[Screenshot: the HIDX file generated for CDRBANKRPG]


 


 


For instructions on how to create these metadata artifacts, you can watch this video:


 


 


Once you have the HIDX file ready for deployment, you will need to upload it to the Maps artifacts of your Azure Logic App, then create a workflow and add the IBM i connector.


To set up the IBM i connector, you will need input from the midrange specialist: at a minimum, the midrange IP address and port.


 


[Screenshot: IBM i connector connection settings]


 


In the Parameters section, enter the name of the HIDX file. If the HIDX was uploaded to Maps, then it should appear dynamically:


 


[Screenshot: selecting the HIDX file in the Parameters section]


 


 


And then select the method name:


 


[Screenshot: selecting the method name]


 


 


The following video includes a complete demonstration of the use of the IBM i In-App connector for Azure Logic Apps:


 


Master Microsoft Fabric: Your Ultimate Guide to Certification and Expertise


Below, you’ll find a treasure trove of resources to further your learning and engagement with Microsoft Fabric.


 




Dive Deeper into Microsoft Fabric


 


Microsoft Fabric Learn Together


Join us for expert-guided live sessions! These will cover all necessary modules to ace the DP-600 exam and achieve the Fabric Analytics Engineer Associate certification. 


Explore Learn Together Sessions


Overview: Microsoft Fabric Learn Together is an expert-led live series that provides in-depth walk-throughs covering all the Learn modules to prepare participants for the DP-600 Fabric Analytics Engineer Associate certification. The series consists of 9 episodes delivered in both India and Americas time zones, offering a comprehensive learning experience for those looking to enhance their skills in Fabric analytics.


Agenda:



  1. Introduction to Microsoft Fabric: An overview of the Fabric platform and its capabilities.

  2. Setting up the Environment: Guidance on preparing the necessary tools and systems for working with Fabric.

  3. Data Ingestion and Management: Best practices for data ingestion and management within the Fabric ecosystem.

  4. Analytics and Insights: Techniques for deriving insights from data using Fabric’s analytics tools.

  5. Security and Compliance: Ensuring data security and compliance with industry standards when using Fabric.

  6. Performance Tuning: Tips for optimizing the performance of Fabric applications.

  7. Troubleshooting: Common issues and troubleshooting techniques for Fabric.

  8. Certification Preparation: Focused sessions on preparing for the DP-600 certification exam.

  9. Q&A and Wrap-up: An interactive session to address any remaining questions and summarize key takeaways.


This series is designed to be interactive, allowing participants to ask questions and engage with experts live. It’s a valuable opportunity for those looking to specialize in Fabric Analytics and gain a recognized certification in the field.


For more detailed information and to register for the series, visit the page on Microsoft Learn at https://aka.ms/learntogether. Enjoy your learning journey!



 


Hands-On Learning with Fabric


Enhance your skills with over 30 interactive, on-demand learning modules tailored for Microsoft Fabric.


Start Your Learning Journey and then participate in our Hack Together: The Microsoft Fabric Global AI Hack – Microsoft Community Hub



Special Offer: Secure a 50% discount voucher for the Microsoft Fabric Exam by completing the Cloud Skills Challenge between January and June 2024.



 


Easy Learning with Fabric Notes


Unlock the power of Microsoft Fabric with engaging, easy-to-understand illustrations. Perfect for all levels of expertise!


Access Fabric Notes Here


 


 




Your Path to Microsoft Fabric Certification


Get ready for DP-600: Implementing Analytics Solutions Using Microsoft Fabric. Start preparing today to become a certified Microsoft Fabric practitioner.


 


Join the Microsoft Fabric Community


Connect with fellow Fabric enthusiasts and experts. Your one-stop community hub: https://community.fabric.microsoft.com/. Here’s what you’ll find:



 


Stay Ahead: The Future of Microsoft Fabric


Be in the know with the latest developments and upcoming features. Check out the public roadmap