Citrix Releases Security Updates for Workspace Apps, Virtual Apps and Desktops

This article is contributed. See the original author and article here.

Citrix has released security updates to address high-severity vulnerabilities (CVE-2023-24486, CVE-2023-24484, CVE-2023-24485, and CVE-2023-24483) in Citrix Workspace Apps, Virtual Apps and Desktops. A local user could exploit these vulnerabilities to take control of an affected system.

CISA encourages users and administrators to review Citrix security bulletins CTX477618, CTX477617, and CTX477616 for more information and to apply the necessary updates.

Introducing the New Post-delivery Activities Report in Microsoft Defender for Office 365


This article is contributed. See the original author and article here.

 


Attackers are always evolving to adapt to the newest protections enacted by security teams and the products they rely on. Today, attackers frequently attempt to bypass security tools by sending messages that only become malicious after they have been delivered, which requires a robust post-delivery detection and response mechanism. In this blog, we will explore the evolution of an attack, how Defender for Office 365 provides out-of-the-box post-delivery protection, and how you can see this value for your organization. Today we’re announcing a new report in Microsoft Defender for Office 365 that highlights messages that have been acted upon or moved by Microsoft after they were delivered to the inbox.


Post-delivery activities 


Before diving into this new report, we want to start by covering post-delivery activities – what they are and how they work in Defender for Office 365.  


 


How do attacks land in the mailbox? 


Threat actors rely on the fact that they can send messages and weaponize them later. Attackers frequently send messages with an inactive URL that won’t be detected at the time of delivery; once the messages have been delivered to inboxes, the URLs are weaponized. This puts your end users at risk of credential theft and your organization at risk of a widespread attack. Threats can also be reclassified post-delivery, based on this weaponization by attackers.


That’s where Zero-Hour Auto Purge (ZAP) comes in to protect your organization from these types of attacks. Powered by Microsoft’s advanced security graph, ZAP constantly reviews your messages to identify and neutralize these threats.


 


How does Defender for Office 365 detect and respond to these attacks?  


Microsoft Defender for Office 365 includes ZAP, a post-delivery capability which acts on malicious messages after delivery. Upon identifying a malicious indicator of compromise (IoC), ZAP can find all messages in user mailboxes that contain that IoC. Once the messages are identified, ZAP acts on them based on the specific policy action, securing your end users and your organization. With secure by default, our filtering keeps many potentially dangerous or unwanted messages out of your mailboxes: malware and high-confidence phishing messages detected post-delivery are sent to quarantine by ZAP, with no additional configuration required.
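
ZAP for spam and phishing is controlled per anti-spam policy, and ZAP for malware per anti-malware policy. As a minimal sketch (assuming a connected Exchange Online PowerShell session via the ExchangeOnlineManagement module), you can confirm ZAP is enabled in your policies like this:

# ZAP for malware is a per-policy setting in anti-malware policies
Get-MalwareFilterPolicy | Format-Table Name, ZapEnabled

# ZAP for spam and phishing are per-policy settings in anti-spam policies
Get-HostedContentFilterPolicy | Format-Table Name, SpamZapEnabled, PhishZapEnabled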


 


ZAP receives signals from our advanced security graph and utilizes this threat intelligence to remove malicious messages from the inbox, providing out of the box post-delivery protection for all customers. And this isn’t just for Defender for Office 365 customers; we provide ZAP actions for all Microsoft email services, including Exchange Online Protection and even Outlook.com consumer accounts. The quick system-driven actions reduce the exposure time of your end users, securing your organization in a timely and effective way. There is no need for any admin intervention to identify and trigger an action. Upon detection of the malicious content, ZAP removes the message from the inbox. 


 


 


Post-delivery protection with ZAP 


Where can I review messages that were neutralized by ZAP? 


With our Microsoft Defender for Office 365 P2 and E5 licenses, you can review messages that are neutralized by ZAP within Advanced Hunting and Threat Explorer. You can learn more in our documentation.
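
For example, in Advanced Hunting, the EmailPostDeliveryEvents table records actions taken on messages after delivery. A minimal query sketch (table and column names as documented in the Advanced Hunting schema):

// ZAP actions on already-delivered messages in the last 7 days
EmailPostDeliveryEvents
| where Timestamp > ago(7d)
| where ActionType in ("Phish ZAP", "Malware ZAP")
| project Timestamp, NetworkMessageId, RecipientEmailAddress, Action, ActionType, ActionResult
| order by Timestamp desc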



Introducing the new Post-delivery activities report 


We’ve heard customer feedback that understanding when ZAP took action can be challenging. As a result, we’re happy to announce the launch of the new Post-delivery activities report. The report displays all the ZAP events that occurred in your organization, and if the verdict assigned to a message has changed, the report displays the updated data, making it easier to investigate the messages.


 


You can find the Post-delivery activities report under Email & collaboration reports. 


 




Figure 1: Access the Post-delivery activities report under Email & collaboration reports 


 




Figure 2: Post-delivery activities report 


 


 


From the report, you have direct access to the email entity side panel to review additional information about the message:




Figure 3: Access the email entity summary panel from the report view 



 


 


Learn more about the report by viewing our documentation.


You can use the following PowerShell cmdlets to access the report information for your organization.  



  • Get-AggregateZapReport

  • Get-DetailZapReport


You can learn more about these PowerShell cmdlets here. 
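
As a minimal usage sketch (assuming a connected Exchange Online PowerShell session; run Get-Help on each cmdlet for the exact parameters it supports):

# Aggregated counts of ZAP events for the organization
Get-AggregateZapReport

# Per-message details of ZAP events
Get-DetailZapReport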


 


If you are part of a Security Operations team or a Cyber Threat Intelligence team, get started by navigating to security.microsoft.com/reports/PostDeliveryActivities to review the messages we have blocked in your organization post-delivery.


 


For questions or feedback about Microsoft Defender for Office 365, engage with the community and Microsoft experts in the Defender for Office 365 forum. 

Apple Releases Security Updates for Multiple Products

This article is contributed. See the original author and article here.

Apple has released security updates to address vulnerabilities in multiple products. An attacker could exploit these vulnerabilities to take control of an affected device.

CISA encourages users and administrators to review the Apple security updates page for the following products and apply the necessary updates as soon as possible:
•   Safari 16.3.1
•   iOS 16.3.1 and iPadOS 16.3.1
•   macOS 13.2.1

Lesson Learned #329: DATABASEPROPERTYEX( DB_NAME() ,  ‘Updateability’ ) and db_datareader role


This article is contributed. See the original author and article here.

Today, we got a question from a customer asking whether, when using ApplicationIntent=ReadWrite with a user that only has the db_datareader role, the results of DATABASEPROPERTYEX(DB_NAME(), 'Updateability') will be affected.


 


To check this, let’s create a Business Critical database with read scale-out enabled and then create the user shown below. The short answer: the results are not affected by the db_datareader role; they reflect the connection’s ApplicationIntent.
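
If you want to script the database creation, a minimal Azure CLI sketch might look like this (hypothetical server and resource names; read scale-out is enabled by default on the Business Critical tier):

# Create a Business Critical database with read scale-out enabled
az sql db create --resource-group MyResourceGroup --server myservername --name MyDatabase \
  --edition BusinessCritical --family Gen5 --capacity 2 --read-scale Enabled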


 


 

-- Create a contained database user and grant it read-only access
CREATE USER UserName WITH PASSWORD = 'PasswordX2X3X1!';
ALTER ROLE db_datareader ADD MEMBER UserName;

 


 


Once we have established the connection using SQL Server Management Studio with this user (and ApplicationIntent=ReadWrite) and execute the query:


 


 

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');

 


 


The results will be:


 


READ_WRITE


 


However, using ApplicationIntent=ReadOnly with the same user, the result will be the expected one:


 


READ_ONLY
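
For reference, the two connections differ only in the ApplicationIntent keyword (hypothetical server and database names):

Server=tcp:myservername.database.windows.net,1433;Initial Catalog=MyDatabase;User ID=UserName;Password=PasswordX2X3X1!;ApplicationIntent=ReadWrite;

Server=tcp:myservername.database.windows.net,1433;Initial Catalog=MyDatabase;User ID=UserName;Password=PasswordX2X3X1!;ApplicationIntent=ReadOnly;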


 


Additionally, I would like to share an article that explains the behavior when using a transparent failover group and ApplicationIntent at the same time: Lesson Learned #131: ReadScale Out and Failover Group – Microsoft Community Hub.


 


Enjoy!

Create Azure Container Registry


This article is contributed. See the original author and article here.

In this article, we will learn how to set up Azure Container Registry. A container registry stores and manages private container images and other artifacts, much like Docker Hub stores public Docker container images. Let’s create a container in Visual Studio that we can push to GitHub and then deploy to Azure Container Registry. If you want to follow along on your local computer, you’ll need Docker Desktop installed. This allows you to run containers locally before pushing them to a remote hosting environment, like Azure App Service, or a more complex orchestration environment, like Azure Kubernetes Service. You can download the Docker Desktop installer from this link. Before moving to the next step, make sure Docker Desktop is set up successfully on your workstation.


 


Let’s start by opening Visual Studio and creating a new project. I’ll choose the ASP.NET Core template. Click Next, give the project a name, and then let’s enable Docker. This creates the Dockerfile and configures the project to run locally on Docker Desktop. You can choose whether to create a Linux or a Windows container; let’s just leave Linux. Once the project is created, you can see there’s a Dockerfile, and it shows that the ASP.NET 5 image from Microsoft is being used as the base and the SDK image is being used for the build.
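
For reference, the generated Dockerfile typically looks something like this (a minimal sketch; the project name MyWebApp is hypothetical, and the exact stages Visual Studio generates may differ slightly):

# Runtime image used when the container runs
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

# SDK image used to restore, build, and publish the app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["MyWebApp.csproj", "./"]
RUN dotnet restore "MyWebApp.csproj"
COPY . .
RUN dotnet publish "MyWebApp.csproj" -c Release -o /app/publish

# Final image containing only the published output
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]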


 



Let’s close this file, change this to the name of the image that will be built, and run the project (click the Docker run button). The Containers window opens at the bottom, and then the browser shows the code running in the container.


 




Now let’s close this, go to the Git menu, and create a Git repository. I’m already logged into GitHub, and it’s going to create a local repo, as well as the remote repo in GitHub with the same name. I’ll leave these defaults and create it. Let’s go back to the browser and refresh the homepage. There’s the repo that got created; the code has been uploaded, and you can see the Dockerfile.
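
If you prefer the command line, the equivalent steps look roughly like this (a sketch; the remote URL and branch name are placeholders for your own):

# Initialize the local repository and commit the project
git init
git add .
git commit -m "Initial commit"

# Point it at the new GitHub repository and push
git remote add origin https://github.com/<your-account>/<your-repo>.git
git push -u origin main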


 




 


Now let’s create the Azure Container Registry and the credentials needed for GitHub Actions to push this container to the registry.


 


I have the Azure portal open here, and I’m logged in as an administrator. I’m going to open up the Azure Cloud Shell, so we’ll have a Bash Shell here where we can run commands.


 




You can do this from your desktop, too, but you’ll need the Azure CLI installed locally. Everything is already configured in the Cloud Shell, and I don’t need to log into the CLI, either. Now, I’m going to be running commands that use variables, so you can copy the following standard variables (required while setting up the Azure Container Registry) into a new script file called variables.sh.
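
As a sketch, variables.sh might look like this (example values; change them to whatever you want your resources to be called):

#!/bin/bash
# Resource group that will contain everything we create
RG_NAME=acr-demo-rg
# Azure region for the resources
RG_LOCATION=eastus
# Name of the Azure Container Registry instance (must be unique across Azure)
ACR_NAME=acrdemoregistry123
# Service principal that GitHub will use for deployments
SP_NAME=acr-demo-github-sp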


 




But these variables need to be set up front. So rather than type them in individually, I’m going to upload a file to Azure; the Cloud Shell is backed by your own file share, which makes this possible. I have this file called variables.sh. Let’s upload it.


 




Now if I click the Edit button, the files are shown on the left, and the uploaded file is at the bottom. Click on it, and you can edit it right here in the browser. Change the values of these variables to whatever you want the resources to be called. We’ll need a resource group; a name for the Azure Container Registry instance we’ll be creating, which needs to be unique across Azure; a service principal name, which is just a security account that we’ll grant privileges to for GitHub to use for deployments; and finally, an Azure region for the location. I’ll use East US. Now let’s save this file.


 




And we can run this file by typing a period, a space, and then the file name. This makes the variables available within the current Bash context. If I echo out the value of one of these variables, you can see that it’s available.
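
For example:

# Source the file so the variables are set in the current shell
. variables.sh

# Verify one of them
echo $RG_NAME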


 




Okay, let’s run our first command. First, let’s create a resource group. We’ll use the name and location variables, and this gives us a container that we can keep everything in and delete as a group later. Next, let’s get the ID of the resource group using az group show with a query for the ID. (I forgot to preface the variable name with a dollar sign at first.) Okay, there it is.


 

az group create --name $RG_NAME --location $RG_LOCATION

 



RG_ID=$(az group show --name $RG_NAME --query id --output tsv)

 




Now let’s create the service principal that GitHub will use to deploy the container. We’re using az ad sp create-for-rbac, scoping it to just the resource group, and adding the --sdk-auth parameter. This outputs the result in a format that we can paste into a GitHub secret and use for authentication within a GitHub action later. I’ve masked the values for security reasons. The command gives us back info about the service principal, including the password, which is called a client secret here. You won’t be able to see this password again, and we’re going to need all of this later, so let’s copy it, open Notepad, and paste the service principal information there.


 

az ad sp create-for-rbac --name $SP_NAME --scope $RG_ID --role Contributor --sdk-auth

 




Okay, now rather than copy the client ID, let’s do a query to store it in a variable, because we’ll need it for other commands. I’ll just echo it out to make sure it’s the same. Good.


 

SP_ID=$(az ad sp list --display-name $SP_NAME --query "[].appId" --output tsv)

 


Okay, now we’re ready to create the Azure Container Registry instance. We do that with az acr create, passing the resource group name, a name for the registry, and a SKU; the Basic pricing tier is fine for testing. Let’s run this, and I’ll close the editor.


 



az acr create --resource-group $RG_NAME --name $ACR_NAME --sku Basic

 


When it completes, we get the resource info back, and the login server URL is listed in the output. Let’s go to All services, search for Container, and open up Container registries. It can take a few seconds to show up, so I’ll just refresh. And there’s the new registry. Now let’s go back to the Cloud Shell at the bottom of the browser window and run this command to get the ID of the container registry. I’ll just print that out.

ACR_ID=$(az acr show --name $ACR_NAME --query id --output tsv)

 


Okay, now we need to assign a permission to the service principal that will allow GitHub to push containers into the registry using this credential. We do that with az role assignment create, and the role we’re assigning to the service principal is the AcrPush role.

az role assignment create --assignee $SP_ID --scope $ACR_ID --role AcrPush

 


Okay, now we have the resource created and the permissions we need. The next step is to create the GitHub Actions workflow to build and push the container image, and we’ll configure GitHub secrets for the service principal values that the workflow will need. Let’s do that next.