Lesson Learned #456: Invalid Key Column Type Error in Azure SQL Database DataSync

This article is contributed. See the original author and article here.

One such error for Azure SQL Database users employing DataSync is: “Database provisioning failed with the exception ‘Column is of a type that is invalid for use as a key column in an index.’” This article dissects the error and provides insights and practical solutions for database administrators and developers.

Understanding the Error:

This error signifies a mismatch between the column data type used in an index and what is permissible within Azure SQL Data Sync’s framework. Such mismatches can disrupt database provisioning, a critical step in the synchronization process.

Data Types and Index Restrictions in DataSync:

Azure SQL Data Sync imposes specific limitations on data types and index properties. Notably, it does not support indexes on nvarchar(max) columns, which this customer’s schema used. Additionally, primary keys cannot be of types such as sql_variant, binary, varbinary, image, or xml. See What is SQL Data Sync for Azure? – Azure SQL Database | Microsoft Learn for the full list of restrictions.


 


Practical Solutions:

  1. Modify Data Types: If feasible, alter the data type from nvarchar(max) to a smaller, bounded variant (for example, nvarchar(450)).

  2. Index Adjustments: Review your database schema and modify or remove indexes that include unsupported column types.

  3. Exclude Problematic Columns: Consider omitting columns with unsupported data types from your DataSync synchronization groups.
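As a quick sanity check before provisioning, the restrictions above can be encoded in a small script. This is an illustrative sketch only: the schema tuple format and the helper name are made up for the example, and it is not part of DataSync itself.

```python
# Hypothetical helper: flag columns that Azure SQL Data Sync cannot use,
# per the restrictions described above. The (name, type, is_key) schema
# format is an assumption for illustration.

UNSUPPORTED_KEY_TYPES = {"sql_variant", "binary", "varbinary", "image", "xml"}

def problem_columns(columns):
    """Return (column, reason) pairs that would break Data Sync provisioning."""
    issues = []
    for name, data_type, is_key in columns:
        t = data_type.lower()
        if t == "nvarchar(max)":
            issues.append((name, "nvarchar(max) cannot be indexed by Data Sync"))
        elif is_key and t in UNSUPPORTED_KEY_TYPES:
            issues.append((name, f"{data_type} is not allowed as a primary key type"))
    return issues

schema = [
    ("OrderId", "int", True),
    ("Payload", "xml", True),           # unsupported primary key type
    ("Notes", "nvarchar(max)", False),  # unsupported in an index
]
for col, reason in problem_columns(schema):
    print(f"{col}: {reason}")
```

Running this against the example schema flags the `Payload` and `Notes` columns, mirroring the two restrictions called out in this article.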


 


 

Windows Containers on AKS Customer Stories

We have published a new page on Azure to highlight Windows Containers on AKS customer stories, featuring M365 (supporting products such as Office and Teams), Forza (Xbox Game Studios), Relativity, and Duck Creek.


 


If you are looking for a way to modernize your Windows applications, streamline your development process, and scale your business with Azure, you might be interested in learning how other customers have achieved these goals by using Windows Containers on Azure Kubernetes Service (AKS).

Windows Containers on AKS lets you run your Windows applications alongside Linux applications in the same fully managed Kubernetes cluster, with seamless integration and minimal code modifications. Windows Containers on AKS offers a number of benefits, such as:



  • Reduced infrastructure and operational costs 

  • Improved performance and reliability 

  • Faster and more frequent deployments 

  • Enhanced security and compliance 

  • Simplified management and orchestration 


 


Stay tuned for new stories that will be published soon, featuring customers from new industries and with new scenarios using Windows Containers. 


 


In the meantime, we invite you to check out the Windows Container GitHub repository, where you can find useful resources, documentation, samples, and tools to help you get started. You can also share your feedback, questions, and suggestions with the Windows Container product team and the community of users and experts. 

Transform the way work gets done with Microsoft Copilot in Dynamics 365 Business Central

In the rapidly evolving AI landscape, Microsoft Dynamics 365 Business Central is taking the lead with innovations that have equipped more than 30,000 small and medium-sized businesses to achieve success. Powered by next-generation AI, Microsoft Copilot offers new ways to enhance workplace efficiency, automate mundane tasks, and unlock creativity. At a time when nearly two in three people say they struggle with having the time and energy to do their job, Copilot helps to free up capacity and enables employees to focus on their most meaningful work.1 

Dynamics 365 Business Central brings the power of AI to small and medium-sized businesses to help companies work smarter, adapt faster, and perform better. AI in Dynamics 365 Business Central improves the way work gets done, enabling you to:

  • Get answers quickly and easily using natural language.
  • Save time by automating tedious, repetitive tasks.
  • Spark creativity with creative content ideas.
  • Anticipate and overcome business challenges.

Reclaim time for important work

In a small or medium-sized business, there is often a lot to do and not many people to help get it all done, so it’s important to make the most of your limited resources to accomplish your goals. Everyday activities like tracking down documents and bringing new employees up to speed can drain your valuable time. What if you had an AI-powered assistant ready to help you find exactly what you need without the hassle?

Available in early 2024, conversational chat using Copilot in Dynamics 365 Business Central helps you answer questions quickly and easily, locate records faster, and even learn new skills—all using natural language. Save time and effort by navigating to documents without having to use traditional menus, and rapidly onboard new users with answers to questions on how, when, or why to do things. Copilot is your everyday AI companion, helping you to speed through tasks, build momentum, and free time for your most impactful work. 

Streamline month-end tasks with enhanced bank reconciliation

Reconciling bank statement transactions with your financial system has often been a tedious monthly chore. Meticulously matching every line item to new or existing accounting entries takes time (and isn’t the most exciting way to spend an afternoon). In the past, Business Central helped by auto-matching many of the simple one-to-one transactions, but the logic wasn’t able to decipher more complex scenarios, such as when multiple charges were paid in a single transaction.

Now, Copilot in Business Central makes bank reconciliation even easier by analyzing bank statements that you import into Business Central, matching more transactions, and proposing entries for transactions that weren’t auto-matched. By comparing and interpreting transaction descriptions, amounts, dates, and patterns across fields, Copilot can help you improve the accuracy of your bank reconciliation while reducing manual effort.
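Copilot's matching model is proprietary, but the basic idea of auto-matching one-to-one transactions by amount and date, while leaving the complex cases for review, can be sketched as a toy example. Everything here (data shapes, function name, sample figures) is invented for illustration and is not Business Central's actual logic.

```python
# Toy illustration: match bank statement lines to ledger entries on exact
# amount and date; anything unmatched is left for (Copilot-assisted) review.

def auto_match(bank_lines, ledger_entries):
    remaining = list(ledger_entries)
    matched, unmatched = [], []
    for line in bank_lines:
        for entry in remaining:
            if line["amount"] == entry["amount"] and line["date"] == entry["date"]:
                matched.append((line["id"], entry["id"]))
                remaining.remove(entry)  # each ledger entry matches at most once
                break
        else:
            unmatched.append(line["id"])
    return matched, unmatched

bank = [
    {"id": "B1", "amount": -54.20, "date": "2023-11-03"},
    {"id": "B2", "amount": -130.00, "date": "2023-11-07"},  # two charges paid at once
]
ledger = [
    {"id": "L1", "amount": -54.20, "date": "2023-11-03"},
    {"id": "L2", "amount": -80.00, "date": "2023-11-07"},
    {"id": "L3", "amount": -50.00, "date": "2023-11-07"},
]
matched, needs_review = auto_match(bank, ledger)
print(matched)       # [('B1', 'L1')]
print(needs_review)  # ['B2'] -> the many-to-one case simple rules can't decipher
```

The `B2` line, which paid two ledger charges in a single transaction, is exactly the kind of scenario that rule-based matching misses and that Copilot now proposes entries for.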

Unlock creativity with marketing text suggestions

Copilot in Business Central helps product managers save time and drive sales with compelling AI-generated marketing text suggestions. Using key attributes like color and material, Copilot can create product descriptions in seconds tailored to your preferred tone, format, and length. Once you’ve made any adjustments, you can easily publish to Shopify or other ecommerce platforms with just a few clicks. Discover how Copilot can help you banish writer’s block and launch new products with ease.

Boost customer service with inventory forecasting

Effective inventory management is crucial in a competitive business environment as it can significantly influence a company’s success and customer retention. This process involves balancing customer service with cost control. Maintaining low inventory reduces working capital, but risks missing sales due to stock shortages. Using AI, the Sales and Inventory Forecast extension uses past sales data to forecast future demand, helping to prevent stockouts. Once a shortfall is identified, Business Central streamlines the replenishment process by generating vendor requests, helping you keep your customers happy by fulfilling their orders on time, every time.  
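The extension's forecasting model is not documented here, but the general idea of projecting demand from past sales and flagging a shortfall can be illustrated with a toy moving-average sketch. This is illustrative only (made-up data, simplistic model), not the Sales and Inventory Forecast algorithm.

```python
# Toy illustration: forecast next period's demand as the average of the last
# `window` periods, then flag a stockout risk if inventory on hand is lower.

def forecast_demand(past_sales, window=3):
    recent = past_sales[-window:]
    return sum(recent) / len(recent)

past_sales = [120, 135, 150, 160, 170]  # units sold per month (made-up data)
on_hand = 140

expected = forecast_demand(past_sales)
print(f"Forecast demand: {expected:.0f} units")  # Forecast demand: 160 units
if on_hand < expected:
    print("Shortfall expected: raise a replenishment request with the vendor")
```

The last step mirrors what the article describes: once a shortfall is identified, Business Central generates the vendor request to replenish stock.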

Reduce risk with late payment prediction

Managing receivables effectively is vital for a business’s financial wellbeing. With the Late Payment Prediction extension, you can reduce outstanding receivables and refine your collections approach by forecasting if outstanding sales invoices are likely to be paid on time. For instance, if a payment is anticipated to be delayed, you could modify the payment terms or method for that customer. By proactively addressing potential late payments and adapting accordingly, you can minimize overdue receivables, reduce risk of non-payment, and ultimately improve your financial performance.

Improve financial stability with Cash Flow Analysis

Powered by AI, Business Central can create a comprehensive Cash Flow Analysis to help you monitor your company’s cash position. Cash flow is a critical indicator of a company’s solvency, and cash flow analysis is an important future-focused planning tool that helps you maintain control over your financial health and make proactive adjustments to meet all your financial commitments. With insights from Business Central, you can pivot quickly to safeguard your company’s fiscal wellbeing, such as obtaining loans to cover cash shortfalls or cutting back on credit when you have surplus cash.

Work smarter with Copilot in Business Central

Copilot in Business Central gives your company an edge with AI-powered innovations that are a catalyst for unleashing human potential, fostering creativity, and driving efficiency in ways previously unimaginable. The integration of AI into everyday business processes is not just about staying ahead in a competitive market, it’s about redefining what’s possible in the workplace. With Business Central, your company is empowered to navigate today’s complex business environment with agility, precision, and a renewed focus on what truly matters.



Sources

1 Microsoft Work Trend Index Annual Report, May 2023

The post Transform the way work gets done with Microsoft Copilot in Dynamics 365 Business Central appeared first on Microsoft Dynamics 365 Blog.


Azure Managed Lustre with Automatic Synchronisation to Azure BLOB Storage

Introduction


This blog post walks through how to set up an Azure Managed Lustre Filesystem (AMLFS) that automatically synchronises to an Azure BLOB Storage container. The synchronisation is achieved using the Lustre HSM (Hierarchical Storage Management) interface combined with the Robinhood policy engine and a tool that reads the Lustre changelog and synchronises metadata with the archived storage. The lfsazsync repository on GitHub contains a Bicep template to deploy and set up a virtual machine for this purpose.


 



Disclaimer: The lfsazsync deployment is not a supported Microsoft product; you are responsible for the deployment and operation of the solution. Updates need to be applied to AMLFS, which require a Support Request to be raised through the Azure Portal. These updates could affect the stability of AMLFS, and customers requiring the usual level of SLA should speak to their Microsoft representative.



Initial Deployment


The following is required before running the lfsazsync Bicep template:



  • Virtual Network

  • Azure BLOB Storage Account and container (HNS is not supported)

  • AMLFS deployed without HSM enabled


The lfsazsync repository contains a test/infra.bicep example to create the required resources:


 




 


To deploy, first create a resource group, e.g.



# TODO: set the variables below
resource_group=
location=
az group create --name $resource_group --location $location

 


Then deploy into this resource group:



az deployment group create --resource-group $resource_group --template-file test/infra.bicep

 



Note: The bicep file has parameters for names, ip ranges etc. that should be set if you do not want the default values.



 


Updating the AMLFS settings


Once deployment is complete, navigate to the Azure Portal, locate the AMLFS resource and click on “New Support Request”. The following shows the suggested request to get AMLFS updated:


 




 


The lctl commands needed are listed here.


 


Deploying Azure BLOB Storage Synchronisation


The lfsazsync deployment sets up a single virtual machine for all tasks. The HSM copytools could be run on multiple virtual machines to increase transfer performance. The bandwidth for archiving and retrieval is constrained to approximately half the network bandwidth available to the virtual machine, because the same network is used both for accessing the Lustre filesystem and for accessing Azure Storage. This should be considered when choosing the virtual machine size. The virtual machine sizes and expected network performance are available here.


 


The Bicep template has the following parameters:

  • subnet_id: The ID of the subnet to deploy the virtual machine to

  • vm_sku: The SKU of the virtual machine to deploy

  • admin_user: The username of the administrator account

  • ssh_key: The public key for the administrator account

  • lustre_mgs: The IP address/hostname of the Lustre MGS

  • storage_account_name: The name of the Azure storage account

  • storage_container_name: The container to use for synchronising the data

  • storage_account_key: A SAS key for the storage account

  • ssh_port: The port used by sshd on the virtual machine

  • github_release: The release tag from which robinhood and lemur will be downloaded

  • os: The OS to use for the VM (options: ubuntu2004 or almalinux87)

 


The SAS key can be generated using the following Azure CLI command:



# TODO: set the account name and container name below
account_name=
container_name=

start_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
expiry_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ" --date "next month")

az storage container generate-sas \
    --account-name $account_name \
    --name $container_name \
    --permissions rwld \
    --start $start_date \
    --expiry $expiry_date \
    -o tsv


 


The following Azure CLI command can be used to get the subnet ID:



# TODO: set the variables below
resource_group=
vnet_name=
subnet_name=

az network vnet subnet show --resource-group $resource_group --vnet-name $vnet_name --name $subnet_name --query id --output tsv


 


The following Azure CLI command can be used to deploy the Bicep template (as an alternative to setting environment variables, the parameters could be set in a parameters.json file):



# TODO: set the variables below
resource_group=
subnet_id=
vmsku="Standard_D32ds_v4"
admin_user=
ssh_key=
lustre_mgs=
storage_account_name=
storage_container_name=
storage_sas_key=
ssh_port=
github_release="v1.0.1"
os="almalinux87"

az deployment group create \
    --resource-group $resource_group \
    --template-file lfsazsync.bicep \
    --parameters \
        subnet_id="$subnet_id" \
        vmsku=$vmsku \
        admin_user="$admin_user" \
        ssh_key="$ssh_key" \
        lustre_mgs=$lustre_mgs \
        storage_account_name=$storage_account_name \
        storage_container_name=$storage_container_name \
        storage_sas_key="$storage_sas_key" \
        ssh_port=$ssh_port \
        github_release=$github_release \
        os=$os


 


After this call completes the virtual machine will be deployed, although it will take more time to install the software and import the metadata from Azure BLOB storage into the Lustre filesystem. Progress can be monitored in the /var/log/cloud-init-output.log file on the virtual machine.


 


Monitoring


The install sets up three systemd services: lhsmd, robinhood and lustremetasync. Their log files are located here:

  • lhsmd: /var/log/lhsmd.log

  • robinhood: /var/log/robinhood*.log

  • lustremetasync: /var/log/lustremetasync.log


 


Default archive settings


The synchronisation parameters can be controlled through the Robinhood config file, /opt/robinhood/etc/robinhood.d/lustre.conf. Below are some of the default settings and their locations in the config file:

  • Archive interval: 5 minutes (lhsm_archive_parameters.lhsm_archive_trigger)

  • Rate limit: 1000 files (lhsm_archive_parameters.rate_limit.max_count)

  • Rate limit interval: 10 seconds (lhsm_archive_parameters.rate_limit.period_ms)

  • Archive threshold: last modified time > 30 minutes (lhsm_archive_parameters.lhsm_archive_rules)

  • Release trigger: 85% of OST usage (lhsm_archive_parameters.lhsm_release_trigger)

  • Small file release: last access > 1 year (lhsm_archive_parameters.lhsm_release_rules)

  • Default file release: last access > 1 day (lhsm_archive_parameters.lhsm_release_rules)

  • File remove: removal time > 5 minutes (lhsmd.lhsmd_remove_rules)

 


To update the config file, edit the file and then restart the robinhood service, systemctl restart robinhood.
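As an illustration, an edit to the archive and release thresholds might look like the fragment below. This is a sketch based on Robinhood's general configuration syntax; the exact block names and nesting should be checked against the deployed /opt/robinhood/etc/robinhood.d/lustre.conf before making changes.

```
# Illustrative only: verify names/nesting against the deployed lustre.conf
lhsm_archive_parameters {
    lhsm_archive_rules {
        rule default {
            condition { last_mod > 30min }   # archive files idle for 30+ minutes
        }
    }
}

lhsm_release_trigger {
    trigger_on         = ost_usage;
    high_threshold_pct = 85;   # start releasing at 85% OST usage
    low_threshold_pct  = 80;   # stop once usage drops below 80%
}
```

Remember to restart the robinhood service after any edit so the new thresholds take effect.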


The lustremetasync service processes the Lustre ChangeLog continuously, so actions normally happen immediately; during a large burst of IO it may take a few minutes to catch up. The following operations are handled:


 




  • Create/delete directories


    Directories are created in BLOB storage as an empty object with the name of the directory. Metadata on this object indicates that it is a directory. The same object is deleted when the directory is removed on the filesystem.




  • Create/delete symbolic links


    Symbolic links are created in BLOB storage as an empty object with the name of the symbolic link. Metadata on this object indicates that it is a symbolic link and contains the path that it links to. The same object is deleted when the link is removed on the filesystem.




  • Moving files or directories


    Moving files or directories requires everything being moved to first be restored to the Lustre filesystem. The files are then marked as dirty in their new location and the existing objects are deleted from BLOB storage. Robinhood will handle archiving the files again in their new location.




  • Updating metadata (e.g. ownership and permissions)


    Metadata is only updated directly for archived files that have not been modified. For modified files, the metadata is set when Robinhood next updates the archived file.




 




Defender EASM – Performing a Successful Proof of Concept (PoC)

Welcome to an introduction to the concepts and approach required for executing a successful Proof of Concept (PoC) for Microsoft Defender External Attack Surface Management (Defender EASM). This article serves as a high-level guide to a simple framework for evaluating Defender EASM, along with other items to consider as you work to understand the Internet-exposed digital assets that make up your external attack surface, so you can view risks through the same lens as a malicious threat actor.


 


Planning for the PoC 


To ensure success, the first step is planning. This entails understanding the value of Defender EASM, identifying the stakeholders who need to be involved, and scheduling planning sessions to determine use cases, requirements, and scope before beginning.


For example, one of the core benefits of the Defender EASM solution is that it provides high value visibility to Security and IT (Information Technology) teams that enables them to: 



  • Identify previously unknown assets 

  • Prioritize risk 

  • Eliminate threats 

  • Extend vulnerability and exposure control beyond the firewall 


Next, you should identify all relevant stakeholders, or personas, and schedule 1-2 short planning sessions to document the tasks and expected outcomes, or requirements. These sessions will establish the definition of success for the PoC.


Who are the common stakeholders that should participate in the initial planning sessions? The answer to that question will be unique to each organization, but some common personas include the following: 



  • Vulnerability Management Teams 

  • IT personnel responsible for Configuration Management, Patching, Asset Inventory Databases 

  • Governance, Risk, & Compliance (GRC) Teams 



  • (Optional) GRC aligned Legal, Brand Protection, & Privacy Teams 

  • Internal Offensive Penetration Testing and Red Teams 

  • Security Operations Teams 

  • Incident Response Teams 

  • Cyber Threat Intelligence, Hunting, and Research Teams 


 


Use Cases & Requirements 


Based on the scope, you can begin collaborating with the correct people to establish use cases & requirements to meet the business goals for the PoC. The requirements should clearly define the subcomponents of the overarching business goals within the charter of your External Attack Surface Management Program. Examples of business goals and high-level supporting requirements might include: 



  • Discover Unknown Assets 



  • Find Shadow IT 



  • Discover Abandoned Assets 



  • Resulting from Mergers, Acquisitions, or Divestitures 

  • Insufficient Asset Lifecycle Management in Dev/Test/QA Environments  



  • Identification of Vulnerabilities 



  • Lack of Patching or Configuration Management 



  • Assignment of Ownership to Assets 



  • Line of Business or Subsidiary 

  • Based on Geographic Location 

  • On-Prem vs Cloud 



  • Reporting, Automation, and Defender EASM Data Integrations 




  • Use of a reporting or visualization tool, such as PowerBI 

  • Logic Apps to automate management of elements of your attack surface 


 


Prerequisites to Exit the Planning Phase 



  • Completion of the Planning Phase! 

  • Configure an Azure Active Directory or personal Microsoft account. Log in or create an account here.

  • Set up a Free 30-day Defender EASM Trial 


– Visit the following link for information related to setting up your Defender EASM attack surface today for free. 



  • Deploy & Access the Defender EASM Platform 


– Login to Defender EASM 


– Follow the deployment Quick Start Guide 


 


Measuring Success


Determining how success will be measured establishes the criteria for a successful or failed PoC. Success and acceptance criteria should be established for each requirement identified. Weights may be applied to requirements, but measuring success can be as simple as writing out criteria as below:


 


Requirement: Custom Reporting


Success Criteria: As a vulnerability manager, I want to view a daily report that shows the assets with CVSSv2 and CVSSv3 scores of 10.


Acceptance Criteria:



  • Data must be exported to Kusto

  • Data must contain assets & CVSS (Common Vulnerability Scoring System) scores

  • Dashboards must be created with PowerBI and accessible to user

  • Dashboard data must be updated daily


Validation: Run a test to validate that the acceptance criteria have been met.


Pass / Fail: Pass


 


Executing the PoC 


 


Implementation and Technical Validation 


We will now look at five different use cases & requirements, define the success and acceptance criteria for each, and validate that the requirements are met by observing the outcome of each in Defender EASM. 


 


Use Case 1: Discover Unknown Assets, Finding Shadow IT 


 


Success Criteria: As a member of the Contoso GRC team, I want to identify Domain assets in our attack surface that have not been registered with the official company email address we use for domain registrations.


 


Acceptance Criteria: 



  • Defender EASM allows for searches of Domain WHOIS data that returns the “Registrant Email” field in the result set.  


Validation: 



  1. Click the “Inventory” link on the left of the main Defender EASM page. 




Figure: Launch the inventory query screen



  2. Execute a search in Defender EASM that excludes Domains registered with our official company email address of ‘domainadmin@contoso.com’ and returns all other Domains that have been registered with an email address that contains the email domain ‘contoso.com’.




Figure: Query for incorrectly registered Domain assets


 



  3. Click on one of the domains in the result set to view asset details. For example, the “woodgrovebank.com” domain.

  4. When the asset details open, confirm that the domain ‘woodgrovebank.com’ is shown in the upper left corner.

  5. Click on the “Whois” tab.

  6. Note that this Domain asset has been registered with an email address that does not match the corporate standard (i.e., “employeeName@contoso.com”) and should be investigated for the existence of Shadow IT.




Figure: WHOIS asset details





 


Use Case 2: Abandoned Assets, Acquisitions


 


Success Criteria: As a member of the Contoso Vulnerability Management team, who just acquired Woodgrove Bank, I want to ensure acquired web sites using the domain “woodgrovebank.com” are redirected to web sites using the domain “contoso.com”.  I need to obtain results of web sites that are not redirecting as expected, as those may be abandoned web sites.


 


Acceptance Criteria:



  • Defender EASM allows for search of specific initial and final HTTP (Hypertext Transfer Protocol) response codes for Page assets

  • Defender EASM allows for search of initial and final Uniform Resource Locator (URL) for Page assets


Validation:



  1. Run a search in Defender EASM that looks for Page assets that have:

    1. Initial response codes that cause HTTP redirects (i.e., “301”, “302”)

    2. Initial URLs that contain “woodgrovebank.com”

    3. Final HTTP response codes of “200”

    4. Final URL, post HTTP redirect, that does not contain “contoso.com”






Figure: Query for incorrect page redirection


 



  2. Click one of the Page assets in the result set to see the asset details.




Figure: Page asset overview



  3. Validate:

    1. Initial URL contains “woodgrovebank.com”

    2. Initial response code is either “301” or “302”

    3. Final URL does not contain “contoso.com”

    4. Final response code is “200”




 





 


Use Case 3: Identification of Vulnerabilities, Lack of Patching or Configuration Management


 


Success Criteria: As a member of the Contoso Vulnerability Management team, I need the ability to retrieve a list of assets with high priority vulnerabilities and remediation guidance in my attack surface.


 


Acceptance Criteria:



  • Defender EASM provides a dashboard of prioritized risks in my external attack surface

  • Defender EASM provides remediation guidance for each prioritized vulnerability

  • Defender EASM provides an exportable list of assets impacted by vulnerability


 


Validation:



  1. From the main Defender EASM page, click “Attack Surface Summary” to view the “Attack Surface Summary” dashboard

  2. Click the link that indicates the number of assets impacted by a specific vulnerability to view a list of impacted assets


 




Figure: Attack Surface Insights Dashboard


 



  3. Validate that Defender EASM provides additional information about vulnerabilities and remediation guidance.

  4. Click the link in the upper right corner titled “Download CSV report” and validate the contents within.


 




Figure: Vulnerability remediation details


 





 


Use Case 4: Assignment of Ownership to Assets, Line of Business or Subsidiary


 


Success Criteria: As a member of the Contoso GRC team, I need the ability to assign ownership of assets to specific business units through labels, along with a mechanism to quickly visualize this relationship.


 


Acceptance Criteria:



  • Defender EASM provides an approach to assigning ownership via labels

  • Defender EASM allows users to apply labels to assets that meet specific indicators of affiliation with a particular business unit

  • Defender EASM provides the ability to apply labels in bulk


 


Validation:



  1. Click the “Inventory” link on the left of the main Defender EASM page to launch the search screen

  2. Run a search that returns all Page assets that are on the IP Block “10.10.10.0/24”. The Page assets on this network all belong to the Financial Services line of business, so it is the only indicator of ownership needed in this example.


 




Figure: Query to determine Page asset ownership by IP Block


 



  3. Select all assets in the result set by clicking the arrow to the right of the checkbox, as shown in the following image, and choose the option for all assets.




Figure: Selecting assets for bulk modification


 



  4. Click the link to modify assets, followed by the link to “Create a new label” on the blade that appears.

  5. A new screen will appear that allows the creation of a label. Enter a descriptive “Label name”, an optional “Display name”, select a desired color, and click “Add” to finish creating the label.




Figure: Link to modify assets and create a label


 




Figure: Create label detail


 



  6. After creating the label, you will be directed back to the screen to modify assets. Validate that the label was created successfully.

  7. Click into the label text box to see a list of available labels and select the one that was just created.

  8. Click “Update”.




Figure: Label selected assets


 



  9. Click the bell icon to view task notifications and validate the status of the label update.




Figure: View status of label update task


 



  10. When the task is complete, run the search again to validate that labels have been applied to the assets owned by the Financial Services organization.




Figure: Query to validate labels have been applied to assets


 





 


Finishing the PoC


 


Summarize Your Findings


Identify how the Defender EASM solution has provided increased visibility to your organization’s attack surface in the PoC.



  1. Have you discovered unknown assets related to Shadow IT?

  2. Were you able to find potentially abandoned assets related to an acquisition?

  3. Has your organization been able to better prioritize vulnerabilities to focus on the most severe risks?

  4. Do you now have a better view of asset ownership in your organization?


 


Feedback?


We would love to hear any ideas you may have to improve our Defender EASM platform, or where and how you might use Defender EASM data elsewhere in the Microsoft Security ecosystem or in third-party security applications. Please contact us via email at mdesam-pm@microsoft.com to share any feedback you have regarding Defender EASM.


 


Interested in Learning About New Defender EASM Features?


Please join our Microsoft Security Connection Program if you are not already a member and follow our Private & Public Preview events. You will not have access to this exclusive Teams channel until you complete the steps to become a Microsoft Security Connection Program member. Users who would like to influence the direction and strategy of our security products are encouraged to participate in our Private Preview events. Members who participate in these events will earn credit toward the respective Microsoft product badges delivered by Credly.


 


Conclusion


You now understand how to execute a simple Defender EASM PoC, including deploying your first Defender EASM resource, identifying common personas, setting requirements, and measuring success. Don’t forget: you can enjoy a free 30-day trial by clicking on the link below.


You can start your attack surface discovery journey today for free.