Create a data maintenance strategy for Dynamics 365 finance and operations data (part two)


This article is contributed. See the original author and article here.

A well-defined data maintenance strategy improves the quality and performance of your database and reduces storage costs. In part one of this series, we covered the roles and responsibilities of your data strategy team, tools for reviewing storage usage, and data management features in Dynamics 365 finance and operations apps that your strategy should include. We recommended that you start your planning by decommissioning unneeded sandbox environments in your tenant. In this post, we focus on creating a data retention strategy for tables as part of your overall storage maintenance strategy.

Create a data retention strategy for tables

After sandbox environments, tables have the greatest impact on total storage volume. Your data maintenance strategy should include a plan for how long to retain the data in specific tables, especially the largest ones—but don’t overlook smaller, easily manageable tables.

Review table storage by data category

In the Power Platform admin center capacity report for the production environment, drill down to the table details.

The Finance and operations capacity report showing database usage by table

Identify the largest tables in your production environment. For each one, determine the members of your data strategy team who should be involved and an action based on the table’s data category. The following table provides an example analysis.

Data category: Log and temporary data with standard cleanup routines
Examples: SALESPARMLINE, USERLOG, BATCHHISTORY, *STAGING
Strategy: This category of data is temporary by design unless it’s affected by a customization or used in a report. Run standard cleanup after testing in a sandbox.
Note: If reports are built on temporary data, consider revisiting this design decision.
Team members:
  • System admin
  • Customization partner or team if customized
  • BI and reporting team

Data category: Log and temporary data with retention settings
Examples: DOCUHISTORY, SYSEMAILHISTORY
Strategy: This data is temporary by design but has an automatically scheduled cleanup. Most automatic jobs have a retention setting. Review retention parameters and update after testing in a sandbox.
Team members:
  • System admin
  • Customization partner or team if customized

Data category: Log data used for auditing purposes
Examples: SYSDATABASELOG
Strategy: Establish which department uses the log data and discuss acceptable retention parameters and cleanup routines.
Team members:
  • System admin
  • Business users
  • Controllers and auditors

Data category: Workbook data with standard cleanup routines
Examples: SALESLINE, LEDGERJOURNALTRANS
Strategy: Data isn’t temporary by design, but is duplicated when posted as financial. Discuss with the relevant department how long workbook data is required in the system, then consider cleaning up or archiving data in closed periods.
Team members:
  • System admin
  • Business users related to the workbook module
  • BI and reporting team for operational and financial reports

Data category: Columns with tokens or large data formats
Examples: CREDITCARDAUTHTRANS
Strategy: Some features have in-application compression routines to reduce the size of data. Review the compression documentation and determine what data is suitable for compression.
Team members:
  • System admin
  • Business users

Data category: Financial data in closed periods
Examples: GENERALJOURNALACCOUNTENTRY
Strategy: Eventually you can remove even financial data from the system. Confirm with the controlling team or auditors when data can be permanently purged or archived outside of Dynamics 365.
Team members:
  • System admin
  • Controllers and auditors
  • Financial business unit
  • BI and reporting team for financial reports

Data category: Log or workbook data in ISV or custom tables
Examples: Tables that start with the ISV’s three-letter moniker
Strategy: Discuss ISV or custom code tables with their developers.
Team members:
  • System admin
  • Customization partner or team
  • ISV
  • BI and reporting team, depending on the customization

Consider whether table data needs to be stored

For each large table, continue your analysis with the following considerations:

  • Current business use: Is the data used at all? For instance, was database logging turned on by accident or for a test that’s been completed?
  • Retention per environment: Evaluate how long data should be in Dynamics 365 per environment. For instance, your admin might use 30 days of batch history in the production environment to look for trends but would be content with 7 days in a sandbox.
  • Data life cycle after Dynamics 365: Can the data be purged? Should it be archived or moved to long-term storage?

With the results of your analysis, your data strategy team can determine a retention strategy for each table.
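To keep those decisions auditable, it can help to record them as plain data rather than tribal knowledge. The following sketch is illustrative only: the table names, environment names, and retention periods are example values for your data strategy team to replace, not recommendations.

```python
from datetime import date, timedelta

# Hypothetical retention policy: days of data to keep, per table and
# environment. Values here are examples only -- your data strategy team
# decides the actual figures after testing in a sandbox.
RETENTION_DAYS = {
    ("BATCHHISTORY", "production"): 30,
    ("BATCHHISTORY", "sandbox"): 7,
    ("SYSDATABASELOG", "production"): 365,
}

def purge_cutoff(table: str, environment: str, today: date) -> date:
    """Return the date before which rows in `table` may be cleaned up."""
    days = RETENTION_DAYS[(table, environment)]
    return today - timedelta(days=days)
```

A policy captured this way can be reviewed by auditors and reused when you configure the corresponding cleanup jobs.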

Implement your data retention strategy

With your data retention strategy in place, you can start implementing the actions you decided on—running standard cleanups, updating retention settings, configuring archive functions, or reaching out to your ISV or customization partner.

Keep in mind that implementing an effective strategy takes time. You need to test the effect of each action in a sandbox environment and coordinate with multiple stakeholders.

As you implement your strategy, here are some best practices to follow:

  • Delete or archive data only after all stakeholders have confirmed that it’s no longer required.
  • Consider the impact of the data life cycle on customizations, integrations, and reports.
  • Choose the date range or the amount of data to target in each cleanup or archive iteration based on the expected duration and performance of the cleanup or archiving routine, as determined by testing in a sandbox.
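The last practice, sizing each iteration, can be sketched as a simple date-window loop. Everything here is hypothetical: `delete_rows` stands in for whatever cleanup or archive routine you actually run, and the window size should come from your sandbox testing.

```python
from datetime import date, timedelta

def date_windows(start: date, end: date, days_per_batch: int):
    """Yield (window_start, window_end) pairs covering [start, end)."""
    current = start
    step = timedelta(days=days_per_batch)
    while current < end:
        window_end = min(current + step, end)
        yield current, window_end
        current = window_end

def run_cleanup(start: date, end: date, days_per_batch: int, delete_rows):
    """Run a cleanup or archive routine one date slice at a time, oldest first."""
    for win_start, win_end in date_windows(start, end, days_per_batch):
        delete_rows(win_start, win_end)  # e.g. a batched delete or archive job
```

Processing in bounded slices keeps each run short and predictable, so a long-running cleanup can be paused or rescheduled without losing progress.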

Need more help?

Creating a data maintenance strategy for Dynamics 365 finance and operations apps is a complex and ongoing task. It requires a thorough analysis and collaboration among different roles and departments. For help or guidance, contact your Microsoft representative for a Dynamics 365 finance and operations storage capacity assessment.

Learn more

Not yet a Dynamics 365 customer? Take a tour and start a free trial.

The post Create a data maintenance strategy for Dynamics 365 finance and operations data (part two) appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Microsoft Copilot in Azure – Unlock the benefits of Azure Database for MySQL with your AI companion



Microsoft Copilot in Azure (Public Preview) is an AI-powered tool that helps you do more with Azure. Copilot in Azure extends its capabilities to Azure Database for MySQL, allowing users to gain new insights, unlock untapped Azure functionality, and troubleshoot with ease. Copilot in Azure leverages large language models (LLMs) and the Azure control plane, all within the framework of Azure’s steadfast commitment to safeguarding the customer’s data security and privacy.

The experience now supports adding Azure Database for MySQL self-help skills to Copilot in Azure, empowering you with self-guided assistance and the ability to solve issues independently.

You can access Copilot in Azure right from the top menu bar in the Azure portal. Throughout a conversation, Copilot in Azure answers questions, suggests follow-up prompts, and makes high-quality recommendations, all while respecting your organization’s policy and privacy.

Discover new Azure Database for MySQL features with Microsoft Copilot in Azure

Explore when to enable new features to supplement real-life scenarios

Learn from summarized tutorials to enable features on-the-go

Troubleshoot your Azure Database for MySQL issues and get expert tips

Join the preview

To enable access to Microsoft Copilot in Azure for your organization, complete the registration form. You only need to complete the application process one time per tenant. Check with your administrator if you have questions about joining the preview.

For more information about the preview, see Limited access. Also be sure to review our Responsible AI FAQ for Microsoft Copilot in Azure.

Thank you!

Create a data maintenance strategy for Dynamics 365 finance and operations data (part one)



Data maintenance—understanding what data needs to be stored where and for how long—can seem like an overwhelming task. Cleanup routines can help, but a good data maintenance strategy will make sure that you’re using your storage effectively and avoiding overages. Data management in Dynamics 365 isn’t a one-size-fits-all solution. Your strategy will depend on your organization’s implementation and unique data footprint. In this post, the first of a two-part series, we describe the tools and features that are available in Dynamics 365 finance and operations apps to help you create an effective storage maintenance plan. Part two focuses on implementing your plan.

Your data maintenance team

Data maintenance is often thought to be the sole responsibility of system admins. However, managing data throughout its life cycle requires collaboration from all stakeholders. Your data maintenance team should include the following roles:

  • Business users. It goes without saying that users need data for day-to-day operations. Involving them in your planning helps ensure that removing old business data doesn’t interfere with business processes.
  • BI and reporting team. This team understands your reporting requirements. They can provide insights into what data is essential for operational reports and should be kept in live storage or can be exported to a data warehouse.
  • Customization team. Customizations might rely on data that’s targeted by an out-of-the-box cleanup routine. Your customization partner or ISV should test all customizations and integrations before you run a standard cleanup in the production environment.
  • Auditors and controllers. Even financial data doesn’t need to be kept indefinitely. The requirements for how long you need to keep posted data differ by region and industry. The controlling team or external auditors can determine when outdated data can be permanently purged.
  • Dynamics 365 system admins. Involving your admins in data maintenance planning allows them to schedule cleanup batch jobs during times when they’re least disruptive. They can also turn on and configure new features.
  • Microsoft 365 system admins. The finance and operations storage capacity report in the Power Platform admin center is helpful when you’re creating a data maintenance strategy, and these admins have access to it.

Tools for reviewing storage usage

After you assemble your team, the next step is to gather information about the size and footprint of your organization’s finance and operations data using the following tools:

  • The finance and operations storage capacity report shows the storage usage and capacity of your Dynamics 365 environments down to the table level.
  • Just-in-time database access allows you to access the database of a sandbox environment that has been recently refreshed from production. Depending on the storage actions you have set up or the time since the last database restore, the sandbox might not exactly match the production environment.
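If you export the capacity report’s table details for offline analysis, a few lines of scripting can rank the largest tables for your team to review. The CSV column names below are assumptions for illustration; the actual export format may differ.

```python
import csv
import io

# Sample export data -- in practice this would come from a file exported
# from the Power Platform admin center. The column names ("table",
# "size_gb") are illustrative assumptions, not the real export schema.
SAMPLE = """table,size_gb
SALESLINE,120.5
BATCHHISTORY,42.0
DOCUHISTORY,18.3
"""

def largest_tables(csv_text: str, top_n: int = 10):
    """Return (table, size_gb) pairs sorted by size, largest first."""
    rows = csv.DictReader(io.StringIO(csv_text))
    sized = [(row["table"], float(row["size_gb"])) for row in rows]
    return sorted(sized, key=lambda pair: pair[1], reverse=True)[:top_n]
```

The ranked list gives your data strategy team a concrete starting point for the per-table analysis described in part two.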

Features for managing storage

A comprehensive data maintenance strategy takes advantage of the data management features of Dynamics 365 finance and operations apps. The following features should be part of your plan.

Environment life cycle management is the process of creating, refreshing, and decommissioning sandbox environments according to your testing and development needs. Review your environments’ storage capacity and usage on the Finance and operations page of the capacity report.

The Finance and operations capacity report in the Power Platform admin center

Critically assess the environments and their usage and consider decommissioning sandboxes that you no longer need. For instance, if the system is post go-live, can you retire the training environment? Are performance tests less frequent and easier to run in the QA environment when users aren’t testing?

We highly recommend that you don’t skip the sandbox decommissioning discussion. Reducing the number of sandboxes has a far greater effect on total storage usage than any action that targets a specific table.

Cleanup routines are standard or custom functions that automatically delete temporary or obsolete data from the system.

Retention settings schedule automatic cleanup of certain data after a specified length of time. For example, document history includes a parameter that specifies the number of days to retain history. These cleanup routines might run as batch jobs or behind the scenes, invisible to admins.

Archiving functions move historical data to a separate storage location.

Compression routines reduce the size of data in storage. For example, the Compress payment tokens feature applies compression to stored payment property tokens.

Next step

In this post, we covered the roles and responsibilities of your data strategy team, tools for reviewing database storage, and data management features beyond cleanup routines. We suggested that you begin your planning process by reviewing your sandboxes. In part two, we discuss a strategy for specific tables and actions to take.

Learn more

Not yet a Dynamics 365 customer? Take a tour and start a free trial.

The post Create a data maintenance strategy for Dynamics 365 finance and operations data (part one) appeared first on Microsoft Dynamics 365 Blog.


Elevating email efficiency using Copilot in Dynamics 365 Sales and the rich text editor 



In today’s digital-first environment, effective communication is crucial for maintaining strong business relationships and driving sales success. Copilot in Dynamics 365 Sales enhances this aspect by integrating with the rich text editor, revolutionizing how professionals manage their email interactions. This blog delves into how Copilot’s capabilities can simplify and refine the email drafting process, ensuring every message is crafted to engage and convert.

Use Copilot to draft and adjust emails 

Copilot integrates seamlessly with the rich text editor, providing a sophisticated platform for composing emails. This integration facilitates the use of AI-driven suggestions during the drafting process, enabling quick creation of precise and impactful communications. The combination of the rich text editor’s user-friendly interface with Copilot’s intelligent recommendations bridges the gap between rapid email drafting and maintaining content quality.

AI-Powered drafting for enhanced precision and relevance

The seller can prompt Copilot to draft an email 

Copilot transforms email drafting into a more efficient and targeted process. Leveraging AI, it offers contextual suggestions based on the customer’s interaction history and previous communications. This not only speeds up the drafting process but also ensures that each email is personalized and relevant, significantly enhancing the quality and effectiveness of outbound communications.

Dynamic adjustments for tailored email interactions

Adjust the length and the tone of the email using Copilot 

Beyond basic drafting, the rich text editor equipped with Copilot allows for dynamic adjustments to emails. For example, fine-tuning aspects like language, tone, and style to better match the recipient’s expectations and the specific sales context. This adaptive functionality ensures that each email is crafted to maximize engagement and impact, fostering stronger customer connections and driving superior business results.

Advancing email communications with Copilot

The synergy between Copilot in Dynamics 365 Sales and the rich text editor marks a significant advancement in how sales professionals handle email communications. By employing AI for both drafting and refining emails, sales teams can optimize their time on high-value sales activities. As businesses navigate the complexities of digital interactions, Copilot emerges as an indispensable tool, empowering sales organizations to achieve efficiency and effectiveness in their communication strategies.

Next steps

Read more on Copilot in D365 Sales email integration: 

Add the Copilot control to the rich text editor  

Use Copilot in the email rich text editor  

Add the rich text editor control to a model-driven app  

Manage feature settings – Power Platform   

Not a Dynamics 365 Sales customer yet? Take a guided tour and sign up for a free trial at Dynamics 365 Sales overview.   

The post Elevating email efficiency using Copilot in Dynamics 365 Sales and the rich text editor  appeared first on Microsoft Dynamics 365 Blog.


Recover an ADCS platform from compromise


The crucial role of backup and restore in ADCS


Active Directory Certificate Services (ADCS) is a pivotal component of identity and access management (IAM), playing a critical role in ensuring secure authentication and encryption. These functionalities are integral for fostering trust across the enterprise application and service ecosystem. In modern organizations, the significance of Active Directory Certificate Services has grown exponentially, fortifying digital identities, communication channels, and data. Given its pervasive role, the potential loss of this service due to systemic identity compromise or a ransomware attack could be catastrophic. Microsoft advocates that platform owners adopt an “assume breach” mindset as a proactive measure against these sophisticated cybersecurity threats, preserving the confidentiality, integrity, and availability of IAM-based services. 


 


As part of an “assume breach” approach, organizations must prioritize comprehensive backup and restore strategies within their ADCS infrastructure. These strategies are paramount for ensuring swift recovery and restoration of essential certificate services following a cyberattack or data breach. By keeping up-to-date backups and implementing effective restoration procedures, organizations can minimize downtime, mitigate potential damage, and uphold operational continuity amidst evolving security challenges. 


 


Let us look at some of the services and features of an ADCS platform which organizations are dependent on: 


 



  • Certificate enrollment and renewal: ADCS facilitates automated enrollment and renewal processes, ensuring prompt issuance and rotation of cryptographic keys to maintain security. 
     

  • Key archival and recovery: Organizations can utilize ADCS to archive private keys associated with issued certificates, enabling authorized personnel to recover encrypted data or decrypt communications when necessary. 
     

  • Certificate revocation and management: ADCS provides mechanisms for revoking and managing certificates in real-time, allowing organizations to promptly respond to security incidents or unauthorized access attempts. 


 



  • Public Key Infrastructure (PKI) integration: ADCS seamlessly integrates with existing PKI infrastructures, enabling organizations to use established cryptographic protocols and standards to enhance security across their networks. 
      

  • Enhanced security controls: ADCS offers advanced security controls, such as role-based access control (RBAC) and audit logging, empowering organizations to enforce granular access policies and maintain visibility into certificate-related activities.


Now that we know what this service offers, imagine your organization as a fortified stronghold, wherein Active Directory Certificate Services and Active Directory Domain Services form the bedrock of the identity and access management infrastructure. If a cybersecurity breach penetrates this stronghold, the backup and restoration process acts as a crucial defensive measure. It is not merely about restoring ADCS services: it is about swiftly and effectively rebuilding the stronghold. This guarantees the continuation of trust relationships and the seamless operation of vital IT services within the stronghold, such as remote access VPNs, consumer web services, and third-party self-service password reset tools, each of which is essential for operational continuity, customer experience, and business productivity. Without effective backup measures, the stronghold is vulnerable, lacking protective mechanisms akin to a portcullis or moat. 


 


The significance of thoroughly assessing all backup and recovery procedures cannot be overstated. This is akin to conducting regular fire drills, ensuring that the IT team is adept and prepared to respond to crises effectively. IT administrators must have the requisite knowledge and readiness to execute restoration operations swiftly, thereby upholding the integrity and security of the organization’s IT environment. Additionally, recognizing the potential exploitation of ADCS for maintaining persistence underscores the imperative for vigilance in monitoring and securing ADCS components against unauthorized manipulation or access.  


What are the key elements for a successful backup and recovery?


From a technical perspective, Active Directory Certificate Services (ADCS) backups must cover the foundational pillars of the service. These include the private key, the Certificate Authority (CA) database, the server configuration (registry settings) and the CAPolicy.inf file. Let us explain each in detail:



  • CA private key: The most critical logical part of a CA is its private key material. This key is stored in an encrypted state on the local file system by default. The use of devices like Hardware Security Modules (HSMs) is encouraged to protect this material. The private key is static, so it is recommended to create a backup directly after the deployment and to store it in a safe, redundant location.


  • CA database: By default, this repository holds a copy of all issued certificates, every revoked certificate, and a copy of failed and pending requests. If the CA is configured for Key Archival and recovery, the database will include the private keys for those issued certificates whose templates are configured for the feature.


  • Server configuration: These are the settings and configurations that dictate ADCS operations. From security policies to revocation lists settings, safeguarding the server configurations ensures that the ADCS can be restored with identical functionality.


  • CAPolicy.inf: The CAPolicy.inf file is used during the setup of ADCS and then during CA certificate renewal. This file may be used to specify default settings, prevent default template installation, define the hierarchy, and specify a Certificate Policy and Practice Statement.



How is ADCS backed up?


A practical approach to performing a backup involves utilizing ‘certutil,’ a command-line tool integrated into the Windows operating system. This tool offers a range of functions tailored for managing certificates and certificate services. Other methods encompass employing the graphical user interface (GUI) or PowerShell. To start a backup of the CA database using ‘certutil,’ adhere to the outlined example below:


 

certutil -backupdb -backupkey "C:\BackupFolder"

 


The command syntax is as follows:


 



  • backupdb: Starts the backup process for the database.

  • backupkey: Safeguards the private key of the CA (requires providing a password).

  • C:\BackupFolder: Specifies the path where the backup will be stored. It is important to use a secure location, ideally on a separate drive or device. Note: this folder must be empty.


Running this command starts the creation of a backup encompassing the CA database and the CA’s private key, thereby guaranteeing the safeguarding of the fundamental elements of the CA. Safeguarding these components is imperative, as malevolent actors may exploit the backup for nefarious purposes.


 


In addition to preserving the CA Database and the CA’s private key, for comprehensive restoration onto a new server, it is crucial to back up the registry settings associated with ADCS using the following command:  


 

reg export "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CertSvc\Configuration" C:\BackupFolder\CAConfig.reg

 


All of these settings, including the earlier location of the CA database and the configurations related to certificate validity, CRL, and AIA extensions, can be utilized during the recovery process.


 


If the source CA utilizes a custom CAPolicy.inf, it is advisable to copy the file to the same backup location. The CAPolicy.inf file is typically found in the %windir% directory (default location being C:\Windows).
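To reduce the chance of missing a component, the three backup artifacts described above (database and key, registry settings, CAPolicy.inf) can be gathered with one scripted checklist. The sketch below merely assembles the command lines shown in this article; the backup path is a placeholder, and you should verify each command in your own environment before running it.

```python
# Builds the ADCS backup commands described in this article, for review
# or for embedding in a script. The backup folder is a placeholder.
def backup_commands(backup_dir: str = r"C:\BackupFolder") -> list[str]:
    registry_key = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet"
                    r"\services\CertSvc\Configuration")
    return [
        # CA database and private key (certutil asks for a password)
        f'certutil -backupdb -backupkey "{backup_dir}"',
        # ADCS registry settings
        f'reg export "{registry_key}" {backup_dir}\\CAConfig.reg',
        # Custom CAPolicy.inf, if one exists
        f'copy %windir%\\CAPolicy.inf {backup_dir}',
    ]
```

Printing the list gives administrators a reviewable runbook, which also doubles as documentation of exactly what the backup contains.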


How can the service be restored?


Remove ADCS role(s)


If the source server is still available and a CA Backup is available, remove the CA role from it. This is required for Enterprise CAs that are domain-joined. If present, remove the “Web Server” based roles/features before the Certification Authority role.


 


Remove the source server from the domain


Reusing the same host name on the destination server requires that the source server either be renamed or removed from the domain and the associated computer object removed from Active Directory before renaming and joining the destination server to the domain.


 


Adding ADCS role(s)


After ensuring that the destination server has the correct hostname and is successfully integrated into the domain, continue to assign the CA role to it. If the destination server is already part of the domain, it needs Enterprise Admin permission to configure the ADCS role as an Enterprise CA.


Before advancing, transfer the backup folder to a local drive, and, if accessible, move the original CAPolicy.inf file to the %windir% folder on the destination server.



  • Launch the Add Roles wizard from Server Manager.

  • Review the “Before You Begin” page, then select Next.

  • On the “Select Server Roles” page, select Active Directory Certificate Services, then Next, then Next again on the Intro to ADCS page.

  • On the “Select Role Services” page, ensure only Certificate Authority is selected, then click Next. (Do not choose any other roles)



Configuring ADCS:


Now configure a clean ‘empty’ CA. This is done prior to restoring the configuration and database content:



  • Select the choice to “Configure Active Directory Certificate Services on this server.”

  • Confirm that the correct credentials are in place depending on the installation: Local Admin for Standalone CA, Enterprise Administrator needed for Enterprise certification authority.

  • Check the box for “Certification Authority.”

  • Select the desired option based on the source CA configuration (“Standalone” or “Enterprise”) on the “Specify Setup Type” page, then click “Next.”

  • Select “Root” or “Subordinate CA” on the “Specify CA Type” page, then click “Next.”

  • Select “Use existing key” on the “Set Up Private Key” page, then click “Next.”

  • Import the Private key from the backup folder copied previously. Select the key and click “Next.”

  • Configure the desired path on the “Configure Certificate Database” page, then select “Next,” then “Install.”


At this point we have restored the CA and have an empty database with default server settings.



  • Open “Certificate Authority” manager from Server Manager or from Administrative Tools.

  • Expand “Certificate Authority (Local)” right click “CAName,” and select “All Tasks,” and click on “Restore CA.”

  • Click “OK” to stop the service.

  • Select “Next” on the “Welcome to the Certification Authority Restore Wizard.”

  • Check only “Certificate Database” and “Certificate Database Log,” click “Browse” and target the backup folder “C:\BackupFolder,” then click “Next” and “Finish” and wait until the restore completes.

  • Click “Yes” to continue and start the service.

  • Expand “Certificate Authority (Local)” right click “CAName” and select “Issued Certificates” to verify the database was restored.



Restore registry settings:


After the database is restored, import the configuration settings that were backed up from the source CA’s registry.



  • Create a registry backup of the destination server:

reg export "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CertSvc\Configuration" C:\BackupFolder\DestinationCAConfig.reg


  • Locate the “C:\BackupFolder\CAConfig.reg” file and double-click it to merge the settings, click “Yes” to continue and then “OK” on the Registry Editor confirmation window.

  • Restart the ADCS Service to verify the restored settings.

  • After everything is verified, restart the server to ensure it belongs to the “Cert Publishers” group.



Verify server status:



  • Open “Certificate Authority” manager from Server Manager or from Administrative Tools.

  • Expand “Certificate Authority (Local),” then “CAName” right click “Revoked Certificates” select “All tasks” then “Publish.” Select “OK” at the popup.

  • Run pkiview.msc (Enterprise PKI) to verify the health of the destination CA server.



Test certificate issuance:


With the CA restored, test certificate issuance to ensure full functionality.



  • Publish any templates that were published before and verify that certificates are issued as expected.


Note: We recommend assessing all certificate templates to confirm security settings and, where possible, to reduce the number of templates.


Conclusion


This article highlights the necessity of establishing and upholding a robust backup-and-restore strategy as a primary defence mechanism against cyber threats. Without such a strategy, it becomes much more likely that recovery of ADCS will not be successful and that a complete rebuild will be required.


In addition to this, adopting a defence-in-depth approach is equally imperative. This involves implementing supplementary protective measures such as endpoint detection and response through Defender for Endpoint (MDE), or monitoring user and entity behaviour analytics with Microsoft Defender for Identity (MDI). These measures empower cybersecurity operatives to swiftly respond across multiple phases of MITRE ATT&CK, thereby safeguarding the organization’s digital ecosystem, particularly the pivotal identity and access management services.


Integrating the strategic management of ADCS (Active Directory Certificate Services) with these advanced security solutions further strengthens organizational defences against the continually evolving landscape of cyber threats. This strategy augments the resilience of the cybersecurity framework and ensures the continuity and integrity of organizational operations, particularly during the transition to a more secure ADCS infrastructure.


In conclusion, the adoption of a robust backup and restoration strategy, complemented by a multi-faceted defence framework that integrates ADCS management with innovative security solutions, creates a formidable shield against cyber threats. This approach bolsters cybersecurity resilience and fortifies organizational continuity and operational integrity in the dynamic landscape of evolving security challenges.