This article is contributed. See the original author and article here.
A well-defined data maintenance strategy improves the quality and performance of your database and reduces storage costs. In part one of this series, we covered the roles and responsibilities of your data strategy team, tools for reviewing storage usage, and data management features in Dynamics 365 finance and operations apps that your strategy should include. We recommended that you start your planning by decommissioning unneeded sandbox environments in your tenant. In this post, we focus on creating a data retention strategy for tables as part of your overall storage maintenance strategy.
Create a data retention strategy for tables
After sandbox environments, tables have the greatest impact on total storage volume. Your data maintenance strategy should include a plan for how long to retain the data in specific tables, especially the largest ones—but don’t overlook smaller, easily manageable tables.
The Finance and operations capacity report showing database usage by table
Identify the largest tables in your production environment. For each one, determine which members of your data strategy team should be involved and what action to take based on the table's data category. The following table provides an example analysis.
Log and temporary data
Action: This category of data is temporary by design unless it's affected by a customization or used in a report. Run standard cleanup after testing in a sandbox. Note: If reports are built on temporary data, consider revisiting this design decision.
Stakeholders: System admin; customization partner or team if customized; BI and reporting team

Log and temporary data with retention settings
Example tables: DOCUHISTORY, SYSEMAILHISTORY
Action: This data is temporary by design but has an automatically scheduled cleanup. Most automatic jobs have a retention setting. Review the retention parameters and update them after testing in a sandbox.
Stakeholders: System admin; customization partner or team if customized

Log data used for auditing purposes
Example tables: SYSDATABASELOG
Action: Establish which department uses the log data and discuss acceptable retention parameters and cleanup routines.
Stakeholders: System admin; business users; controllers and auditors

Workbook data
Action: This data isn't temporary by design, but it's duplicated when posted as financial data. Discuss with the relevant department how long workbook data is required in the system, then consider cleaning up or archiving data in closed periods.
Stakeholders: System admin; business users related to the workbook module; BI and reporting team for operational and financial reports

Columns with tokens or large data formats
Example tables: CREDITCARDAUTHTRANS
Action: Some features have in-application compression routines to reduce the size of data. Review the compression documentation and determine what data is suitable for compression.
Stakeholders: System admin; business users

Financial data in closed periods
Example tables: GENERALJOURNALACCOUNTENTRY
Action: Eventually you can remove even financial data from the system. Confirm with the controlling team or auditors when data can be permanently purged or archived outside of Dynamics 365.
Stakeholders: System admin; controllers and auditors; financial business unit; BI and reporting team for financial reports

Log or workbook data in ISV or custom tables
Example tables: Names typically start with the ISV's three-letter moniker
Action: Discuss ISV or custom code tables with their developers.
Stakeholders: System admin; customization partner or team; ISV; BI and reporting team, depending on the customization
Consider whether table data needs to be stored
For each large table, continue your analysis with the following considerations:
Current business use: Is the data used at all? For instance, was database logging turned on by accident or for a test that’s been completed?
Retention per environment: Evaluate how long data should be in Dynamics 365 per environment. For instance, your admin might use 30 days of batch history in the production environment to look for trends but would be content with 7 days in a sandbox.
Data life cycle after Dynamics 365: Can the data be purged? Should it be archived or moved to long-term storage?
With the results of your analysis, your data strategy team can determine a retention strategy for each table.
Implement your data retention strategy
With your data retention strategy in place, you can start implementing the actions you decided on—running standard cleanups, updating retention settings, configuring archive functions, or reaching out to your ISV or customization partner.
Keep in mind that implementing an effective strategy takes time. You need to test the effect of each action in a sandbox environment and coordinate with multiple stakeholders.
As you implement your strategy, here are some best practices to follow:
Delete or archive data only after all stakeholders have confirmed that it’s no longer required.
Consider the impact of the data life cycle on customizations, integrations, and reports.
Choose the date range or the amount of data to target in each cleanup or archive iteration based on the expected duration and performance of the cleanup or archiving routine, as determined by testing in a sandbox.
Need more help?
Creating a data maintenance strategy for Dynamics 365 finance and operations apps is a complex and ongoing task. It requires a thorough analysis and collaboration among different roles and departments. For help or guidance, contact your Microsoft representative for a Dynamics 365 finance and operations storage capacity assessment.
This article is contributed. See the original author and article here.
Microsoft Copilot in Azure (Public Preview) is an AI-powered tool that helps you do more with Azure. Copilot in Azure extends its capabilities to Azure Database for MySQL, allowing users to gain new insights, unlock untapped Azure functionality, and troubleshoot with ease. Copilot in Azure leverages large language models (LLMs) and the Azure control plane, all within the framework of Azure's steadfast commitment to safeguarding the customer's data security and privacy.
The experience now supports adding Azure Database for MySQL self-help skills to Copilot in Azure, empowering you with self-guided assistance and the ability to solve issues independently.
You can access Copilot in Azure right from the top menu bar in the Azure portal. Throughout a conversation, Copilot in Azure answers questions, suggests follow-up prompts, and makes high-quality recommendations, all while respecting your organization’s policy and privacy.
For a short demo of this new capability, watch the following video!
Discover new Azure Database for MySQL features with Microsoft Copilot in Azure
Explore when to enable new features to supplement real-life scenarios
Learn from summarized tutorials to enable features on-the-go
Troubleshoot your Azure Database for MySQL issues and get expert tips
Join the preview
To enable access to Microsoft Copilot in Azure for your organization, complete the registration form. You only need to complete the application process one time per tenant. Check with your administrator if you have questions about joining the preview.
This article is contributed. See the original author and article here.
Data maintenance—understanding what data needs to be stored where and for how long—can seem like an overwhelming task. Cleanup routines can help, but a good data maintenance strategy will make sure that you’re using your storage effectively and avoiding overages. Data management in Dynamics 365 isn’t a one-size-fits-all solution. Your strategy will depend on your organization’s implementation and unique data footprint. In this post, the first of a two-part series, we describe the tools and features that are available in Dynamics 365 finance and operations apps to help you create an effective storage maintenance plan. Part two focuses on implementing your plan.
Your data maintenance team
Data maintenance is often thought to be the sole responsibility of system admins. However, managing data throughout its life cycle requires collaboration from all stakeholders. Your data maintenance team should include the following roles:
Business users. It goes without saying that users need data for day-to-day operations. Involving them in your planning helps ensure that removing old business data doesn’t interfere with business processes.
BI and reporting team. This team understands reporting requirements. They can provide insights into which data is essential for operational reports and must stay in live storage, and which data can be exported to a data warehouse.
Customization team. Customizations might rely on data that’s targeted by an out-of-the-box cleanup routine. Your customization partner or ISV should test all customizations and integrations before you run a standard cleanup in the production environment.
Auditors and controllers. Even financial data doesn’t need to be kept indefinitely. The requirements for how long you need to keep posted data differ by region and industry. The controlling team or external auditors can determine when outdated data can be permanently purged.
Dynamics 365 system admins. Involving your admins in data maintenance planning allows them to schedule cleanup batch jobs during times when they’re least disruptive. They can also turn on and configure new features.
Microsoft 365 system admins. The finance and operations storage capacity report in the Power Platform admin center is helpful when you're creating a data maintenance strategy, and these admins have access to it.
Tools for reviewing storage usage
After you assemble your team, the next step is to gather information about the size and footprint of your organization’s finance and operations data using the following tools:
Just-in-time database access allows you to access the database of a sandbox environment that has been recently refreshed from production. Depending on the storage actions you have set up or the time since the last database restore, the sandbox might not exactly match the production environment.
Features for managing storage
A comprehensive data maintenance strategy takes advantage of the data management features of Dynamics 365 finance and operations apps. The following features should be part of your plan.
Environment life cycle management is the process of creating, refreshing, and decommissioning sandbox environments according to your testing and development needs. Review your environments’ storage capacity and usage on the Finance and operations page of the capacity report.
The Finance and operations capacity report in the Power Platform admin center
Critically assess the environments and their usage and consider decommissioning sandboxes that you no longer need. For instance, if the system is post go-live, can you retire the training environment? Are performance tests less frequent and easier to run in the QA environment when users aren’t testing?
We highly recommend that you don’t skip the sandbox decommissioning discussion. Reducing the number of sandboxes has a far greater effect on total storage usage than any action that targets a specific table.
Cleanup routines are standard or custom functions that automatically delete temporary or obsolete data from the system.
Retention settings schedule automatic cleanup of certain data after a specified length of time. For example, document history includes a parameter that specifies the number of days to retain history. These cleanup routines might run as batch jobs or behind the scenes, invisible to admins.
Compression routines reduce the size of data in storage. For example, the Compress payment tokens feature applies compression to stored payment property tokens.
Next step
In this post, we covered the roles and responsibilities of your data strategy team, tools for reviewing database storage, and data management features beyond cleanup routines. We suggested that you begin your planning process by reviewing your sandboxes. In part two, we discuss a strategy for specific tables and actions to take.
This article is contributed. See the original author and article here.
In today’s digital-first environment, effective communication is crucial for maintaining strong business relationships and driving sales success. Copilot in Dynamics 365 Sales enhances this aspect by integrating with the rich text editor, revolutionizing how professionals manage their email interactions. This blog delves into how the Copilot’s capabilities can simplify and refine the email drafting process, ensuring every message is crafted to engage and convert.
Use Copilot to draft and adjust emails
Copilot integrates seamlessly with the rich text editor, providing a sophisticated platform for composing emails. This integration facilitates the use of AI-driven suggestions during the drafting process, enabling quick creation of precise and impactful communications. The combination of the rich text editor's user-friendly interface with Copilot's intelligent recommendations bridges the gap between rapid email drafting and maintaining content quality.
AI-Powered drafting for enhanced precision and relevance
The seller can prompt Copilot to draft an email
Copilot transforms email drafting into a more efficient and targeted process. Leveraging AI, it offers contextual suggestions based on the customer’s interaction history and previous communications. This not only speeds up the drafting process but also ensures that each email is personalized and relevant, significantly enhancing the quality and effectiveness of outbound communications.
Dynamic adjustments for tailored email interactions
Adjust the length and the tone of the email using Copilot
Beyond basic drafting, the rich text editor equipped with Copilot allows for dynamic adjustments to emails. For example, sellers can fine-tune aspects like language, tone, and style to better match the recipient's expectations and the specific sales context. This adaptive functionality ensures that each email is crafted to maximize engagement and impact, fostering stronger customer connections and driving superior business results.
Advancing email communications with Copilot
The synergy between Copilot in Dynamics 365 Sales and the rich text editor marks a significant advancement in how sales professionals handle email communications. By employing AI for both drafting and refining emails, sales teams can optimize their time on high-value sales activities. As businesses navigate the complexities of digital interactions, Copilot emerges as an indispensable tool, empowering sales organizations to achieve efficiency and effectiveness in their communication strategies.
Next steps
Read more on Copilot in D365 Sales email integration:
This article is contributed. See the original author and article here.
The crucial role of backup and restore in ADCS
Active Directory Certificate Services (ADCS) serves as a pivotal part within identity and access management (IAM), playing a critical role in ensuring secure authentication and encryption. These functionalities are integral for fostering trust across the enterprise application and service ecosystem. In modern organizations, the significance of Active Directory Certificate Services has grown exponentially, fortifying digital identities, communication channels and data. Given its pervasive role, the potential loss of this service due to systemic identity compromise or a ransomware attack could be catastrophic. Microsoft advocates that platform owners adopt an "assume breach" mindset as a proactive measure against these sophisticated cybersecurity threats, to ensure and preserve the confidentiality, integrity, and availability of IAM-based services.
As part of an “assume breach” approach, organizations must prioritize comprehensive backup and restore strategies within their ADCS infrastructure. These strategies are paramount for ensuring swift recovery and restoration of essential certificate services following a cyberattack or data breach. By keeping up-to-date backups and implementing effective restoration procedures, organizations can minimize downtime, mitigate potential damage, and uphold operational continuity amidst evolving security challenges.
Let us look at some of the services and features of an ADCS platform which organizations are dependent on:
Certificate enrollment and renewal: ADCS facilitates automated enrollment and renewal processes, ensuring prompt issuance and rotation of cryptographic keys to maintain security.
Key archival and recovery: Organizations can utilize ADCS to archive private keys associated with issued certificates, enabling authorized personnel to recover encrypted data or decrypt communications when necessary.
Certificate revocation and management: ADCS provides mechanisms for revoking and managing certificates in real-time, allowing organizations to promptly respond to security incidents or unauthorized access attempts.
Public Key Infrastructure (PKI) integration: ADCS seamlessly integrates with existing PKI infrastructures, enabling organizations to use established cryptographic protocols and standards to enhance security across their networks.
Enhanced security controls: ADCS offers advanced security controls, such as role-based access control (RBAC) and audit logging, empowering organizations to enforce granular access policies and keep visibility into certificate-related activities.
Now that we know what this service offers, imagine your organization as a fortified stronghold, wherein Active Directory Certificate Services and Active Directory Domain Services form the bedrock of the identity and access management infrastructure. If a cybersecurity breach penetrates this stronghold, the backup and restoration process acts as a crucial defensive measure. It is not merely about restoring ADCS services: it is about swiftly and effectively rebuilding the stronghold. This guarantees the continuation of trust relationships and the seamless operation of vital IT services within the stronghold, such as remote access VPNs, consumer web services, and third-party self-service password reset tools, each of which is essential for operational continuity, customer experience, and business productivity. Without effective backup measures, the stronghold is vulnerable, lacking protective mechanisms akin to a portcullis or moat.
The significance of thoroughly assessing all backup and recovery procedures cannot be overstated. This is akin to conducting regular fire drills, ensuring that the IT team is adept and prepared to respond to crises effectively. IT administrators must have the requisite knowledge and readiness to execute restoration operations swiftly, thereby upholding the integrity and security of the organization's IT environment. Additionally, recognizing the potential exploitation of ADCS for maintaining persistence underscores the imperative for vigilance in monitoring and securing ADCS components against unauthorized manipulation or access.
What are the key elements for a successful backup and recovery?
From a technical perspective, Active Directory Certificate Services (ADCS) backups must cover the foundational pillars of the service. These include the private key, the Certificate Authority (CA) database, the server configuration (registry settings) and the CAPolicy.inf file. Let us explain each in detail:
CA private key: The most critical logical part of a CA is its private key material. This key is stored in an encrypted state on the local file system by default. The use of devices like Hardware Security Modules (HSMs) is encouraged to protect this material. The private key is static, so it is recommended to create a backup directly after the deployment and to store it in a safe, redundant location.
CA database: By default, this repository holds a copy of all issued certificates, every revoked certificate, and a copy of failed and pending requests. If the CA is configured for Key Archival and recovery, the database will include the private keys for those issued certificates whose templates are configured for the feature.
Server configuration: These are the settings and configurations that dictate ADCS operations. From security policies to revocation lists settings, safeguarding the server configurations ensures that the ADCS can be restored with identical functionality.
CAPolicy.inf: The CAPolicy.inf file is used during the setup of ADCS and then during CA certificate renewal. This file may be used to specify default settings, prevent default template installation, define the hierarchy, and specify a Certificate Policy and Practice Statement.
How is ADCS backed up?
A practical approach to performing a backup involves utilizing certutil, a command-line tool built into the Windows operating system. This tool offers a range of functions tailored for managing certificates and certificate services. Other methods include using the graphical user interface (GUI) or PowerShell. To start a backup of the CA database using certutil, follow the example below:
certutil -backupdb -backupkey "C:\BackupFolder"
The command syntax is as follows:
backupdb: Starts the backup process for the database.
backupkey: Safeguards the private key of the CA (requires providing a password).
C:\BackupFolder: Specifies the path where the backup will be stored. It is important to use a secure location, ideally on a separate drive or device. Note: this folder must be empty.
Running this command creates a backup encompassing the CA database and the CA's private key, the fundamental elements of the CA. Safeguard the backup itself with the same care, as malevolent actors may exploit it for nefarious purposes.
In addition to preserving the CA database and the CA's private key, for comprehensive restoration onto a new server, it is crucial to back up the registry settings associated with ADCS. A typical export of the standard CertSvc registry key, producing the CAConfig.reg file used in the restore steps below, looks like this:
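reg export HKLM\SYSTEM\CurrentControlSet\Services\CertSvc C:\BackupFolder\CAConfig.reg
The key path above is the default location of the CA configuration; adjust the output path to match your backup folder.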
Settings such as the previous location of the CA database, as well as configurations related to certificate validity and the CRL and AIA extensions, can then be reused during the recovery process.
If the source CA utilizes a custom CAPolicy.inf, it is advisable to replicate the file to the identical backup location. The CAPolicy.inf file is typically found in the %windir% directory (by default, C:\Windows).
How can the service be restored?
Remove ADCS role(s)
If the source server is still accessible and a CA backup is available, remove the CA role from it. This is required for Enterprise CAs that are domain-joined. If present, remove the “Web Server” based roles/features before the Certification Authority role.
Remove the source server from the domain
Reusing the same host name on the destination server requires that the source server either be renamed or removed from the domain and the associated computer object removed from Active Directory before renaming and joining the destination server to the domain.
Adding ADCS role(s)
After ensuring that the destination server has the correct hostname and is successfully integrated into the domain, continue to assign the CA role to it. If the destination server is already part of the domain, it needs Enterprise Admin permission to configure the ADCS role as an Enterprise CA.
Before advancing, transfer the backup folder to a local drive, and, if accessible, move the original CAPolicy.inf file to the %windir% folder on the destination server.
Launch the Add Roles wizard from Server Manager.
Review the “Before You Begin” page, then select Next.
On the “Select Server Roles” page, select Active Directory Certificate Services, then Next, then Next again on the Intro to ADCS page.
On the “Select Role Services” page, ensure only Certificate Authority is selected, then click Next. (Do not choose any other roles)
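These wizard steps can also be scripted. A minimal PowerShell sketch that installs only the Certification Authority role service, mirroring the selections above (run from an elevated session):
# Installs the ADCS Certification Authority role service and its management tools
Install-WindowsFeature -Name ADCS-Cert-Authority -IncludeManagementTools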
Configuring ADCS:
Now configure a clean ‘empty’ CA. This is done prior to restoring the configuration and database content:
Select the choice to “Configure Active Directory Certificate Services on this server.”
Confirm that the correct credentials are in place depending on the installation: Local Admin for Standalone CA, Enterprise Administrator needed for Enterprise certification authority.
Check the box for “Certification Authority.”
Select the desired option based on the source CA configuration (“Standalone” or “Enterprise”) on the “Specify Setup Type” page, then click “Next.”
Select “Root” or “Subordinate CA” on the “Specify CA Type” page, then click “Next.”
Select “Use existing key” on the “Set Up Private Key” page, then click “Next.”
Import the Private key from the backup folder copied previously. Select the key and click “Next.”
Configure the desired path on the “Configure Certificate Database” page, then select “Next,” then “Install.”
At this point we have restored the CA and have an empty database with default server settings.
Open “Certificate Authority” manager from Server Manager or from Administrative Tools.
Expand “Certificate Authority (Local),” right-click “CAName,” select “All Tasks,” and click “Restore CA.”
Click “OK” to stop the service.
Select “Next” on the “Welcome to the Certification Authority Restore Wizard.”
Check only “Certificate Database” and “Certificate Database Log,” click “Browse,” and target the backup folder (“C:\BackupFolder”). Click “Next,” then “Finish,” and wait until the restore completes.
Click “Yes” to continue and start the service.
Expand “Certificate Authority (Local),” expand “CAName,” and select “Issued Certificates” to verify that the database was restored.
Restore registry settings:
After the database is restored, import the configuration settings that were backed up from the source CA’s registry.
Create a registry backup of the destination server before merging, so the default configuration can be rolled back if needed; a typical command (the output file name here is illustrative):
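reg export HKLM\SYSTEM\CurrentControlSet\Services\CertSvc C:\BackupFolder\CertSvc-default.reg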
Locate the “C:\BackupFolder\CAConfig.reg” file and double-click it to merge the settings. Click “Yes” to continue, then “OK” on the Registry Editor confirmation window.
Restart the ADCS Service to verify the restored settings.
After everything is verified, restart the server to ensure it belongs to the “Cert Publishers” group.
Verify server status:
Open “Certificate Authority” manager from Server Manager or from Administrative Tools.
Expand “Certificate Authority (Local),” then “CAName.” Right-click “Revoked Certificates,” select “All Tasks,” then “Publish.” Select “OK” at the popup.
Run pkiview.msc (Enterprise PKI) to verify the health of the destination CA server.
Test certificate issuance:
With the CA restored, test certificate issuance to ensure full functionality.
Publish any templates that were published before, and ensure that certificates are issued as expected.
Note: We recommend assessing all certificate templates to confirm security settings and to reduce the number of templates if possible.
Conclusion
This article highlights the necessity of setting up and upholding a robust backup-and-restore strategy as a primary defence mechanism against cyber threats. Without one, it becomes much more likely that recovery of ADCS will not be successful and that a complete rebuild will be required.
In addition to this, adopting a defence-in-depth approach is equally imperative. This involves implementing supplementary protective measures such as endpoint detection and response through Defender for Endpoint (MDE), or monitoring user and entity behaviour analytics with Microsoft Defender for Identity (MDI). These measures empower cybersecurity operatives to swiftly respond across multiple phases of MITRE ATT&CK, thereby safeguarding the organization’s digital ecosystem, particularly the pivotal identity and access management services.
Integrating the strategic management of ADCS (Active Directory Certificate Services) with these advanced security solutions further strengthens organizational defences against the continually evolving landscape of cyber threats. This strategy augments the resilience of the cybersecurity framework and ensures the continuity and integrity of organizational operations, particularly during the transition to a more secure ADCS infrastructure.
In conclusion, the adoption of a robust backup and restoration strategy, complemented by a multi-faceted defence framework that integrates ADCS management with innovative security solutions, creates a formidable shield against cyber threats. This approach bolsters cybersecurity resilience and fortifies organizational continuity and operational integrity in the dynamic landscape of evolving security challenges.
This article is contributed. See the original author and article here.
Microsoft Copilot is already helping individual employees boost productivity, creativity and time savings. With the announcements at Microsoft Build 2024, we’re delivering an entirely new set of capabilities that unlock Copilot’s ability to drive bottom-line business results for every organization.
This article is contributed. See the original author and article here.
In today’s fast-paced sales landscape, prioritizing core selling activities over low-value tasks is crucial. Time spent on tasks that don’t directly contribute to sales represents missed opportunities to connect with prospects and close deals. With Dynamics 365 Sales, we’re committed to using AI to support sellers in focusing their time on what truly matters: forging meaningful connections, establishing trust, and nurturing long-term relationships to increase their sales productivity. Copilot empowers sellers to achieve greater results with less effort, enhancing your sales organization’s effectiveness. We’re happy to share that the following features are releasing this month.
Copilot chat Q&A in Dynamics 365 Sales
Copilot chat with Q&A transforms how sellers access data in your customer relationship management (CRM) system. Instead of building complicated queries or manually searching for information, sellers can ask questions using natural language. They can access vital information immediately, allowing them to focus on high-value activities like engaging customers and closing deals. The result is more time for meaningful interactions, potentially leading to higher conversion rates and increased revenue.
Natural-language Q&A is particularly valuable in fast-paced sales environments, ensuring quick, informed actions. This feature elevates customer interactions, positioning teams for higher sales productivity. Its impact extends beyond convenience, shaping the efficiency and effectiveness of the entire sales process.
Copilot chat in Dynamics 365 Sales makes it easy to retrieve information from Dataverse and your CRM system.
Sales-specific chat experience
One of the key features of Copilot in Dynamics 365 Sales is that the chat experience is specific to the sales process. Sellers can use common sales terms and phrases to ask questions and get answers from the CRM system, without having to navigate through complex menus or screens. This saves time and effort for sellers, allowing them to focus on their customers and prospects.
Some of the sales terms that Copilot understands are conversion rate, deal cycle, pipeline, deal size, win rate, and deal value. Sellers and managers can use these terms to query various aspects of the sales process, like the performance of individual sellers, teams, or regions, the progress of opportunities, and the trends and forecasts of sales outcomes. Copilot can also handle complex queries with multiple terms, filters, and aggregations.
For example, you can ask Copilot:
“Show the opportunity conversion rate for the last 4 quarters by quarter.”
“What’s the win rate for Kenny Smith?”
“What is the average deal size for successful opportunities?”
Copilot in Dynamics 365 Sales understands sales-specific terms expressed in natural language.
These examples illustrate how Copilot can help sellers access relevant information from your CRM system in a natural and intuitive way, using sales-specific terms in a chat experience. Copilot chat Q&A enhances your sales team’s productivity and efficiency and their ability to meaningfully engage with customers and prospects.
Your CRM data is always secure
Copilot respects the security and user access privilege settings of your CRM system. This means that if a seller doesn’t have permission to view or edit certain records, those records aren’t included in Copilot’s responses. For example, if you ask Copilot about the pipeline value for a region that you aren’t assigned to, Copilot informs you that you don’t have sufficient privileges to view the requested data. This ensures that Copilot maintains the integrity and confidentiality of your CRM data while providing insights and recommendations.
Immersive Copilot workspace
We are also launching the public preview of a new immersive Copilot experience in Dynamics 365 Sales. An expanded workspace enhances focus on productive conversations with Copilot, while real-time insights and effortless natural language chat functionality help sellers efficiently manage sales activities, nurture customer relationships, and drive sales success. Seamless access to insights from CRM data simplifies prioritizing actions and smarter decision-making.
The new immersive Copilot workspace in Dynamics 365 Sales helps sellers focus on sales activities.
The immersive experience works in sync with the Copilot chat pane. Start a conversation in the immersive workspace, select a record, and continue the conversation in the Copilot chat. The coherent experience makes it easy to navigate in the app without losing context.
Use the immersive workspace
The immersive experience is in preview so that we can make improvements based on your valuable feedback. To use the immersive experience in your environment, you’ll need to turn on preview features for Copilot in Dynamics 365 Sales. In the Sales Hub app, Copilot is automatically added to the site map under My Work. If you use a custom app, add the Copilot page to your app’s site map. To enter the immersive workspace, select My Work > Copilot.
Enter Copilot in immersive mode through the site map in Sales Hub or your custom app.
Transform your sales processes with Copilot
Copilot in Dynamics 365 Sales helps your sellers save time and stay focused on the things that really matter. They get the information they need faster with less context switching, making their day-to-day activities more efficient and boosting your team’s overall sales productivity.
This article is contributed. See the original author and article here.
Sellers are often faced with situations where they need to sift through a lot of information to find the one piece they need. There are often extensive knowledge bases where sellers need to search for information, and lots of precious time is lost in the process.
We are here to help with that!
With our new features outlined below, sellers can access relevant sales information from SharePoint through the Copilot chat interface in Dynamics 365 Sales.
By automating the extraction of critical insights from sales documents, Copilot in Dynamics 365 Sales frees up valuable time for sales teams to focus on nurturing leads, closing deals, and delivering exceptional customer experiences. With Copilot in Dynamics 365 Sales, businesses can streamline their sales processes, gain deeper customer insights, and ultimately drive greater revenue growth. Copilot in D365 Sales empowers sales teams to work smarter, not harder, and achieve unparalleled efficiency in their daily operations.
Contextual content recommendations
With this feature, the system seamlessly reads the CRM context, and intelligently recommends relevant product and account-related files. For example, sellers are provided with content recommendations regarding the products added to opportunities. From PDFs to Word documents and PowerPoint presentations, the Copilot pane in D365 Sales provides instant access to the most pertinent sales materials, empowering sales reps to make informed decisions and deliver personalized experiences to customers. This could include sales pitch decks, account strategy collaterals, product brochures and training materials that are made available to sellers. As a result, sales interactions are tailored and impactful, driving stronger customer engagement and business growth.
“Show product-related files” appears as a trailing prompt to opportunity summary
Users effortlessly access contextual file recommendations in Copilot in D365 Sales by selecting from the sparkle icon (marked in the image below) or typing queries in their preferred language. Sorted by relevance, the latest files and most popular results appear first. Files can be viewed, downloaded, or shared via email, ensuring seamless collaboration. Additionally, users can specify keywords for targeted searches, enhancing efficiency while upholding data security. Copilot in D365 Sales respects user permissions, displaying only accessible SharePoint files.
Access related files in Copilot in D365 Sales – through sparkles menu, natural language prompts, associated products.
SharePoint Q&A
Sellers can now easily navigate through sales documents and literature by simply asking questions. Leveraging Azure OpenAI technology, Copilot in D365 Sales swiftly scans through data and literature, summarizing pertinent information from SharePoint documents. This seamless integration empowers sellers to swiftly access insights, enhancing productivity and enabling quick, informed responses to customer inquiries.
Invoke SharePoint Q&A and get summaries from relevant documents, with citations of references.
In Copilot in D365 Sales, accessing answers is seamlessly integrated with your SharePoint documents. Simply type your question in the Copilot pane using natural language and hit Enter – no need to navigate through any of your files and folders! For instance, inquire about warranty periods or prices directly. Copilot initiates a search in SharePoint. Should the answer reside in one or more files in SharePoint, Copilot offers a concise response alongside links to relevant documents, ensuring comprehensive insights are just a click away.
Next steps
Increasing your sales team’s efficiency could be as simple as having all the information just a click away!
Not a Dynamics 365 Sales customer yet? Take a guided tour and sign up for a free trial at Dynamics 365 Sales overview.
AI solutions built responsibly.
Enterprise grade data privacy at its core. Azure OpenAI offers a range of privacy features, including data encryption and secure storage. It allows users to control access to their data and provides detailed auditing and monitoring capabilities. Copilot is built on Azure OpenAI, so enterprises can rest assured that it offers the same level of data privacy and protection.
Responsible AI by design. We are committed to creating responsible AI by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are putting those principles into practice across the company to develop and deploy AI that will have a positive impact on society.
This article is contributed. See the original author and article here.
We’re thrilled to announce that Bicep templates for Microsoft Graph resources will be in public preview starting May 21st. Bicep templates bring declarative infrastructure-as-code (IaC) capabilities to Microsoft Graph resources. This new capability will initially be available for core Microsoft Entra ID resources.
Bicep templates for Microsoft Graph resources allow you to define the tenant infrastructure you want to deploy, such as groups or applications, in a file, then use the file throughout the development lifecycle to repeatedly deploy your infrastructure. The file uses the Bicep language, a domain-specific language (DSL) with declarative syntax for deploying resources, typically used in DevOps and infrastructure-as-code solutions.
What problems does this solve?
Azure Resource Manager (ARM) or Bicep templates allow you to declare Microsoft Azure resources in files and deploy those resources into your infrastructure. Configuring and managing your Azure services and infrastructure often includes managing Microsoft Entra ID resources, like applications and groups. Until now, you had to orchestrate your deployments between two mechanisms: ARM or Bicep template files for Azure resources, and Microsoft Graph PowerShell for Microsoft Entra ID resources.
Now, with the Microsoft Graph Bicep release, you can declare the Microsoft Entra ID resources in the same Bicep files as your Azure resources, making configurations easier to define, and deployments more reliable and repeatable.
Let’s look at how this works and then we’ll run through an example.
The Microsoft Graph Bicep extension
To provide support for Bicep templates for Microsoft Graph resources, we have released the new Microsoft Graph Bicep extension that allows you to author, deploy, and manage supported Microsoft Graph resources (initially Microsoft Entra ID resources) in Bicep template files either on their own, or alongside Azure resources.
Authoring experience
You get the same first-class authoring experience of the Bicep Extension for VS Code when you use it to create your Microsoft Graph resource types in Bicep files. The editor provides rich type-safety, IntelliSense, and syntax validation.
Editing a Bicep file containing Microsoft Graph resources
Once you have authored your Bicep file, you can deploy it using familiar tools such as Azure PowerShell and the Azure CLI. When the deployment request is made to Azure Resource Manager, the deployments engine orchestrates the deployment of interdependent resources so they're created in the correct order, including the Microsoft Graph resources.
The following image shows a Bicep template file where the Microsoft Graph group creation is dependent on the managed identity resource, as it is being added as a group member. The deployments engine first sends the managed identity request to the Resource Manager, which routes it to the Microsoft.ManagedIdentity resource provider. Next, the deployments engine sees that Microsoft.Graph/groups is an extensible resource, so it knows to route this resource request to the Microsoft Graph Bicep extension. The Microsoft Graph Bicep extension then translates the groups resource request into a request to Microsoft Graph.
Deploying a Bicep file containing Microsoft Graph resources
Scenario: Using managed identities with security groups and app roles
Using a Microsoft Entra ID group to assign roles to managed identities
Until now, however, this configuration wasn't possible using a Bicep or Resource Manager template. With the Microsoft Graph Bicep extension, this limitation is removed. Rather than assigning and managing multiple Microsoft Azure role assignments, role assignments can be managed via a security group through a single Bicep file.
Bicep file declaring a Microsoft Entra ID group with a managed identity member
In the example above, a security group can be created and referenced, with managed identities as its members. With Bicep templates for Microsoft Graph resources, declaring Microsoft Graph and Microsoft Azure resources together in the same Bicep file enables new deployment scenarios and simplifies existing ones, bringing reliable and repeatable deployments.
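A minimal sketch of such a file, using hypothetical resource names (the extension declaration keyword has varied across Bicep preview releases, appearing as provider in early versions):

extension microsoftGraph

// The managed identity is a regular Azure resource handled by the
// Microsoft.ManagedIdentity resource provider.
resource mi 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: 'example-mi'
  location: resourceGroup().location
}

// The group is an extensible Microsoft Graph resource. Referencing the
// identity's principalId creates an implicit dependency, so the deployments
// engine creates the managed identity first.
resource appRoleGroup 'Microsoft.Graph/groups@v1.0' = {
  displayName: 'Example app role group'
  mailEnabled: false
  mailNickname: 'exampleAppRoleGroup'
  securityEnabled: true
  uniqueName: 'exampleAppRoleGroup' // client-provided key that keeps redeployments idempotent
  members: [mi.properties.principalId]
}

The file then deploys with the familiar tools mentioned earlier, for example: az deployment group create --resource-group <rg> --template-file main.bicep.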
This article is contributed. See the original author and article here.
Introduction
As ransomware attacks grow in number and sophistication every year, threat actors can quickly impact business operations if organizations are not well prepared. In this blog, we detail an investigation into a ransomware event. During this intrusion the threat actor progressed through the full attack chain, from initial access through to impact, in less than five days, causing significant business disruption for the victim organization.
During the investigation, the Microsoft Incident Response team (formerly known as DART) identified the threat actor employing a range of tools and techniques to achieve their objectives, including:
Exploitation of unpatched internet exposed Microsoft Exchange Servers
Web Shell deployment facilitating remote access
Use of living-off-the-land tools for persistence and reconnaissance
Cobalt Strike beacons for command and control
Process Hollowing and the use of vulnerable drivers for defense evasion
Deployment of custom developed backdoors to facilitate persistence
Deployment of a custom developed data collection and exfiltration tool
Forensic analysis
Initial Access
To obtain initial access to the victim's environment, the threat actor was observed exploiting known vulnerabilities (ProxyShell) on unpatched Microsoft Exchange servers:
CVE-2021-34473
CVE-2021-34523
CVE-2021-31207
The exploitation of these vulnerabilities allowed the threat actor to:
Attain SYSTEM-level privileges on the compromised Exchange host
Enumerate the LegacyDN of users by sending Autodiscover requests, including the SIDs of users
Construct a valid authentication token and use it against the Exchange PowerShell backend
Impersonate domain admin users and create a web shell by using the New-MailboxExportRequest cmdlet
Create web shells in order to obtain remote control of the affected servers
The threat actor was observed operating from the following IP address to exploit ProxyShell and access the web shell:
185.225.73[.]244
Persistence
Backdoor
Microsoft IR identified the creation of Registry Run Keys, a common persistence mechanism employed by threat actors to maintain access to a compromised device, where a payload is executed each time a specific user logs in.
The file api-msvc.dll, detected by Microsoft Defender Antivirus as Trojan:Win32/Kovter!MSR, was determined to be a backdoor capable of collecting system information such as installed antivirus products, device name, and IP address. This information is then sent via HTTP POST request to a command and control (C2) channel.
Unfortunately, the organization was not using Microsoft Defender as its primary AV/EDR solution, which prevented it from taking action against the malicious code.
An additional file, api-system.png, was identified with similarities to api-msvc.dll. This file behaved like a DLL, had the same default export function, and also leveraged Run keys for persistence.
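As a hedged illustration, a Microsoft Defender for Endpoint advanced hunting query along the following lines can surface similar Run key persistence (this assumes Defender for Endpoint telemetry is available; the file name comes from this investigation):

DeviceRegistryEvents
| where RegistryKey has @"\Microsoft\Windows\CurrentVersion\Run"
| where RegistryValueData has "api-msvc.dll"
| project Timestamp, DeviceName, RegistryKey, RegistryValueName, RegistryValueData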
Cobalt Strike Beacon
The threat actor leveraged Cobalt Strike, a common commercial penetration testing tool, to achieve persistence. The file sys.exe, detected by Microsoft Defender Antivirus as Trojan:Win64/CobaltStrike!MSR, was determined to be a Cobalt Strike beacon and was downloaded directly from the file sharing service temp.sh:
hxxps://temp[.]sh/szAyn/sys.exe
This beacon was configured to communicate with the following command and control (C2) channel:
AnyDesk
Microsoft IR frequently observes threat actors leveraging legitimate remote access tools during an intrusion, in an effort to blend in on a victim network. In this case, the threat actor utilized AnyDesk, a common remote administration tool, to maintain persistence and move laterally within the network. AnyDesk was installed as a service and was executed from the following paths:
C:\systemtest\anydesk\AnyDesk.exe
C:\Program Files (x86)\AnyDesk\AnyDesk.exe
C:\Scripts\AnyDesk.exe
Successful connections were observed in AnyDesk Logs (ad_svc.trace) involving anonymizer service IP addresses linked to TOR and MULLVAD VPN. This is a common technique that actors employ to obscure their source IP ranges.
Reconnaissance and Privilege Escalation
Microsoft IR found that the threat actor used the network discovery tool NetScan to perform network enumeration, under the following executable names:
Evidence of likely Mimikatz usage, a credential theft tool commonly used by threat actors, was also uncovered through the presence of a related log file, mimikatz.log.
Microsoft IR assesses that Mimikatz was likely used to attain credentials for privileged accounts.
Lateral Movement
Using compromised domain admin credentials, the threat actor used Remote Desktop Protocol and PowerShell Remoting to obtain access to other servers in the environment, including domain controllers.
Data Staging and Data Exfiltration
A suspicious file named “explorer.exe” was identified. The file was recognized by Microsoft Defender Antivirus as “Trojan:Win64/WinGoObfusc.LK!MT” and quarantined, but after disabling the Microsoft Defender Antivirus service, the threat actor was able to execute the file using the following command:
Explorer.exe was reverse engineered by Microsoft IR and determined to be ExByte, a GoLang-based tool developed and commonly used in BlackByte ransomware attacks for collection and exfiltration of files from victim networks.
The binary is capable of enumerating files of interest across the network, and upon execution creates a log file containing a list of files and associated metadata.
Multiple log files were uncovered during the investigation in the path:
C:\Exchange\MSExchLog.log
Analysis of the binary revealed a list of file extensions which are targeted for enumeration.
Binary analysis showing file extensions enumerated by explorer.exe
Forensic analysis identified a file named data.txt that was created and later deleted after ExByte execution. This file contained obfuscated credentials that ExByte leveraged to authenticate to the popular file sharing platform MEGA NZ via its API at:
hxxps://g.api.mega.co[.]nz
Binary analysis showing explorer.exe functionality for connecting to file sharing service MEGA NZ
Microsoft IR also determined that this tool was crafted specifically for the victim, as it contained a hardcoded device name belonging to the victim and an internal IP address.
Execution Flow
Upon execution, ExByte decodes several strings and checks whether it is running with privileged access by attempting to read \\.\PHYSICALDRIVE0.
If this check fails, ShellExecuteW is invoked with the lpOperation parameter RunAs, which relaunches explorer.exe with elevated privileges.
After this access check, explorer.exe attempts to read the data.txt file in its current location.
If the text file doesn't exist, it invokes a command for self-deletion and exits from memory.
If data.txt exists, explorer.exe reads the file, passes the buffer to a Base64 decode function, and then decrypts the data using the key provided on the command line. The decrypted data is parsed as the JSON below and fed to the login function:
{
  "a": "us0",
  "user": ""
}
Finally, it forms a URL for login to the API of the file sharing service MEGA NZ:
hxxps://g.api.mega.co[.]nz/cs?id=1674017543
Data Encryption and Destruction
Microsoft IR found several devices where files had been encrypted and identified suspicious executables, detected by Microsoft Defender Antivirus as Trojan:Win64/BlackByte!MSR, with the following names:
wEFT.exe
schillerized.exe
The files were analyzed and determined to be BlackByte 2.0 binaries responsible for encryption across the environment. This binary requires an 8-digit key number to encrypt files.
Two modes of execution were identified:
When the -s parameter is provided, the ransomware self-deletes and encrypts the machine it was executed on
When the -a parameter is provided, the ransomware conducts enumeration and uses a UPX-packed version of PsExec to deploy across the network.
Several domain admin credentials were hardcoded in the binary, facilitating the deployment of the binary across the network.
Depending on the switch (-s or -a), execution may create the following files:
C:\SystemData\M8yl89s7.exe (random name; UPX-packed PsExec)
Some capabilities identified for the BlackByte 2.0 ransomware were:
AV/EDR Bypass:
The file rENEgOtiAtES matches RTCore64.sys, a vulnerable driver (CVE-2019-16098) that allows any authenticated user to read and write arbitrary memory.
The BlackByte binary then creates and starts a service named RABAsSaa that calls rENEgOtiAtES, and exploits this service to evade detection by installed AV/EDR software.
Process Hollowing
Invokes svchost.exe, injects into it to complete device encryption, and self-deletes by executing the following command:
The table below shows IOCs observed during our investigation. We encourage our customers to investigate these indicators in their environments and implement detections and protections to identify past related activity and prevent future attacks against their systems.
185.225.73[.]244: Originating IP address for ProxyShell exploitation and web shell interaction
NOTE: These indicators should not be considered exhaustive for this observed activity.
Detections
Microsoft 365 Defender
Microsoft Defender Antivirus
Trojan:Win32/Kovter!MSR
Trojan:Win64/WinGoObfusc.LK!MT
Trojan:Win64/BlackByte!MSR
HackTool:Win32/AdFind!MSR
Trojan:Win64/CobaltStrike!MSR
Microsoft Defender for Endpoint
Microsoft Defender for Endpoint customers should watch for these alerts that can detect behavior observed in this campaign. Note however that these alerts are not indicative of threats unique to the campaign or actor groups described in this report.
‘CVE-2021-31207’ exploit malware was detected
An active ‘NetShDisableFireWall’ malware in a command line was prevented from executing.
Suspicious registry modification.
‘Rtcore64’ hacktool was detected
Possible ongoing hands-on-keyboard activity (Cobalt Strike)
A file or network connection related to a ransomware-linked emerging threat activity group detected
Suspicious sequence of exploration activities
A process was injected with potentially malicious code
DeviceProcessEvents
| where ProcessCommandLine has_any ("ExcludeDumpster","New-ExchangeCertificate") and ProcessCommandLine has_any ("-RequestFile","-FilePath")
Suspicious Vssadmin Events
DeviceProcessEvents
| where ProcessCommandLine has_any ("vssadmin","vssadmin.exe") and ProcessCommandLine has "Resize ShadowStorage" and ProcessCommandLine has_any ("MaxSize=401MB","MaxSize=UNBOUNDED")
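In the same spirit, a hedged hunting sketch based on the temp.sh download observed earlier (again assuming Defender for Endpoint network telemetry is available):

DeviceNetworkEvents
| where RemoteUrl has "temp.sh"
| project Timestamp, DeviceName, InitiatingProcessFileName, RemoteUrl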
Conclusions
BlackByte ransomware attacks continue to target organizations whose infrastructure has old, unpatched vulnerabilities, allowing the attackers to accomplish their objectives with minimal effort. According to Shodan, at the time this blog was written, nearly 3,300 public-facing servers were still affected by the ProxyShell vulnerabilities, making them an easy target for threat actors looking to impact organizations around the world.
As Microsoft shows in the Microsoft Digital Defense Report, key practices like "keep up to date," in conjunction with the other good practices of a basic security hygiene strategy, can protect against 98 percent of attacks.
As threat actors develop new tools, a modern threat protection solution such as Microsoft 365 Defender is necessary to prevent and detect the multiple techniques used in the attack chain, especially where the threat actor attempts to evade or disable specific defense mechanisms.
Hunting for malicious behavior should be performed regularly to detect potential attacks that could evade detections, as a complement to continuous monitoring of security tool alerts and incidents.
To understand how Microsoft can help you secure your network and respond to network compromise, visit https://aka.ms/MicrosoftIR.
Appendix
Encryption
The following file extensions are targeted by the BlackByte binary for encryption:
.4dd
.4dl
.accdb
.accdc
.accde
.accdr
.accdt
.accft
.adb
.ade
.adf
.adp
.arc
.ora
.alf
.ask
.btr
.bdf
.cat
.cdb
.ckp
.cma
.cpd
.dacpac
.dad
.dadiagrams
.daschema
.db
.db-shm
.db-wal
.db3
.dbc
.dbf
.dbs
.dbt
.dbv
.dbx
.dcb
.dct
.dcx
.ddl
.dlis
.dp1
.dqy
.dsk
.dsn
.dtsx
.dxl
.eco
.ecx
.edb
.epim
.exb
.fcd
.fdb
.fic
.fmp
.fmp12
.fmpsl
.fol
.fp3
.fp4
.fp5
.fp7
.fpt
.frm
.gdb
.grdb
.gwi
.hdb
.his
.ib
.idb
.ihx
.itdb
.itw
.jet
.jtx
.kdb
.kexi
.kexic
.kexis
.lgc
.lwx
.maf
.maq
.mar
.masmav
.mdb
.mpd
.mrg
.mud
.mwb
.myd
.ndf
.nnt
.nrmlib
.ns2
.ns3
.ns4
.nsf
.nv
.nv2
.nwdb
.nyf
.odb
.ogy
.orx
.owc
.p96
.p97
.pan
.pdb
.pdm
.pnz
.qry
.qvd
.rbf
.rctd
.rod
.rodx
.rpd
.rsd
.sas7bdat
.sbf
.scx
.sdb
.sdc
.sdf
.sis
.spg
.sql
.sqlite
.sqlite3
.sqlitedb
.te
.temx
.tmd
.tps
.trc
.trm
.udb
.udl
.usr
.v12
.vis
.vpd
.vvv
.wdb
.wmdb
.wrk
.xdb
.xld
.xmlff
.abcddb
.abs
.abx
.accdw
.and
.db2
.fm5
.hjt
.icg
.icr
.kdb
.lut
.maw
.mdn
.mdt
File extensions targeted by BlackByte binary for encryption
The following shared folders are also targeted for encryption: