This article is contributed. See the original author and article here.
Introduction
Hi folks! My name is Felipe Binotto, Cloud Solution Architect, based in Australia.
We all know how frustrating it can be to receive a call about a storage account not replicating or being unable to fail over. To help prevent this from happening, I am going to show you how to monitor the replication of your storage accounts. Keep in mind that replication logs are not available as part of the storage account’s diagnostic settings.
Pre-Requisites
Before we begin, please ensure that you have the following prerequisites in place:
Azure Subscription
Automation Account
Log Analytics workspace
High-Level Steps
The process for monitoring storage account replication can be broken down into several high-level steps, which we will go through in the following order:
Clone the repository that contains the runbook.
Create a user-assigned managed identity.
Provide the identity with the necessary access.
Assign the identity to the Automation Account.
Import the runbook to the Automation Account.
Provide values for the Runbook variables.
Create a new Automation Account variable.
Run the runbook to retrieve the storage account replication information.
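The commands for the first three steps (cloning the repository, creating the user-assigned managed identity, and granting it access) can be scripted. The following is a minimal sketch only: the repository URL, identity name, and role are placeholders, it assumes the Az.ManagedServiceIdentity and Az.Resources modules, and it grants Reader at subscription scope; the exact role and scope depend on what the runbook needs to read.
# --- Sketch only: repository URL, names, and role are placeholders ---
# 1. Clone the repository that contains the runbook
git clone https://github.com/REPLACE_WITH_REPO_URL
Set-Location ./REPLACE_WITH_REPO_FOLDER
# 2. Create a user-assigned managed identity (Az.ManagedServiceIdentity module)
$identity = New-AzUserAssignedIdentity -ResourceGroupName REPLACE_WITH_YOUR_RG `
    -Name 'storage-replication-monitor' -Location 'australiaeast'
# 3. Give the identity read access over the subscription so it can enumerate storage accounts
New-AzRoleAssignment -ObjectId $identity.PrincipalId `
    -RoleDefinitionName 'Reader' `
    -Scope "/subscriptions/REPLACE_WITH_YOUR_SUBSCRIPTION_ID"
# Keep the identity's resource ID for the assignment step below
$id = $identity.Id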
Now assign the identity to the Automation Account.
# Get the Automation Account
$automationAccount = Get-AzAutomationAccount -ResourceGroupName REPLACE_WITH_YOUR_RG -Name REPLACE_WITH_YOUR_AA
# Assign the user-assigned identity to the Automation Account
# ($id is the resource ID of the identity created in the previous step)
$automationAccount | Set-AzAutomationAccount -Identity $id
Import the runbook into your Automation Account. Make sure you run the next command from the folder into which the repository was cloned.
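A minimal sketch of that import command follows, assuming the runbook file in the cloned folder is named StorageReplicationHealth.ps1 (the real file and runbook names come from the repository):
# Import the runbook from the cloned folder and publish it
Import-AzAutomationRunbook -ResourceGroupName REPLACE_WITH_YOUR_RG `
    -AutomationAccountName REPLACE_WITH_YOUR_AA `
    -Name 'StorageReplicationHealth' `
    -Path './StorageReplicationHealth.ps1' `
    -Type PowerShell `
    -Published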
Open the script in VS Code, or edit it directly in your Automation Account. I will now highlight some of the important sections so you have a clear understanding of what is going on.
The script collects replication logs for your Azure Storage Accounts and sends them to a Log Analytics workspace. With this data in place, you can monitor the replication of your Storage Accounts, be alerted to any issues, and act before they become a problem.
The script starts by setting some variables, including the ID of the Log Analytics workspace, the primary key for authentication, and the name of the record type that will be created.
The primary key is retrieved from an Automation Account variable, so we don’t expose it in clear text. Run the following command to create the variable.
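A sketch of that command, assuming the script looks for a variable named LogAnalyticsPrimaryKey (check the repository for the exact name the runbook expects):
# Store the workspace primary key as an encrypted Automation Account variable
New-AzAutomationVariable -ResourceGroupName REPLACE_WITH_YOUR_RG `
    -AutomationAccountName REPLACE_WITH_YOUR_AA `
    -Name 'LogAnalyticsPrimaryKey' `
    -Value 'REPLACE_WITH_YOUR_WORKSPACE_PRIMARY_KEY' `
    -Encrypted $true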
The script then defines two functions: Build-Signature and Post-LogAnalyticsData.
The Build-Signature function creates an authorization signature that will be used to authenticate the request to the Log Analytics API. The function takes in several parameters, including the ID of the Log Analytics workspace, the primary key, the date, the content length, the method, the content type, and the resource.
The Post-LogAnalyticsData function creates and sends the request to the Log Analytics API. This function takes in the ID of the Log Analytics workspace, the primary key, the body of the request (which contains the replication logs), the log type, and the timestamp field.
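Both functions follow the standard Log Analytics HTTP Data Collector API pattern. The sketch below paraphrases what Build-Signature does; the version in the repository is authoritative.
function Build-Signature ($customerId, $sharedKey, $date, $contentLength, $method, $contentType, $resource) {
    # String to sign: method, payload size, content type, x-ms-date header and resource path
    $xHeaders     = 'x-ms-date:' + $date
    $stringToHash = $method + "`n" + $contentLength + "`n" + $contentType + "`n" + $xHeaders + "`n" + $resource
    # The workspace primary key is Base64 encoded; the signature is an HMAC-SHA256 over the string above
    $hmac     = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = [Convert]::FromBase64String($sharedKey)
    $hash     = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToHash))
    # Authorization header value: SharedKey <workspaceId>:<signature>
    return ('SharedKey {0}:{1}' -f $customerId, [Convert]::ToBase64String($hash))
}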
The script also includes a line of code (Disable-AzContextAutosave) that ensures that the runbook does not inherit an AzContext, which can cause issues when running the script.
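For reference, that line typically looks like the following, scoped to the current process so the runbook sandbox does not reuse a cached context:
# Ensure the runbook does not inherit a cached Azure context
Disable-AzContextAutosave -Scope Process | Out-Null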
Finally, the script calls the Post-LogAnalyticsData function, sending the replication logs to the Log Analytics workspace.
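For context, the replication data itself comes from each storage account's geo-replication statistics. The following is a simplified sketch of that collection-and-post step; the Post-LogAnalyticsData parameter names and the record layout are assumptions here, and the script in the repository is authoritative.
# Collect the last sync time for each geo-replicated storage account
$records = foreach ($sa in Get-AzStorageAccount) {
    $stats = (Get-AzStorageAccount -ResourceGroupName $sa.ResourceGroupName `
        -Name $sa.StorageAccountName -IncludeGeoReplicationStats).GeoReplicationStats
    if ($stats) {
        [pscustomobject]@{
            Storage              = $sa.StorageAccountName
            Storage_Status       = $stats.Status
            Storage_LastSyncTime = $stats.LastSyncTime
        }
    }
}
# Post the records to the workspace as the StorageReplicationHealth custom log type
# ($customerId and $sharedKey hold the workspace ID and primary key set earlier in the script)
$body = [Text.Encoding]::UTF8.GetBytes(($records | ConvertTo-Json))
Post-LogAnalyticsData -customerId $customerId -sharedKey $sharedKey -body $body -logType 'StorageReplicationHealth'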
At this point, you can run the runbook. Once the logs have been sent, you can create Azure Alerts based on KQL queries to notify you of any issues with the replication.
For example, the following query returns Storage Accounts that have not replicated in the last 8 hours.
StorageReplicationHealth_CL
| where todatetime(Storage_LastSyncTime_s) < ago(8h)
In Part 2 of this post, I will demonstrate how you can leverage Logic Apps to send out customized emails when your Storage Account is not replicating.
Conclusion
In conclusion, monitoring the replication of your Azure Storage Accounts is crucial to ensure the availability and reliability of your data. In this blog post, we have shown you how to set up monitoring for your Storage Accounts using Log Analytics and Azure Automation. By following the steps outlined in this post and using the provided script, you will be able to monitor the replication status of your Storage Accounts and receive alerts if there are any issues. This will allow you to act quickly and prevent any disruptions to your services. With this solution in place, you can have peace of mind knowing that your data is safe and available.
I hope this was informative to you and thanks for reading!
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
This article is contributed. See the original author and article here.
Odds are, if you are impacted by the Cybersecurity Maturity Model Certification (CMMC) mandates, you already know it. Odds are, if you are reading this post, you are doing research because you are impacted by the mandates. If you are impacted by the mandates, this post is for you. This post is to give you ideas that [we hope] help you on your compliance journey.
The open question is likely “how do I become compliant”? Ultimately, there are two options. But before we get to the options of how to become compliant, we first need to address the scope of what needs to become compliant.
What about scope?
There are thousands of other published pages on the scope of CMMC, and that’s not the point of this post. The point here is to state the following:
Today, you have N applications in your Portfolio
A subset (maybe 100%, and maybe a smaller percentage) of those applications and their data must be compliant with CMMC by certain dates depending on your contracts and business requirements
Every business that is beholden to the mandates needs to make a list of the applications (and data) that are in-scope. Many companies will do that rapid assessment on their own. Other companies will enlist the help of partners/vendors to help them move faster and more confidently. Either way is fine.
Once you have the list of apps (and data) that are in-scope for you, then what? Then, it is time to choose an option.
Option 1: Work on the running engine
The challenge with working on a running engine is the increased risk of losing a finger 😊. Honestly, if you had to spend time quantifying your Portfolio, then it stands to reason that there may be things you missed in that assessment. But leaving that point aside, there is always the option to assess every app, every piece of data, every server, every switch, etc. to become compliant. That is a very difficult journey because of years of technical debt. Can you really clean out all shared-credential service accounts in your environment without breaking something critical?
Option 2: Build a new engine and rapidly move to it
Surely there are exceptions, but we have yet to see one. The best answer [is usually] to build a new engine. And not only is the right answer to build a new engine; the right answer is to build that new engine in the cloud.
Why the cloud?
They are already compliant (e.g. Microsoft Azure Government [MAG] and Government Community Cloud High [GCCH])
You will not invest more in cybersecurity and compliance than Microsoft Cloud will, so they are, and will remain, more secure than you can be on your own
If you leverage the cloud, you then only have to worry about securing the pieces and parts that are unique to YOUR business: your enclave(s) and tenant(s), your application(s), your data.
Step A: Establish the Enclave(s) and Tenant(s)
Azure: Commercial Azure, Azure Government as IL2, Azure Government as IL4, Azure Government as IL5, or a combination
Which one(s) do you need?
How do you rapidly set them up and harden them?
How do you continuously monitor (and automatically respond) to anomalies that would take you out of compliance?
How do you give the auditor a real-time dashboard to speed up the audit(s)?
For every enclave and/or tenant, how will it be managed on Day 1? Day N? (often, the goal is to “manage it myself” on Day N, but folks are unclear and aren’t ready to manage it on Day 1)
Step B: Move Applications (and Data)
How do you prioritize your applications based on timelines and resourcing?
For each application, should it
Lift and Shift?
Have slight tweaks? (e.g. converted to PaaS? Converted to hardened containers per DevSecOps Reference Architecture and DoD Standards? Other?)
Rewrite?
Other?
For every application (and data), how will it be managed on Day 1? Day N? (Often, the goal is to “manage it myself” on Day N, but folks are unclear and aren’t ready to manage it on Day 1)
Step C: What about Client Devices?
Are your laptops and desktops managed in such a way that they are compliant?
What about mobile devices?
Can you detect and minimize spillage?
Do you understand your Data Loss posture?
Step D: What about Policies?
For example, is your Data Loss Prevention Policy where it needs to be for CMMC?
Are the written policies tactically implemented for the Enclaves, Tenants, Apps and Data defined as you establish the enclaves and move the applications?
Step E: What about Auditability?
When the auditor shows up, will you spend days and weeks with them, or will you show them your real-time dashboards?
When the auditor shows up, will you do tabletop exercises with them? Will you introduce an out-of-compliance server and watch the automation turn off the server? Will automation also create a security incident in parallel? Is it true that the only way to end up with an errant server in this new, pristine engine is that someone went around the process as defined by the policy?
Surely, you will choose Option 2.
Insource, Outsource or Hybrid?
Now, the only remaining question is whether you will figure it all out on your own or will you bring in someone to help you? Given the impact of getting it wrong and given the timeline, most companies will bring in someone to help them.
Which Partner?
There are two courses of action:
A. Pay someone to “consult” with you while you do the work yourself
B. Pay someone to do it for you, including Day 1 through Day N management
Most companies prefer B, but they assume that no such unicorn exists. And if they do believe a unicorn exists, they fear that they cannot afford it.
The ideal partner will help you in the following ways:
Rapidly define the in-scope apps and data
Ask a series of repeatable business questions
Rapidly establish the enclave(s) and tenant(s)….ideally by using automation to save you time and money
Rapidly move applications and data to the new enclave(s) and tenant(s) while making the necessary application tweaks (and being willing to take accountability for full application re-writes as necessary)….ideally using automation to refactor and/or re-write the apps
Manage the clients and mobile devices and/or work through and with your existing client/mobile team to take accountability for the client and mobile posture….ideally using automation
Manage the enclave(s), tenant(s), applications and data to keep them current and compliant….ideally using automation
Work through and with your Policy team(s) to update Policies as necessary to match the actual implementation
Stand at the ready to host your auditors when they show up …. ideally using automation
Partner Requirements
Already doing this same work in DoD IL5/CUI environments
Already doing this work in Commercial environments including for Defense Industrial Base
Already doing this work for small customers (e.g. 5 seats) through huge customers (e.g. 150k seats)
Willing to take the risk to do the work as Firm-Fixed-Fee on a committed timeline
Willing to commit to operations and maintenance pricing for years 2 through 5 (and beyond) on day 1
Willing to provide significant multi-year discounts
Call to action:
Quantify the applications (and data) that will fall within your CMMC scope
Leverage Microsoft Azure Government and GCCH to meet the requirements
Leverage an experienced partner to help you skip the learning curve
About the Author:
Carroll Moon is the CTO and Co-Founder of CloudFit Software. Prior to CloudFit, Carroll spent almost 18 years at Microsoft helping to build and run Microsoft’s Clouds. CloudFit Software aims to securely run every mission critical workload in the universe. CloudFit is a DoD company that also intentionally serves commercial companies. Commercial customers (including Microsoft’s Product Groups) keep CloudFit on the cutting edge of cloud and cloud apps—that makes CloudFit attractive to DoD customers. DoD customers require that CloudFit be a leader in cybersecurity—that makes CloudFit attractive to commercial customers. This intersection of DoD and Commercial uniquely positions CloudFit Software to help customers comply with cybersecurity mandates like CMMC, and the build-and-run-the-hyperscale-cloud pedigree of CloudFit’s executive team means that CloudFit is executing on their charter with software and automation rather than with people. CloudFit Software’s patented platform enables increased repeatability, decreased costs, increased availability and increased security in all areas from establishing hardened cloud enclaves to migrating (and re-factoring) workloads to operating securely in the cloud. Beyond the IT/Cloud charter, CloudFit Software exists to fund two 501c3 charities: KidFit (providing hope and opportunities to youth using sports as the enabler) and JobFit (providing hope and opportunities to adults and young adults using IT training and paid internships as the enablers). Carroll lives in Lynchburg, VA with his wife and two children. CMMC | CloudFit Software
This article is contributed. See the original author and article here.
The sales pipeline is a visual representation of where prospects are within the sales funnel. Managing the pipeline is one of the core activities of any seller; it helps sellers to stay organized and focused on moving deals forward. A seller who can successfully master the sales pipeline is likely to drive more revenue.
But mastering a sales pipeline is not easy, especially when sellers must balance multiple active deals, an array of contacts, and conversations across multiple channels, while trying to figure out when the next interaction will occur, what next steps are required, and which app or process will help accomplish the job.
The opportunity pipeline view in Dynamics 365 Sales is now available for public preview and offers an updated user experience by putting the seller at the center of their workflows, enabling them to view their full pipeline, gather context quickly, take action efficiently, and work in their preferred manner.
Let’s look at an overview of how to manage deals in Dynamics 365 Sales.
Visualizing deals on interactive charts
The easiest way to get an overview of the sales pipeline is by visualizing the deals on a chart. Charts form a key component of the opportunity pipeline view. Not only do they provide key insights on opportunities, but in the opportunity pipeline view these charts are interactive, update in real time and allow sellers to quickly locate and focus on the right deals.
In this release, two charts are available out of the box:
A bubble chart that allows sellers to track risky deals on a timeline.
A funnel chart that allows sellers to see where deals are in the sales process.
These charts are configurable by administrators. In future releases, we will introduce additional charts.
Keeping track of key metrics
Key performance indicators (KPIs) are another tool that keeps sellers informed. In the opportunity pipeline view, we’ve introduced tools to track and highlight metrics that help sellers stay on top of their most important KPIs. Sellers can choose from a subset of metrics, re-order them, or even create their own metrics.
A modern, seller-optimized grid experience
When it comes to managing deals, it’s no wonder that sellers often use spreadsheets. Spreadsheets provide a table view of all opportunities, with aggregation, quick filtering, sorting, grouping with pivot tables, re-ordering of columns, and the ability to edit fields inline easily. Unfortunately, data in a spreadsheet is static and not connected to the CRM.
The opportunity pipeline view comes with an inline grid that can be edited. This grid behaves just as a spreadsheet would. Sellers can:
Edit any cell inline
Filter by any column
Hide and show any column from the table
Sort records on the grid
Re-order the columns
Control access on the columns
Getting context without navigating away
With the opportunity pipeline view, useful information is easily accessible. When you select a record in the workspace, an optimized form appears in the side panel. This form contains a modern task management experience and provides useful information at a glance.
Administrators can customize the form to select the most relevant fields for your business.
Dynamics 365 Sales opportunity view
Steps to begin with the opportunity pipeline view
Where can I see the opportunity pipeline view? Click on the “show as” button on the top left in the command bar and select “pipeline view” from the dropdown.
What if I do not see the option in the dropdown? If you do not see “pipeline view” in the dropdown, ask your administrator to opt in for early access. Please refer to the documentation to opt in for early access.
Can I set the pipeline view as the default view? Yes, an administrator can set it as the default view. Please refer to the documentation to set the pipeline view as the default view.
What if I do not want to see it in the dropdown menu? Ask your administrator to disable the view.
Next steps
The opportunity pipeline view is available in early access now. For more information on how to enable the experience in your environment, read the documentation and watch a brief video.
This article is contributed. See the original author and article here.
Microsoft Learn offers a huge library of written content, including technical documentation and learning paths. But what if you need something a little more visual and demonstrative while learning new skills? Enter Microsoft Learn’s vast collection of video content.
Whether you’re searching for a walk-through of Microsoft Azure or wanting to know the newest trends within the tech world, Microsoft Learn offers a wide variety of unique video content. Produced as both stand-alone how-tos and episodic shows on Microsoft Learn, videos will help you attain new skills and knowledge while keeping up with the latest Microsoft technology.
Although Microsoft Learn offers content that fits learners at every stage of their journey, these seven videos in particular can help new users take the first step towards achieving their learning goals.
1. Getting Started with Microsoft Learn
New to Microsoft Learn and need some help navigating its content? Then this video is a must-watch. Take this virtual walk-through of Microsoft Learn with Ashley Johnson, Senior Technical Product Manager at Microsoft, to explore valuable features that can help you make the most of your experience.
If you need to prepare for a Microsoft Certification exam and you don’t know where to begin, check out this show that offers study tips, content overviews, and sample questions and answers for each featured exam.
Interact with Microsoft Azure engineers in real-time via livestreams. Geared towards helping you migrate or initiate new workloads in Azure, this series will give you added confidence when preparing for highly technical implementations.
Learn how to develop and optimize applications and processes with Microsoft Power Platform direct from industry experts. Focused on low code solutions, this series is a great resource for developers of all backgrounds.
Catch up on the latest trends and news snippets within the developer community in this engaging and informative series. Watch highlights of interesting projects and discover tips and tricks for developers of all backgrounds and skillsets.
This multi-part series introduces Microsoft Graph basics. Best of all, it features interactive exercises that showcase how to use Microsoft Graph for connecting Microsoft 365 data with app development platforms.
Learn about what’s new in artificial intelligence in this Friday evening series. Watch as host, Seth Juarez, works on machine learning and AI projects while offering tips for getting started on your own.
Whether you’re looking for a live demonstration of complex skills or a last-minute knowledge check before a certification exam, videos on Microsoft Learn are here to help. Check out what’s available today!
This article is contributed. See the original author and article here.
CISA has added one new vulnerability to its Known Exploited Vulnerabilities Catalog, based on evidence of active exploitation. This type of vulnerability is a frequent attack vector for malicious cyber actors and poses a significant risk to the federal enterprise. Note: To view the newly added vulnerabilities in the catalog, click on the arrow in the “Date Added to Catalog” column, which will sort by descending dates.
Although BOD 22-01 only applies to FCEB agencies, CISA strongly urges all organizations to reduce their exposure to cyberattacks by prioritizing timely remediation of Catalog vulnerabilities as part of their vulnerability management practice. CISA will continue to add vulnerabilities to the Catalog that meet the specified criteria.