System Page Latch Concurrency Enhancements (Ep. 6) | Data Exposed

This article is contributed. See the original author and article here.

Over the past several SQL Server releases, Microsoft has improved the concurrency and scalability of the tempdb database. Starting in SQL Server 2016, several improvements codified setup best practices: when there are multiple tempdb data files, all files autogrow at the same time and by the same amount.


 


Additionally, starting in SQL Server 2019, we added the memory-optimized tempdb metadata capability and eliminated most PFS contention by introducing concurrent PFS updates.


 


In SQL Server 2022, we are now addressing another common area of contention by introducing concurrent GAM and SGAM updates.


In previous releases, you may witness GAM contention when different threads want to allocate or deallocate extents represented on the same GAM page. Because of this contention, throughput decreases and workloads that require many updates to the GAM page take longer to complete. This contention is driven by workload volume and repetitive create-and-drop operations: table variables, worktables associated with cursors, ORDER BY and GROUP BY operations, and workfiles associated with hash plans.
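To see whether the waits in your workload land on these pages, you can decode the `wait_resource` value (`db_id:file_id:page_id`) reported by `sys.dm_os_waiting_tasks`. A minimal sketch using the documented allocation-page intervals (a PFS page every 8,088 pages, a GAM/SGAM pair every 511,232 pages); the helper name is ours:

```python
# Sketch: classify a tempdb wait_resource string such as "2:1:3"
# as PFS, GAM, or SGAM contention. Database ID 2 is tempdb.
# Interval constants follow the documented data-file page layout.
def classify_allocation_page(wait_resource: str) -> str:
    db_id, file_id, page_id = (int(p) for p in wait_resource.split(":"))
    if db_id != 2:
        return "not tempdb"
    if page_id == 1 or page_id % 8088 == 0:
        return "PFS"
    if page_id == 2 or page_id % 511232 == 0:
        return "GAM"
    if page_id == 3 or (page_id - 1) % 511232 == 0:
        return "SGAM"
    return "not an allocation page"

print(classify_allocation_page("2:1:2"))     # GAM
print(classify_allocation_page("2:1:3"))     # SGAM
print(classify_allocation_page("2:3:8088"))  # PFS
```

With concurrent GAM and SGAM updates in SQL Server 2022, the waits this helper flags as GAM or SGAM should largely disappear from such workloads.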


 


The Concurrent GAM Updates feature in SQL Server 2022 adds the concurrent GAM and SGAM updates capability to avoid tempdb contention.


With GAM and SGAM contention addressed, customer workloads will be much more scalable and deliver even better throughput.


 


SQL Server has improved tempdb in every single release and SQL Server 2022 is no exception.


 


Resources:


tempdb database


Recommendations to reduce allocation contention in SQL Server tempdb database


Learn more about SQL Server 2022​


Register to apply for the SQL Server 2022 Early Adoption Program and stay informed


Watch technical deep-dives on SQL Server 2022


SQL Server 2022 Playlist


 

Large Watchlist using SAS key is in Public Preview!



Watchlists are a critical component for enhancing security operations and correlating data. Until now, watchlist files have been limited to 3.8 MB per upload. We are excited to announce that watchlists now support files up to 500 MB per upload!


 


There are many scenarios where you will need to reference and look up a larger dataset in your detection rules or investigations. Here are some sample use cases for large watchlists.



  • Map a database of IPv4 address networks to their respective geographical locations using known sources such as MaxMind or IP2Location.

  • Leverage the CVE vulnerability database to help enrich incidents and alerts that may be related to a known exploit.

  • Enrich alerts and incidents with custom datasets that are larger than 3.8MB in size.
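As an illustration of the first use case, the lookup a large watchlist enables amounts to matching an address against a table of CIDR networks. A stand-alone sketch with made-up column names and documentation-range IPs (in Sentinel itself you would express this in KQL with `_GetWatchlist()` and `ipv4_lookup()`):

```python
import csv
import io
import ipaddress

# Illustrative watchlist content: CIDR network -> location.
# The column names are ours, not a required watchlist schema.
watchlist_csv = """network,country
203.0.113.0/24,Exampleland
198.51.100.0/24,Testonia
"""

def lookup_geo(ip, watchlist):
    """Return the country of the first watchlist network containing ip."""
    addr = ipaddress.ip_address(ip)
    for row in csv.DictReader(io.StringIO(watchlist)):
        if addr in ipaddress.ip_network(row["network"]):
            return row["country"]
    return None

print(lookup_geo("203.0.113.42", watchlist_csv))  # Exampleland
```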


 


How to create a large watchlist


 


To create a large watchlist, you will need to upload a watchlist file to an Azure Storage account. Then, create a shared access signature (SAS) URL so that Microsoft Sentinel can securely retrieve the watchlist data. Finally, upload the watchlist to your workspace in Microsoft Sentinel.


Check out our step-by-step instructions to create a large watchlist.


 


Upload the watchlist file in an Azure Storage account and generate a secure SAS URL





Upload a large watchlist in Microsoft Sentinel portal


 


Considerations:



  • Creating a watchlist from a local file is still limited to 3.8 MB per upload. The increased limit applies only to watchlist files stored in Azure Storage.

  • Microsoft Sentinel will require an Azure Storage Blob SAS URL to access and download the file for processing and ingestion into the watchlist table. The SAS URL must be at least 6 hours away from its expiry time.

  • An entry in the CSV file must not exceed 10,240 characters per line.
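The last two considerations can be checked before uploading. A minimal sketch using only the standard library; `se` is the standard expiry query parameter of an Azure Storage SAS token, and the helper names are ours:

```python
import datetime
import urllib.parse

MAX_LINE_CHARS = 10_240                          # per-line CSV limit
MIN_SAS_VALIDITY = datetime.timedelta(hours=6)   # minimum time to expiry

def sas_validity_ok(sas_url, now=None):
    """Check the SAS token's 'se' (expiry) is at least 6 hours away."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    qs = urllib.parse.parse_qs(urllib.parse.urlparse(sas_url).query)
    expiry = datetime.datetime.fromisoformat(qs["se"][0].replace("Z", "+00:00"))
    return expiry - now >= MIN_SAS_VALIDITY

def csv_lines_ok(csv_text):
    """Check no CSV entry exceeds 10,240 characters per line."""
    return all(len(line) <= MAX_LINE_CHARS for line in csv_text.splitlines())

url = "https://acct.blob.core.windows.net/c/watchlist.csv?se=2030-01-01T00%3A00%3A00Z&sig=..."
print(sas_validity_ok(url))  # True (expiry far in the future)
```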


 





 


Try out this new watchlist capability and let us know your feedback! 


 


 

Unleash the power of your small business with Microsoft 365



Over the past two years, businesses of all industries and sizes have had to adapt to new ways of working, a challenging operating environment, and ever-changing customer expectations. With all this change, it’s hard to overstate the impact of having secure and reliable productivity and collaboration tools.

The post Unleash the power of your small business with Microsoft 365 appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Sign up for Microsoft Visio trial licenses for you and your team with your business login



We are happy to announce self-service trials for Microsoft Visio. As of today, you can sign up for free 30-day trials of Visio Plan 1 or Visio Plan 2 on existing Microsoft 365 tenants managed by your organization using your business login. Then, test out the full functionality of the Visio web and desktop apps before directly purchasing subscriptions.


 


With self-service trials, you can sign up for trial licenses for up to five users and then, with a limited admin role, assign the trial licenses to your colleagues in the Microsoft 365 admin center. If you run into any issues signing up for your trial licenses, please contact your IT department.


 


These new self-service trial capabilities are available worldwide except for India. They are not available for Education or Government customers.


 


Please note: You will be asked to provide credit card details at signup. At the end of your 30-day trial, you will be charged the applicable subscription fee to continue using Visio. Cancel at any time to stop future charges.


 


Determine which Visio trial is right for you


 


With the Visio Plan 1 trial, you and your team members will have full access to the Visio web app—including dozens of diagram templates and hundreds of shapes—and 2 GB of OneDrive for Business cloud storage. The Visio Plan 2 trial includes all the features in the Visio Plan 1 trial, plus additional templates, shapes, and advanced features in the Visio desktop app. During both trials, you’ll be able to create, edit, share, and collaborate on diagrams and flowcharts using Visio or Microsoft Teams (requires a Microsoft 365 subscription to use Teams).


 


How to sign up


 


The 30-day trials of Visio Plan 1 and Visio Plan 2 are available for self-service signup by individuals and departments from the Visio plans and pricing comparison page. Select the corresponding trial link below the Buy Now button and complete the necessary steps.


Screenshot of Visio Plan 1 and Visio Plan 2: Click on “Or try free for 1 month” to complete the steps to start your trial


Manage trial licenses as a Global or Billing admin


 


The self-service trial capabilities do not compromise IT oversight or control. If you are an admin, you can use the same self-service purchase controls to disable self-service trials while making use of subscription management capabilities to oversee and manage trial licenses on the licensing page in the Microsoft 365 admin center.


 


If you’ve disabled the self-service purchase functionality for Visio in the past, the self-service trial signup will instead let individuals or departments request licenses directly from you. Learn more about managing self-service licenses acquired by individuals or departments in your organization.


 


Give us feedback about your trial experience! Please tell us what you think in the comments below or send feedback via the Visio Feedback portal.


 


Continue the conversation by joining us in the Microsoft 365 Tech Community! Whether you have product questions or just want to stay informed with the latest updates on new releases, tools, and blogs, Microsoft 365 Tech Community is your go-to resource to stay connected! 

Adding DevOps on Form Recognizer for your custom model



 


Introduction


 


Azure Form Recognizer is an amazing Azure AI service to extract and analyze form fields from documents. One benefit of using Form Recognizer is the ability to create your own custom model based on documents specific to your business needs.


 


To create custom models, Azure provides Form Recognizer Studio, a web application that makes creating and training custom models simple, without the need for an AI expert.


 


One common challenge customers face when dealing with more than a handful of models is applying to their AI models the same DevOps processes they are familiar with for promoting code changes from one environment to another.


 


This article demonstrates the use of Form Recognizer’s REST APIs to implement a CI/CD pipeline for model management.


 


The complete implementation is available on GitHub.


 


Why DevOps matters


 


When creating a custom model based on documents that map your business needs, your programmers and data scientists will go through multiple iterations of training, resulting in multiple different models in the development environment.  Once they have a model that works well for the specific scenario, this model will then need to be promoted to other environments.


 


While it is possible to copy the training dataset and train a model in all of the other environments, that process is cumbersome and can result in missed labels resulting in lower accuracy models.  A more effective approach is to treat the model like a source code artifact and use a DevOps pipeline to orchestrate the movement of the model across the different environments ensuring traceability and compliance.  The diagram below explains the proposed implementation flow to achieve this result.


 


train.png


 


Our implementation


 


For our use case, let’s assume we have three environments, each in its own resource group.  This approach would work even if the resource groups were in three different subscriptions.


 


env.png


 


The model is trained in the development environment, where data scientists and/or developers use the Form Recognizer Studio to label the documents in a storage account.  The following steps describe that process in greater detail.


 


Start by moving the training dataset to a specific container in Azure Blob Storage.


 


model.png


 


In the Form Recognizer Studio, start by creating a custom project and connecting the Studio to the dataset you just created.


 


studio.png


 


When you train your model, be sure to save the model ID you provided or find the model from the list of models within the project.  This model ID will be needed when it’s time to migrate the model to other environments.


 


modelid.png


 


Form Recognizer provides a model copy operation that starts by generating a copy authorization from the target resource, in this case the QA environment.  This copy authorization is then provided to the development Form Recognizer resource, which executes the copy action.
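The two REST calls behind this flow can be sketched as request builders. The endpoint shapes follow the v3 `documentModels:authorizeCopy` and `:copyTo` operations; the api-version value and helper names are assumptions to verify against the REST reference:

```python
import json

API_VERSION = "2022-08-31"  # assumed GA api-version; confirm in the REST reference

def build_authorize_copy_request(target_endpoint, target_key, model_id):
    """Request against the TARGET resource (e.g. QA) to authorize receiving a copy."""
    url = (f"{target_endpoint}/formrecognizer/documentModels:authorizeCopy"
           f"?api-version={API_VERSION}")
    headers = {"Ocp-Apim-Subscription-Key": target_key,
               "Content-Type": "application/json"}
    body = json.dumps({"modelId": model_id})
    return url, headers, body

def build_copy_to_request(source_endpoint, source_key, model_id, copy_auth):
    """Request against the SOURCE (dev) resource, passing the authorization JSON."""
    url = (f"{source_endpoint}/formrecognizer/documentModels/{model_id}:copyTo"
           f"?api-version={API_VERSION}")
    headers = {"Ocp-Apim-Subscription-Key": source_key,
               "Content-Type": "application/json"}
    return url, headers, json.dumps(copy_auth)
```

In a pipeline you would POST the first request, take the authorization document from its response, and pass it as `copy_auth` to the second.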


 


To orchestrate this set of actions, a simple GitHub action was created.  The API can be integrated into your CI/CD pipeline using REST or any of the language specific SDKs.  In this case, we created an Azure Function that uses the .NET SDK for Form Recognizer.  This Azure Function provides multiple endpoints that are leveraged in the GitHub Action.


 


func.png


 


The following diagram describes the GitHub Actions Orchestration.


 


gh.png


 



  1. The developer moves all the documents needed to train the custom model into Azure Storage account.

  2. The developer uses the Form Recognizer Studio to train the custom model in the development environment.  Once the model is trained and the developer is satisfied with the model quality, the model ID is saved for use with the GitHub action.

  3. The developer initiates the GitHub action by providing the model ID from the previous step as an input parameter.
     input.png

  4. The first step is to validate that the model ID exists in the DEV environment.
     copy.png

  5. If the model exists in the DEV environment, it is copied to the QA environment.

  6. Now, a QA engineer can validate the model produces the expected results.

  7. Once the QA tests are successful, an approver needs to approve the next job to promote the model to the production environment.

  8. The model is now copied to the production environment and is available for use.


approve.png


deployed.png
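The validate-then-copy core of the workflow above can be sketched as a single function with the HTTP calls injected, which also makes it easy to test. Here `get_model` and `copy_model` are hypothetical stand-ins for the Azure Function endpoints (or direct REST calls):

```python
def promote_model(model_id, get_model, copy_model, target_env):
    """Validate a model exists in DEV, then copy it to the target environment."""
    if get_model(model_id) is None:           # step 4: validate in DEV
        raise ValueError(f"model {model_id!r} not found in DEV")
    copy_model(model_id, target_env)          # steps 5/8: copy to QA or PROD
    return f"{model_id} promoted to {target_env}"

# Usage with fakes standing in for the real endpoints:
models = {"invoice-v1": {"status": "succeeded"}}
result = promote_model("invoice-v1",
                       get_model=models.get,
                       copy_model=lambda m, env: None,
                       target_env="qa")
print(result)  # invoice-v1 promoted to qa
```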


 


Once the model is in production, you can use it within your applications.  The following example demonstrates how the model is being used to analyze documents.
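Calling the custom model from an application boils down to one `:analyze` request against the production resource. A sketch of building that request (the api-version and helper name are assumptions; the v3 request body accepts a `urlSource` pointing at the document):

```python
import json

API_VERSION = "2022-08-31"  # assumed api-version; confirm in the REST reference

def build_analyze_request(endpoint, key, model_id, document_url):
    """Build the :analyze request for a custom model (v3 REST shape)."""
    url = (f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze"
           f"?api-version={API_VERSION}")
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    # v3 accepts a publicly reachable (or SAS-protected) document URL
    body = json.dumps({"urlSource": document_url})
    return url, headers, body
```

The analyze operation is asynchronous: the POST returns an `Operation-Location` header that you poll for the extraction results.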


 


analyze.png


 


Here is the GitHub Action used in this sample; it consists of three jobs that invoke the Azure Function using a PowerShell script.  As stated earlier, the Azure Function implementation is optional; this could be accomplished with HTTP requests made directly to the Form Recognizer resources from your pipelines.


 


ghaction.png


 


Here is the simple PowerShell script.


 


powershell.png


 


Conclusion


 


In this blog post, we explained the importance of implementing a DevOps practice around your custom models in Azure Form Recognizer, provided a sample implementation, and illustrated how easy it is to build with the REST API.