This March, bring NCPW to your neighborhood
This article was originally posted by the FTC. See the original article here.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
This article is contributed. See the original author and article here.
How to Use TSSv2 to Collect and Analyze Data to Solve High CPU Issues.
Hello everyone, this is Denzel Maxey with the Windows Performance Team. I found a tool that actively collects different data based on scenarios and streamlines the data collection process. Drumroll, please: introducing TSSv2 (TroubleShootingScript Version 2). In my job, I see a lot of high CPU cases, and collecting an ETL trace using TSSv2 with Xperf (aka WPR) for high CPU has been fundamental in resolving issues.
I’d like to share some instructions, methods, and general insight into these tools that should empower IT professionals to resolve issues. This post will show how the TSSv2 tool can work with the Windows Performance Recorder. TSSv2 is very powerful and works with several tools, but here the focus is on collecting a WPR trace using TSSv2 for a case of high CPU. I can even give you a great clue as to how to collect data for intermittent high CPU cases as well! Once you have the data, I’ll then show you how to analyze it. Lastly, I’ll provide some additional resources on WPA analysis for high CPU.
Data Collection Tools:
TSSv2
TSSv2 (TroubleShootingScript Version 2) is a code signed, PowerShell based Tool and Framework for rapid flexible data collection with a goal to resolve customer support cases in the most efficient and secure way. TSSv2 offers an extensible framework for developers and engineers to incorporate their specific tracing scenarios.
WPR/Xperf
“Windows Performance Recorder (WPR) is a performance recording tool that is based on Event Tracing for Windows (ETW). The command line version is built into Windows 10 and later (Server 2016 and later). It records system and application events that you can then analyze by using Windows Performance Analyzer (WPA). You can use WPR together with Windows Performance Analyzer (WPA) to investigate particular areas of performance and to gain an overall understanding of resource consumption.”
*Xperf is strictly a command line tool, and it can be used interchangeably with the WPR tool.*
_________________________________________________________________________________________________________________________________________________
Let’s Dig in!
You notice your server or device is running at 90% CPU. Your users are complaining of latency and poor performance. You have checked Task Manager and Resource Monitor, or even downloaded and opened Process Explorer, but there is still no exact root cause staring you in the face. No worries: a WPR trace will break down the high CPU processes a bit more. You could even skip straight to this step in the future once you get comfortable working with these tools.
Setup TSSv2
Running a TSSv2 troubleshooting script with the parameters for either WPR or Xperf gathers granular performance data on machines showing the issue. In the example below, I’m saving the TSSv2 script to D: (note the default data location is C:\MS_Data). In your web browser, download TSSv2.zip from http://aka.ms/getTSSv2, or open an administrative PowerShell prompt and paste the following commands.
The commands below will automatically prepare the machine to run TSSv2 by taking the following actions in the given order:
Ex: Commands used below
md D:\TSSv2
Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned -Force
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Start-BitsTransfer https://aka.ms/getTSSv2 -Destination D:\TSSv2\TSSv2.zip
Expand-Archive -LiteralPath D:\TSSv2\TSSv2.zip -DestinationPath D:\TSSv2 -Force
cd D:\TSSv2
The result will be a folder named TSSv2 on drive D.
_________________________________________________________________________________________________________________________________________________
Gathering Data using TSSv2
Open an elevated PowerShell window (or start PowerShell with elevated privileges) and change the directory to this folder:
cd D:\TSSv2
*WARNING* Data collection grows rather large quickly. You should have hard drive space free equal to at least 30% of your overall RAM. (For example, if you have 8 GB of RAM, the file can grow to 2.5 GB or larger in C:\MS_Data.)
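If you want to sanity-check that up front, here is a quick comparison you can run in the same PowerShell session (a convenience sketch, not part of TSSv2 itself):
# Compare 30% of physical RAM against free space on C:, where MS_Data is written by default
$ramBytes = (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory
$needBytes = $ramBytes * 0.3
$freeBytes = (Get-PSDrive C).Free
"Need roughly {0:N1} GB free, have {1:N1} GB" -f ($needBytes/1GB), ($freeBytes/1GB)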
What are some of the scenarios you might have? Maybe you want to stop the trace manually, or have it stop automatically once started.
How about limiting the file size? There are several parameters you can adjust for your needs.
Below you will find variations of using TSSv2 to collect WPR data in high CPU occurrences. You have the option of using either WPR or Xperf commands. Please review all of them before deciding which trace to take for your environment.
1. Scenario In State: The issue is currently occurring and you will stop the trace manually. Default location of saved data will be C:\MS_Data.
The prompt will tell you when to reproduce the issue; simply entering “Y” will END the trace at that time, and the machine in question experiencing high CPU will then finish running the data collection.
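Based on the pattern of the scenarios that follow, the base command for this manual-stop case should look like the line below (no auto-stop parameters, so TSSv2 waits for your confirmation; verify against the built-in help described later if unsure):
.\TSSv2.ps1 -Xperf CPU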
2. Scenario In State, but you want to limit the size and length of the trace: The issue is currently occurring; the following example does NOT need user intervention to stop the trace. Default location of saved data will be C:\MS_Data. The Xperf file can grow to 4 GB and the trace runs for 5 minutes with the settings below:
.\TSSv2.ps1 -Xperf CPU -XperfMaxFileMB 4096 -StopWaitTimeInSec 300
Note: You can modify the size and length of the trace by increasing or decreasing -XperfMaxFileMB and -StopWaitTimeInSec when you initially run it.
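For example, to allow the file to grow to 8 GB and let the trace run for 10 minutes, you would adjust both parameters (the values here are purely illustrative):
.\TSSv2.ps1 -Xperf CPU -XperfMaxFileMB 8192 -StopWaitTimeInSec 600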
3. Scenario In State, but you want to limit the size and length of the trace, with data saved on the Z:\Data drive instead of C: (the default): The issue is currently occurring; the following example does NOT need user intervention to stop the trace. The Xperf file can grow to 4 GB and the trace runs for 5 minutes with the settings below, and this time the resulting data will be saved on Z:\Data. You simply need to add -LogFolderPath Z:\Data to the command.
.\TSSv2.ps1 -Xperf CPU -XperfMaxFileMB 4096 -StopWaitTimeInSec 300 -LogFolderPath Z:\Data
4. Scenario Intermittent High CPU, where you are having a tough time capturing data: This command waits for the CPU to reach 90%, then starts a trace that stops the file from growing larger than 4 GB and runs for 5 minutes.
.\TSSv2.ps1 -Xperf CPU -WaitEvent HighCPU:90 -XperfMaxFileMB 4096 -StopWaitTimeInSec 300
5. Scenario Intermittent High CPU, where you are having a tough time capturing data: This command waits for the CPU to reach 90%, then starts a trace that runs for 100 seconds (about a minute and 40 seconds).
.\TSSv2.ps1 -Xperf CPU -WaitEvent HighCPU:90 -StopWaitTimeInSec 100
Pro Tip: You can check for additional Xperf/WPR commands by searching the help files in TSSv2. Type .\TSSv2.ps1 -Help at the prompt; when prompted to enter a number or keyword, type xperf or wpr, hit Enter, and you will see the options.
Ex: Finding help with keyword ‘xperf’
Be sure to wait for the TSS script to finish; it can take some time (even an hour to finish writing out). PowerShell will return to the prompt, and the folder in C:\MS_Data should zip itself when complete. The location of the script does not determine the location of the collected data. Wait for the trace to finish before exiting PowerShell.
Reminder: Just like in the first trace, data collection grows rather large quickly. You should have hard drive space free equal to at least 30% of your overall RAM. (For example, if you have 8 GB of RAM, the file can grow to 2.5 GB or larger in C:\MS_Data.)
_________________________________________________________________________________________________________________________________________________
You have the Data – Now Let’s look at it!
Download the Windows ADK (Windows Assessment and Deployment Kit) from this location: Download and install the Windows ADK | Microsoft Learn. Once you have downloaded the Windows ADK, install the Windows Performance Toolkit: double-click the executable (.exe) to start the installation process.
Uncheck everything except Windows Performance Toolkit, then click Install.
Opening the data in the C:\MS_Data folder
When complete, the general TSSv2 WPR command should have placed all collected data into this folder in a zipped file. You will know the trace ran all the way through without stopping prematurely when you see the zipped file in C:\MS_Data. There will also be a message in the PowerShell window when the diagnostic completes, stating the name and location of the zipped file.
You will need to unzip the file to analyze the WPR trace (the .etl file). After unzipping, you will see several data collections that can be helpful for analysis; however, what you mainly want to look at is the .etl file, which is usually the biggest file in the folder.
If you double-click the .etl file it should open in WPA; if not, you can manually open the newly installed application and navigate to your file.
Example:
You can open the .etl file to view the WPR trace with WPA (Windows Performance Analyzer) by clicking File, then Open, and browsing to the file that ends with the .etl extension.
Step 1. Open the WPR trace in WPA and load the public symbols. You may also see symbols listed from the NGEN folder (NGEN is part of the folder name) collected at the time the WPR trace was run.
Select Trace, then select Configure Symbol Paths.
Click the + sign (highlighted in yellow in the screenshot below), then enter the public symbol path: srv*C:\symbols*https://msdl.microsoft.com/download/symbols
More Information: (Symbol path for Windows debuggers – Windows drivers | Microsoft Learn)
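If you would rather set the symbol path once for every debugging tool on the machine instead of only inside WPA, you can use the standard _NT_SYMBOL_PATH environment variable described in the link above (optional; shown here as a PowerShell one-liner for the current session):
# Point Windows debugging tools at the Microsoft public symbol server, caching symbols to C:\symbols
$env:_NT_SYMBOL_PATH = 'srv*C:\symbols*https://msdl.microsoft.com/download/symbols'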
Once symbols are configured, simply click Load Symbols.
Step 2. Once the trace is open, you should see a window similar to the screenshot below. Expand Computation on the left and drag CPU Usage (Precise) to the right side of the window to load it. You can also double-click CPU Usage (Precise) for it to appear on the right side.
You will then see a span on the top graph labeled “Trace Rundown”. That part is not needed, as it is the portion of the trace where the script was finishing up. To get rid of the trace rundown, highlight the area before it, right-click, then select “Zoom”.
You can now filter each of your processes down deeper and deeper to try to locate a potential root cause of what is spiking the CPU. Look at which processes have the highest weight in the right-hand columns to help pinpoint the highest consumers. It may be a specific kernel driver, application, process, etc., but this should point you in the right direction of what is exhausting resources.
These are the columns you will want to focus on:
Left of Gold Bar:
New Process
New Thread ID
New Thread Stack
Right of Gold Bar:
Waits (us) Sum
Waits (us) Max
Count:Waits Sum
%CPU Usage Sum
In this example, you can see that CPU usage is highest due to CPUSTRESS.EXE. As you filter down, you can see the threads that contribute to the CPU spike, which sum to the total CPU usage at the top. This can be helpful for finding which threads, functions, and modules are being called, to identify the root cause.
Conclusion:
Once again, this is not the only use for the TSSv2 tool. But as you can see, a very detailed WPR/Xperf trace can be gathered from a simple PowerShell command, which makes troubleshooting very efficient. This article is not meant to cover all scenarios; however, I highly recommend taking some time to learn more about what TSSv2 can accomplish, as this tool will only continue to get better.
If at any point you get stuck, don’t hesitate to open a support case with Microsoft.
Additional Information:
Information on TSSv2 and alternative download site:
Information about Windows Performance Toolkit
Windows Performance Toolkit | Microsoft Learn
For Reference:
Download Windows Assessment Toolkit which contains Windows Performance Analyzer
Download and install the Windows ADK | Microsoft Learn
How to setup public symbols
Symbol path for Windows debuggers – Windows drivers | Microsoft Learn
This article is contributed. See the original author and article here.
Taking time to speak with customers remains one of the best ways to build relationships and close deals faster. However, in a digital world it is often difficult to secure that moment of interaction, so when it happens, it is critical to focus on the conversation; capturing valuable insights and next steps is a distraction at this precious time. Microsoft Dynamics 365 Sales conversation intelligence continues to harness AI technology to assist salespeople with exactly that: it’s there as your chief note taker. Master conversation follow-ups by uncovering value from each call and gaining a deeper understanding of your customer interactions.
We’re excited to introduce two new features designed to save time and allow users to quickly access the most relevant and valuable insights from their calls:
Let’s dive into each one to learn more.
Call categorization introduces a revolutionary way to manage call recordings and learn more about leads, as well as assist managers with identifying coaching opportunities within their teams.
It is common for sales teams and contact centers to conduct many calls that are not successfully connected. This can lead to an overload of irrelevant data in call recording tables and a lot of noise for a seller and manager to wade through when reviewing calls for follow-up or best-practice sharing. To address this issue, Dynamics 365 Sales conversation intelligence is introducing the Call categorization feature, which automatically categorizes and tags short calls with four categories:
Once the calls are tagged, it becomes easy for sellers, managers, and operations to identify and exclude irrelevant call data. Sales teams can save time by not having to hunt for calls. Instead, with call categorization, they can review relevant conversations to follow up on and share as best practice or learnings.

In the flow of a conversation, multiple questions may be asked, but the seller may not tackle them all within the call. Dynamics 365 Sales conversation intelligence now tracks all questions raised by customers and sellers during customer conversations. These are ready for review and follow-up almost immediately after the call has ended.
The new feature includes a “Questions” section in each call/meeting summary. The section tracks all questions asked during the call and groups them by customer or seller. This allows sellers and sales managers to easily locate and quickly jump to listen to a specific question within the conversation. By doing so, they gain a more in-depth understanding of the interaction.
With this insight documented, sellers can quickly drill into customers’ objections and concerns. In addition, they can review those open items for action.

With these productivity enhancements, sellers can focus on engaging customers, knowing their systems are working hard to remove complexity and optimize their sales conversation follow-ups.
To get started, enable the public preview of the Call categorization feature: First-run setup experience for conversation intelligence in sales app | Microsoft Learn
Learn more about the Question detection feature: View and understand call summary page in the Dynamics 365 Sales Hub app | Microsoft Learn
Learn more about conversation intelligence: Improve seller coaching and sales potential with conversation intelligence | Microsoft Learn
Enable conversation intelligence in your organization: First-run setup experience for conversation intelligence in sales app | Microsoft Learn
If you are not already a Dynamics 365 Sales customer and want to know more, take a tour and start your free trial today.
The post Optimize sales conversation follow-ups in two easy steps appeared first on Microsoft Dynamics 365 Blog.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
This article is contributed. See the original author and article here.
Telemetry is a crucial tool in monitoring the performance of a system to generate actionable insights that can improve productivity and optimize users’ experience. New warehousing telemetry data in Dynamics 365 Supply Chain Management helps provide insight into the activities and general health of your Warehouse Management tenants and devices, so that you can diagnose problems and analyze operations that affect performance.
With Warehouse Management Application Insights telemetry, you’ll be able to answer questions like these:
Answering these kinds of questions can help you make informed decisions about potential improvements in efficiency and automation. Does a process need to be reconfigured, or duplicate or obsolete configurations removed? Can a manual process be automated? Don’t guess; know, with Warehouse Management telemetry data.
Telemetry data is collected and processed using Application Insights. Warehouse Management Application Insights telemetry is a diagnosis tool that’s available now in Dynamics 365 Supply Chain Management.
The 10.0.29 release of Dynamics 365 Supply Chain Management supports Application Insights telemetry for the Warehouse Management mobile app. The 10.0.31 release supports Supply Chain Management warehouse processes, including wave processing, work creation, and more.
To use Warehouse Management telemetry, you’ll need to configure an Application Insights resource and enable Supply Chain Management to send it telemetry data.
Telemetry data is stored in Azure Monitor Logs in the customEvents table. View the collected data by writing Log queries in the Kusto Query Language (KQL).
Here’s a simple example:
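(The query below is a minimal sketch against the standard Application Insights customEvents schema; adjust the time window and fields to the events your environment actually emits.)
// Count Warehouse Management telemetry events by name over the last day
customEvents
| where timestamp > ago(1d)
| summarize eventCount = count() by name
| order by eventCount desc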
For more examples of how to work with KQL, answers to frequently asked questions, and tips for using Supply Chain Management telemetry data with Excel, Power Automate, Power BI, PowerShell, and more, check the Supply Chain Management telemetry repository on GitHub.
You can use an out-of-the-box Power Apps template to easily connect your Warehouse Management Application Insights telemetry data to your Power BI workspace.
Here’s just some of the data you’ll find in the template:
Application Insights is billed based on the volume of telemetry data that your application sends (data ingestion) and the length of time that you want data to be available (data retention). See Azure Monitor pricing.
You can easily configure the system to send you an Azure Monitor alert if something occurs in your environment or application that requires immediate action.
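For instance, a scheduled log alert could evaluate a query like this sketch and fire when it returns a row (the 15-minute window and the threshold of 100 events are illustrative, not recommended values):
// Fire when more than 100 telemetry events arrive within 15 minutes
customEvents
| where timestamp > ago(15m)
| summarize eventCount = count()
| where eventCount > 100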
Read the documentation:
Not yet a Dynamics 365 Supply Chain Management customer? Take a guided tour and start a free trial!
The post Warehousing telemetry now available in Supply Chain Management appeared first on Microsoft Dynamics 365 Blog.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
This article is contributed. See the original author and article here.
We continue to expand the Azure Marketplace ecosystem. For this volume, 130 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
Airflow by Kockpit: This offer from Kockpit provides Apache Airflow on Ubuntu 20.04.02 LTS. Apache Airflow is a platform for programmatically authoring, scheduling, and monitoring workflows. This image is designed for production environments on Microsoft Azure.
AllegroGraph 7.3.1 VM: This offer from Franz Inc. provides AllegroGraph 7.3.1 on a Microsoft Azure virtual machine. AllegroGraph is a distributed multi-modal graph and document database.
Antuit.ai Solutions for Retail and CPG: Solutions from Antuit.ai, part of Zebra Technologies, use AI and machine learning to forecast omnichannel demand, enabling businesses to optimize inventory decisions. From planning through execution, companies can predict, shape, and execute demand by connecting decisions, removing uncertainty, and doing more with less.
BotCast: BotCast from Tech Unicorn is a solution designed to broadcast messages to multiple teams or individuals through channel posts or chat messages using Microsoft Teams. It can be accessed via the Teams app on mobile and desktop devices.
Botminds AI: Automated Invoice Processing: This no-code AI platform from Botminds AI Technologies Pvt. Ltd. has four key modules: Capture, for automated abstraction; Search, for contextual results; Analyze, for improved decision-making; and Automate, for maximizing efficiency. Enhance invoice-processing efficiency and enable visibility with AI-powered extraction and classification.
Botminds AI: Contract Automation and Intelligence: This no-code AI platform from Botminds AI Technologies Pvt. Ltd. has four key modules: Capture, for automated abstraction; Search, for contextual results; Analyze, for improved decision-making; and Automate, for maximizing efficiency. Empower contract and legal teams by enabling abstraction from thousands of complex contracts in minutes.
Botminds AI: Financial Spreading Automation: This no-code AI platform from Botminds AI Technologies Pvt. Ltd. has four key modules: Capture, for automated abstraction; Search, for contextual results; Analyze, for improved decision-making; and Automate, for maximizing efficiency. Empower finance teams with fast, accurate spreading and analysis across standards and formats.
Botminds AI: Tender Risk Automation: This no-code AI platform from Botminds AI Technologies Pvt. Ltd. has four key modules: Capture, for automated abstraction; Search, for contextual results; Analyze, for improved decision-making; and Automate, for maximizing efficiency. Speed up the tender-screening process and increase bidding confidence for bid teams and tender analysts.
Canarys Copy Project: Canarys Copy Project is a web service that copies team projects across Azure DevOps tenants. This limited trial version copies 100 work items with history and links, one Git repository, one test plan, and one pipeline.
CAR Consultation: CAR Consultation from Agrotools, available in Portuguese, is a query function that returns a rural environmental registry map from the provision of a geographical coordinate or the number of the CAR. CAR stands for cadastro ambiental rural, or rural environmental registry.
CIS Red Hat Enterprise Linux 9 Benchmark L1: This image offered by the Center for Internet Security provides Red Hat Enterprise Linux 9 preconfigured to the recommendations in the associated CIS benchmark. CIS benchmarks are vendor-agnostic, consensus-based security configuration guides.
Cognitiwe Fresh Food Monitoring: Cognitiwe uses computer vision, 5G connectivity, and Microsoft Azure private multi-access edge compute to conduct analysis on data streams coming from IP cameras and IoT devices. Retailers can employ Cognitiwe to monitor stock levels, improve operational efficiency, and reduce food waste.
ComplianceCow: ComplianceCow is an API-first platform that simplifies security compliance assessments for applications running on Microsoft Azure and Azure Kubernetes Service. It features automated evidence collection, easy-to-use dashboards and reports, and prebuilt templates for PCI-DSS controls.
Debian 11 with phpIPAM: This offer from Belinda CZ s.r.o. provides Debian 11 with phpIPAM version 1.5. phpIPAM is an open-source web IP address management application that’s based on PHP and uses a MySQL or MariaDB database as a back end.
Docker CE on Oracle Linux 8.7 Minimal: This offer from Art Group provides Docker Community Edition on a minimal installation of Oracle Linux 8.7. Docker CE is a development platform and virtualization technology that makes it easy to develop and deploy apps inside neatly packaged virtual containerized environments.
Domino Data Lab MLOps Platform: Domino, an enterprise MLOps platform, enables data scientists to develop better AI models. Individual data scientists can work faster with Domino’s easy access to scalable compute, containerized environments, automatic version control, and publishing features.
erwin Evolve: erwin Evolve is a configurable set of enterprise architecture and business process modeling and analysis tools. Users can map IT capabilities to the business functions they support and determine how people, processes, data, technologies, and applications interact to ensure alignment in achieving enterprise objectives.
eTransform Dynamic – Imaging: Efferent Health’s eTransform Dynamic – Imaging transforms any healthcare imaging data into federated, elastic, and compliant DICOMweb and FHIR objects. Marry your historic and real-time imaging data in Azure Health Data Services, providing accessibility from anywhere on any device.
eTransform Imaging A: Efferent Health’s eTransform Imaging A can ingest all medical imaging data from disparate on-premises sources or nonfederated cloud repositories in bulk, then transform the datasets into federated, elastic, and compliant DICOMweb and FHIR objects.
FilingBox MEGA: FilingBox MEGA protects data from ransomware and data breach malware, offering secure storage for Linux and Windows servers. FilingBox MEGA distributes a real file to preregistered applications and a fake file with a read-only attribute to unregistered applications.
FreeBSD 12.4: This offer from the FreeBSD Foundation provides FreeBSD 12.4. FreeBSD is an operating system derived from BSD and used to power modern servers, desktops, and embedded platforms. Its advanced networking, security, storage, and monitoring features have made it the platform of choice for many of the busiest websites and most pervasive embedded networking and storage devices.
GLPI on Ubuntu Server 22.04 LTS: This offer from AskforCloud provides GLPI on Ubuntu Server 22.04 LTS. GLPI is an open-source tool to manage help desk assets, plan IT changes, efficiently solve problems, and automate business processes.
HoloDesk: Plansysteme’s HoloDesk is an industry-agnostic collaboration platform for 3D models. Share 3D assets with your team or external partners locally or remotely, and streamline processes for design, sales, support, and maintenance.
IBM Turbonomic Application Resource Management: IBM Turbonomic Application Resource Management (ARM) provides IT infrastructure management with application-aware resourcing enabled at installation. A unified platform, together with trustworthy actions, enables coordinated, full-stack automation.
IconPro APOLLO – Predictive Maintenance: Made by production engineers for production engineers, IconPro APOLLO is an industrial IoT solution that enables condition monitoring and predictive maintenance. Companies can reduce downtime and maintenance costs for measurement machines, tools, or robots while ensuring suitable environment conditions.
Instabase Automation Platform: The Instabase Automation Platform brings together deep-learning innovation and a suite of low-code building blocks to enable organizations to automate their business processes and transform messy, unstructured documents into structured data.
Intel Confidential Compute for Scikit-learn: This offer from Intel uses the open-source project Gramine to convert an unprotected Scikit-learn image into an Intel SGX-protected image. The resulting image can be used for privacy-preserving machine learning applications. The image can be started on any Azure machine supporting Intel SGX, a confidential computing solution.
Intel Confidential Compute for TensorFlow Serving: This offer from Intel uses the open-source project Gramine to convert an unprotected TensorFlow Serving image into an Intel SGX-protected image. The resulting image can be used for privacy-preserving machine learning applications. The image can be started on any Azure machine supporting Intel SGX, a confidential computing solution.
iRedMail on CentOS Stream 8 Minimal: This offer from Art Group provides iRedMail on a minimal installation of CentOS Stream 8. iRedMail is an open-source mail server solution that lets you host your own mail server at no cost. You retain personal data on your own hard disk, and you can control the email security and inspect transaction logs.
iRedMail on Debian 11 Minimal: This offer from Art Group provides iRedMail on a minimal installation of Debian 11. iRedMail is an open-source mail server solution that lets you host your own mail server at no cost. You retain personal data on your own hard disk, and you can control the email security and inspect transaction logs.
iRedMail on Ubuntu 20.04 Minimal: This offer from Art Group provides iRedMail on a minimal installation of Ubuntu 20.04. iRedMail is an open-source mail server solution that lets you host your own mail server at no cost. You retain personal data on your own hard disk, and you can control the email security and inspect transaction logs.
iRedMail on Ubuntu 22.04 Minimal: This offer from Art Group provides iRedMail on a minimal installation of Ubuntu 22.04. iRedMail is an open-source mail server solution that lets you host your own mail server at no cost. You retain personal data on your own hard disk, and you can control the email security and inspect transaction logs.
Jaeger, Packaged by Bitnami: This offer from Bitnami provides a container image of Jaeger, a distributed tracing system for monitoring and troubleshooting microservices-based distributed systems. Bitnami packages applications following industry standards and continuously monitors all components and libraries for vulnerabilities and updates.
Jellyfin on Ubuntu 20.04 Minimal: This offer from Art Group provides Jellyfin on a minimal installation of Ubuntu 20.04. Jellyfin is an open-source suite of multimedia applications designed to organize, manage, and share digital media files to networked devices.
Julia on Debian 10: This offer from AskforCloud provides Julia on Debian 10. Julia is an open-source dynamic programming language for scientific and numerical computing. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming.
Julia on Debian 11: This offer from AskforCloud provides Julia on Debian 11. Julia is an open-source dynamic programming language for scientific and numerical computing. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming.
Julia on Ubuntu Server 18.04 LTS: This offer from AskforCloud provides Julia on Ubuntu Server 18.04 LTS. Julia is an open-source dynamic programming language for scientific and numerical computing. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming.
Julia on Ubuntu Server 20.04 LTS: This offer from AskforCloud provides Julia on Ubuntu Server 20.04 LTS. Julia is an open-source dynamic programming language for scientific and numerical computing. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming.
Julia on Ubuntu Server 22.04 LTS: This offer from AskforCloud provides Julia on Ubuntu Server 22.04 LTS. Julia is an open-source dynamic programming language for scientific and numerical computing. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming.
Klarytee Add-in for Microsoft Word: With Klarytee, you can select all of, or part of, a sensitive document or email you want to secure, then encrypt it and apply the required level of controls based on data sensitivity. Three license options are available, including a free version with a Microsoft Word add-in.
KPoD – Proof of Delivery App_v2: With the Proof of Delivery mobile app from KAISPE, businesses can ensure that their goods are transported and delivered in a timely manner, keeping customers satisfied. The KPoD mobile app is used for real-time process verification and order dispatch, and it can be linked with Microsoft Dynamics 365.
LAMP on Ubuntu 18.04 LTS: This offer from Elania Resources provides a LAMP stack on Ubuntu 18.04 LTS. The LAMP stack comprises open-source software that enables a server to host dynamic websites and web apps written in PHP.
LAMP on Ubuntu 20.04 LTS: This offer from Elania Resources provides a LAMP stack on Ubuntu 20.04 LTS. The LAMP stack comprises open-source software that enables a server to host dynamic websites and web apps written in PHP.
LAMP on Ubuntu 22.04 LTS: This offer from Elania Resources provides a LAMP stack on Ubuntu 22.04 LTS. The LAMP stack comprises open-source software that enables a server to host dynamic websites and web apps written in PHP.
Laravel 9 on Ubuntu 18.04 LTS: This offer from Elania Resources provides Laravel 9 on Ubuntu 18.04 LTS. Laravel is a web application framework with expressive, elegant syntax. Laravel features include thorough dependency injection, an expressive database abstraction layer, queues and scheduled jobs, and unit and integration testing.
Laravel 9 on Ubuntu 20.04 LTS: This offer from Elania Resources provides Laravel 9 on Ubuntu 20.04 LTS. Laravel is a web application framework with expressive, elegant syntax. Laravel features include thorough dependency injection, an expressive database abstraction layer, queues and scheduled jobs, and unit and integration testing.
Laravel 9 on Ubuntu 22.04 LTS: This offer from Elania Resources provides Laravel 9 on Ubuntu 22.04 LTS. Laravel is a web application framework with expressive, elegant syntax. Laravel features include thorough dependency injection, an expressive database abstraction layer, queues and scheduled jobs, and unit and integration testing.
Lobe.ai ONNX HTTP Extension for Azure Video Analyzer on Azure IoT Edge: This module from motojin.com, offering Lobe.ai export model inferencing based on ONNX, is developed for Azure IoT Edge. The module can be used in conjunction with Azure Video Analyzer.
Meshify Server on Ubuntu 22.04: This stand-alone version of the Meshify Server is ready to be configured against your Azure Active Directory domain for managing your WireGuard virtual private networks. It has a built-in MongoDB database that can be switched out for a hosted service as needed.
Morpheus CMP: Morpheus is a cloud management platform to enable self-service provisioning with policy guardrails. Morpheus integrates quickly with on-premises hypervisors like VMware and Nutanix, plus public clouds.
OfficeExpert TrueDEM: OfficeExpert TrueDEM provides detailed performance data for all Microsoft Teams calls and meetings. The information is gathered from localized agents running on computer endpoints, which can identify user experience issues and assist IT support.
OMP Sync: Whether you’re looking to migrate from Amazon DynamoDB or set up disaster recovery syncing, OMP helps you do both with a couple of clicks. You can move your data from DynamoDB to Azure Cosmos DB, MongoDB, or Apache Cassandra, and the tool can handle one-time loads or real-time syncing.
OpenVINO Integration with Torch-ORT: OpenVINO’s integration with Torch-ORT is designed for PyTorch developers who want to get started with OpenVINO in their inferencing applications. This product delivers OpenVINO inline optimizations with minimal code modifications.
Scanner Scanner: Designed for news outlets, journalists, and public safety organizations, Scanner Scanner uses Azure Cognitive Services to scan multiple audio feeds and detect events like fires or gunshots, then send an alert and place the event location on a map.
Securosys 365 DKE: Key Management for Microsoft 365: Securosys 365 DKE provides a set of encryption keys for Microsoft Purview. To use Double Key Encryption in Microsoft Purview, this set of separate keys needs to be stored and supplied from outside Microsoft infrastructure.
Senserva SaaS App: With Senserva, you can shift from reactive to proactive security and identify entitlement, configuration, and compliance risks before they impact your business. Senserva is a blend of deep analytics and user interfaces. It automates security and is integrated with Microsoft Sentinel and Azure Monitor’s Log Analytics.
Solgari for Microsoft Teams: Solgari’s customer engagement and contact center-as-a-service technology is now available as an app for Microsoft Teams. Deliver all communication channels, including social media, within the Teams interface. Agents can benefit from full case management, multiple-session handling, and contextual data from previous interactions.
Spark Executor with Hadoop: This offer from Kockpit Analytics provides an image of Apache Spark with Apache Hadoop on Ubuntu 18.04 LTS. Executors in Spark are the worker nodes in charge of a given Spark job.
Spark Master with Hadoop: This offer from Kockpit Analytics provides an image of Apache Spark with Apache Hadoop on Ubuntu 18.04 LTS. Spark Master is the manager for the Spark Standalone cluster and allocates resources among Spark applications. The resources are used to run the Spark driver and executors.
Spark Standalone with Hadoop: This offer from Kockpit Analytics provides an image of Apache Spark with Apache Hadoop on Ubuntu 18.04 LTS. Apache Spark’s Standalone mode offers a web-based user interface to monitor a cluster.
Standss Outbound Email Security for Outlook: SendConfirm from Standss prompts users to review and confirm recipients and attachments before emails are sent out, providing another checkpoint to stop confidential information from getting sent to unintended recipients.
Techlatest Stable Diffusion with InvokeAI Web Interface: This offer from TechLatest provides Stable Diffusion, an open-source text-to-image AI model with InvokeAI, a creative engine for Stable Diffusion models. Note: Stable Diffusion requires a lot of processing, so a GPU instance is recommended. If you want to use a CPU instance due to the high price of GPU instances, you should use instances with a higher CPU count.
Thrive for Microsoft Teams: Thrive for Microsoft Teams puts behavior-change solutions at your fingertips, meeting your people where they are with real-time stress-reducing tools, inspirational storytelling, and science-backed steps that help them build better habits.
Titan FTP Server NextGen – Enterprise Plus Edition: This offer from South River Technologies provides the Enterprise Plus Edition of Titan FTP Server NextGen. This edition comes preconfigured for SFTP, FTP/S, and secure web transfers. The intuitive web browser interface gives your end users frictionless access to their files and simple upload and download capability.
Tomorrow.io Weather Intelligence Platform: Tomorrow.io’s platform provides weather intelligence and predictive insights, helping you adapt to the increasing impact of climate change and uncover solutions for any industry and every job challenge.
Ubuntu 22.04 LTS with phpIPAM: This offer from Belinda CZ s.r.o. provides Ubuntu 22.04 LTS with phpIPAM. phpIPAM is an open-source web IP address management application based on PHP that uses a MySQL or MariaDB database as a back end.
Virtual Smartdesk: Intelliteck’s Virtual Smartdesk is a conversational AI solution that enables multiple conversations to take place on a single phone line at the same time. The conversations take place between users and AI-enabled virtual assistant bots, while users’ requests are fulfilled by robotic process automation.
Go further with workshops, proofs of concept, and implementations
Azure Arc: 3-Day Workshop: Accelerate your hybrid-cloud journey with the expertise of Cosmote, a subsidiary of OTE. OTE’s workshop will walk you through Azure Arc capabilities, match them to your business requirements, deliver a documented architecture vision, and advise you on the next steps to take.
Azure Architecture Implementation and Optimization: 27Global will support your organization’s evolving needs by optimizing underlying Azure infrastructure to align with your business goals and enable tech-driven operating models. If you haven’t already completed an Azure architecture assessment with 27Global, one will be conducted before any implementation activities commence.
Azure Arc Value Discovery Workshop: In this workshop from HS Data Ltd., you’ll discover the value of Microsoft Azure Arc for hybrid infrastructure management. Using Azure Arc, you can run services across on-premises datacenters, on other public cloud providers, or at the edge, and manage them with Azure as the control plane.
Azure Customer Identity Quick Start: 14-Week Implementation: Edgile will deliver a production pilot of Azure Active Directory B2C that meets enterprise customer identity and access management requirements. Pilot user journeys will be documented, and Edgile will configure an Azure DevOps instance to handle the Azure Active Directory B2C tenant.
Azure Landing Zone: 15-Day Implementation: G-Able will set up an Azure landing zone to host your workloads. Multiple packages are available and can include design services, workload migration, and elevated privacy and compliance standards for regulated industries.
Azure Managed Services: 1-Year Agreement: All Covered, a division of Konica Minolta, will monitor and manage your Microsoft Azure resources so you can focus your attention on your core business tasks. All Covered simplifies the planning, maintenance, and oversight of your infrastructure, cybersecurity, and compliance needs.
Azure Network Jump Start: In this workshop, Switcom will present the basics of Azure networking and create a high-level network architecture. A proof of concept based on Azure Platform as a Service may be embarked upon afterward. This offer is available only in German.
Cloud Enablement Workshops: Do you want to modernize your IT with the use of public cloud technology? Centric’s workshop series can be instrumental in building a solid foundation for your journey to the cloud. Centric will answer questions, introduce you to design guidelines, and help craft an Azure landing zone.
Cyber Defense Services with Microsoft Sentinel: 5-Day Workshop: Using Microsoft Sentinel and Capgemini’s Cyber Defense Centers, Capgemini will conduct five days of threat monitoring, reporting, trend analysis, and automation ticketing.
Data Catalog Management and Microsoft Purview Implementation: In this engagement, Loihde will implement Microsoft Purview in your environment, connect to data sources on-premises and in the cloud, create a catalog with your metadata, and set up a business glossary.
Data Engineering on Azure: Onex Group’s workshop, intended for data engineers, will familiarize participants with Azure data services and usage scenarios. Services to be covered include Azure Data Lake Storage Gen2, Azure SQL Database, Azure Synapse, Azure Databricks, Azure Data Factory, and Azure DevOps.
Linux and Open-Source Database Migration to Azure (5 Weeks): Acer Information Services Co. Ltd. will assess your on-premises Linux and open-source database workloads to gauge cloud readiness, identify risks, and estimate cost and complexity before migrating them to Microsoft Azure. This service is available only in Chinese.
Managed Microsoft XDR SOC Service with Incident Response: The threat landscape today demands round-the-clock monitoring to avoid critical business interruptions. Truesec’s managed service utilizes Microsoft extended detection and response solutions to offer continuous monitoring, analysis (false positive elimination), and response to keep you safe.
Microsoft Azure: 2-Week Proof of Concept: In this proof of concept, Netrix will identify a high-level use case for getting started with Azure IaaS or PaaS. Netrix will deploy the required resources, establish connectivity with on-premises resources, and validate interoperability. Design and governance documentation will be supplied.
Microsoft Sentinel: 5-Hour Workshop: This workshop from CLOUD SERVICES contains theoretical and practical parts designed to help you start working with Microsoft Sentinel. You’ll learn what tasks Microsoft Sentinel solves most effectively, how to conduct a cyberattack investigation, and which pricing model best fits your business.
NCS Cloud Observability Dashboard: 3-Week Implementation: NCS will implement its Observability Dashboard on Microsoft Azure to provide an end-to-end view of your Azure resources, such as virtual machines, databases, and containers. You’ll be able to easily monitor resource health, performance, and availability.
Road to the Cloud (3 Phases): Is your organization ready to take its first steps in the cloud? Chmurowisko’s service consists of three modules (competencies, strategy development, and landing zone), each involving workshops conducted with the client and documentation supporting the path to Microsoft Azure.
SAP S/4 on Azure Launchpad: 6-Week Proof of Concept: PwC can help your enterprise validate your cloud and SAP S/4 business case. Using templates and Azure Launchpad, PwC will enable you to engage your teams on what the future will look like using SAP S/4HANA.
Smart Data Platform: 10-Day Implementation: Using Microsoft Power BI and Azure data services, Macaw will deploy a smart platform with an automated data extraction and data-modeling framework. This will allow you to deliver reliable data, dashboards, and reports. This offer is available in German.
VM Migration: 3-Week Proof of Concept: Migration to the cloud doesn’t have to be difficult, but many organizations struggle to get started. Click2Cloud’s proof of concept can help you optimize your workload migration. Click2Cloud will assess your company’s virtual machine workloads from on-premises infrastructure and devise a plan to migrate them to Microsoft Azure.
Wilas Captive Portal Implementation: In this engagement, Logicalis Asia partner TechStudio Solutions will set up the WILAS WiFi Engagement Portal on Microsoft Azure, then implement and configure access policies, integrate with Wi-Fi infrastructure, and deliver testing, training, and documentation.
Windows and SQL Server Migration to Azure (5 Weeks): Acer Information Services Co. Ltd. will identify Windows and SQL Server instances and functions used in your organization, then assess your on-premises workloads to gauge cloud readiness, identify risks, and estimate cost and complexity. A migration to Azure will follow. This service is available only in Chinese.
Contact our partners
5G-Enabled Digital Twins
Armor SOC2 Readiness Assessment
Azure Active Directory Migration: 3-Week Assessment
Azure Backup as a Service: 5-Day Assessment
Azure IoT Hub Health Check (3 Weeks)
Azure Security Well-Architected Review: 1-Hour Briefing
Azure Zero Trust Maturity Assessment: 1-Hour Briefing
Collear: Learning Management System Powered by PamTen
Confidential Cross-Database DNA Relatives Matching
Cyient Cloud Consulting Services
Data Architecture Review: 1-Hour Briefing
Decentriq Healthcare Data Clean Room
ESET Inspect integration for Microsoft Sentinel
IBM Consulting Global Green IT and Sustainability on Azure
Microsoft Security: 4-Week Assessment
Modulos Data-Centric AI Platform
RealMigrator: Migrate All Your Data Resources
Road to the Cloud (Phases 1 and 2)
Trend Micro Mobile Network Security
Well-Architected Review for Database Services: 1-Hour Briefing
ZeroC: Emissions Management App
This article is contributed. See the original author and article here.
CISA released one Industrial Control Systems (ICS) advisory on January 19, 2023. This advisory provides timely information about current security issues, vulnerabilities, and exploits surrounding ICS.
CISA encourages users and administrators to review the newly released ICS advisory for technical details and mitigations:
This article was originally posted by the FTC. See the original article here.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
This article is contributed. See the original author and article here.
Postgres is one of the most widely used databases and supports a number of operating systems. When you are writing code for PostgreSQL, it’s easy to test your changes locally, but it can be cumbersome to test them on all operating systems. You may often encounter failures across platforms, and it can get confusing to move forward while debugging. To make the dev/test process easier, you can use the Postgres CI.
When you test your changes on CI and see it fail, how do you proceed to debug from there? As a part of our work in the open source Postgres team at Microsoft, we often run into CI failures—and more often than not, the bug is not obvious, and requires further digging into.
In this blog post, you’ll learn about techniques you can use to debug PostgreSQL CI failures faster. We’ll be discussing these 4 tips in detail:
Connecting to the CI environment with terminal access
Using build-time debug options on CI
Gathering logs and files from CI runs with artifacts instructions
Running specific commands only on failure
Before diving into each of these tips, let’s discuss some basics about how Postgres CI works.
PostgreSQL uses Cirrus CI for its continuous integration testing. To use it for your changes, Cirrus CI should be enabled on your GitHub fork. The details on how to do this are in my colleague Melih Mutlu’s blog post about how to enable the Postgres CI. When a commit is pushed after enabling CI, you can track and see the results of the CI run on the Cirrus CI website. You can also track it in the “Checks” tab on GitHub.
Cirrus CI works by reading a .cirrus.yml file from the Postgres codebase to understand the configuration with which a test should be run. Before we discuss how to make changes to this file to debug further, let’s understand its basic structure:
# A sequence of instructions to execute and
# an execution environment to execute these instructions in
task:
  # Name of the CI task
  name: Postgres CI Blog Post
  # Container where CI will run
  container:
    # Container configuration
    image: debian:latest
    cpu: 4
    memory: 12G
  # Where environment variables are configured
  env:
    POST_TYPE: blog
    FILE_NAME: blog.txt
  # {script_name}_script: Instruction to execute commands
  print_post_type_script:
    # command to run at script instruction
    - echo "Will print POST_TYPE to the file"
    - echo "This post's type is ${POST_TYPE}" > ${FILE_NAME}
  # {artifacts_name}_artifacts: Instruction to store files and expose them in the UI for downloading later
  blog_artifacts:
    # Path of files which should be relative to Cirrus CI's working directory
    paths:
      - "${FILE_NAME}"
    # Type of the files that will be stored
    type: text/plain
Figure 1: Screenshot of the Cirrus CI task run page. You can see that it ran the script and artifacts instructions correctly.
Figure 2: Screenshot of the log file on Cirrus CI. The gathered log file is uploaded to Cirrus CI.
As you can see, the echo commands run at the script instruction. Environment variables are configured and used in the same script instruction. Lastly, the blog.txt file is gathered and uploaded to Cirrus CI. Now that we understand the basic structure, let’s discuss some tips you can follow when you see CI failures.
When Postgres is working on your local machine but you see failures on CI, it’s generally helpful to connect to the environment where it fails and check what is wrong.
You can achieve that easily using the RE-RUN with terminal button on the CI. Typically, a CI run can take time, as it needs to find available resources to start and rerun instructions; with this option, that time is saved because the resources are already allocated.
After the CI’s task run is finished, there is a RE-RUN button on the task’s page.
Figure 3: There is an arrow on the right of the RE-RUN button; if you press it, the “Re-Run with Terminal Access” button will appear.
You may not have noticed it before, but there is a small arrow on the right of the RE-RUN button. When you click this arrow, the “Re-Run with Terminal Access” button will appear. When you click this button, the task will start to re-run, and shortly after you will see the Cirrus terminal. With the help of this terminal, you can run commands in the CI environment where your task is running. You can get information from the environment, change configurations, and re-test your task.
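For example, assuming a Linux task like the ones shown in this post (the exact build directory depends on the task you re-ran), a session in the Cirrus terminal might look like this sketch:
# Inspect meson's test log to see which tests failed and why
cat build/meson-logs/testlog.txt
# Re-run the tests, printing the logs of any failures to the terminal
meson test -C build --print-errorlogs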
Note that the re-run with terminal option is not available for Windows yet, but there is ongoing work to support it.
Postgres and meson provide additional build-time debug options to generate more information to find the root cause of certain types of errors. Some examples of build options which might be useful to set are:
-Dcassert=true [defaults to false]: Turns on various assertion checks. This is a debugging aid. If you are experiencing strange problems or crashes you might want to turn this on, as it might expose programming mistakes.
-Dbuildtype=debug [defaults to debug]: Turns on basic warnings and debug information and disables compiler optimizations.
-Dwerror=true [defaults to false]: Treat warnings as errors.
-Derrorlogs=true [defaults to true]: Whether to print the logs from failing tests.
While building Postgres with meson, these options can be set up using the meson setup or meson configure commands.
These options can be enabled either with the “re-run with terminal access” option or by editing the .cirrus.yml config file. Cirrus CI has a script instruction in the .cirrus.yml file to execute a script. These debug options can be added to the script instructions in which meson is configured. For example:
configure_script: |
  su postgres <<-EOF
    meson setup \
      -Dbuildtype=debug \
      -Dwerror=true \
      -Derrorlogs=true \
      -Dcassert=true \
      ${LINUX_MESON_FEATURES} \
      -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
      build
  EOF
Once it’s written as such, the debug options will be activated the next time CI runs. Then, you can check again whether the build fails and investigate the logs in more detail. You may also want to store these logs to work on them later. To gather and store the logs, you can follow the tip below.
Cirrus CI has an artifact instruction to store files and expose them in the UI for downloading later. This can be useful for analyzing test or debug output offline. By default, Postgres’ CI configuration gathers log, diff, regress log, and meson’s build files—as can be seen below:
testrun_artifacts:
  paths:
    - "build*/testrun/**/*.log"
    - "build*/testrun/**/*.diffs"
    - "build*/testrun/**/regress_log_*"
  type: text/plain
meson_log_artifacts:
  path: "build*/meson-logs/*.txt"
  type: text/plain
If there are other files that need to be gathered, another artifact instruction can be written, or the current artifact instruction can be updated in the .cirrus.yml file. For example, if you want to collect the docs to review or share with others offline, you can add the instructions below to the task in the .cirrus.yml file.
configure_script: su postgres -c 'meson setup build'
build_docs_script: |
  su postgres <<-EOF
    cd build
    ninja docs
  EOF
docs_artifacts:
  path: build/doc/src/sgml/html/*.html
  type: text/html
Then, the collected docs will be available on the Cirrus CI website in HTML format.
Figure 4: Screenshot of the uploaded files on the Cirrus CI task run page. They are uploaded to Cirrus CI and reachable from the task run page.
Apart from the tips mentioned above, here is another one you might find helpful. At times, we want to run some commands only when we come across a failure, perhaps to avoid unnecessary logging and make CI runs faster for successful builds. For example, you may want to gather the logs and stack traces only when there is a test failure. The on_failure instruction helps run certain commands only in case of an error.
on_failure:
  testrun_artifacts:
    paths:
      - "build*/testrun/**/*.log"
      - "build*/testrun/**/*.diffs"
      - "build*/testrun/**/regress_log_*"
    type: text/plain
  meson_log_artifacts:
    path: "build*/meson-logs/*.txt"
    type: text/plain
As shown in the example above, the logs are gathered only in case of a failure.
While working on a multi-platform database like Postgres, debugging issues can often be difficult. Postgres CI makes it easier to catch and solve errors, since you can work on and test your changes across various settings and platforms. In fact, Postgres automatically runs CI on every commitfest entry via Cfbot to catch errors and report them.
These 4 tips for debugging CI failures should help you speed up your dev/test workflows as you develop Postgres. Remember: use the terminal to connect to the CI environment, gather logs and files from CI runs, use build options on CI, and run specific commands on failure. I hope these tips will make Postgres development easier for you!