Windows client roadmap update


This article is contributed. See the original author and article here.

We realize that a clear Windows client roadmap update helps consumers and organizations plan their Windows release activities.


Today we’ll provide a brief update on the latest version of Windows 10, as well as share more on the time frame for the next Long-Term Servicing Channel (LTSC) release of Windows 11.


Windows 10 support lifecycle


As documented on the Windows 10 Enterprise and Education and Windows 10 Home and Pro lifecycle pages, Windows 10 will reach end of support on October 14, 2025. The current version, 22H2, will be the final version of Windows 10, and all editions will remain in support with monthly security update releases through that date. Existing LTSC releases will continue to receive updates beyond that date based on their specific lifecycles.


Recommendation



  • We highly encourage you to transition to Windows 11 now as there won’t be any additional Windows 10 feature updates.

  • If you and/or your organization must remain on Windows 10 for now, please update to Windows 10, version 22H2 to continue receiving monthly security update releases through October 14, 2025. See how you can quickly do this via a servicing enablement package in How to get the Windows 10 2022 Update. (A quick way to check a device's current version is sketched below.)

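If you're unsure which version a device is running today, here's a minimal PowerShell sketch that reads the standard version values from the registry (works on any supported Windows 10 or Windows 11 device):

    # Show the installed edition, version (e.g. 22H2), and build number
    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
        Select-Object ProductName, DisplayVersion, CurrentBuildNumber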

The final end of support date for Windows 10 does not change with this announcement; these dates can be found on the Windows 10 Lifecycle page.


Windows 11 LTSC


It’s important for organizations to have adequate time to plan for adopting Windows 11. Today we’re announcing that the next Windows LTSC releases will be available in the second half of 2024:



  • Windows 11 Enterprise LTSC

  • Windows 11 IoT Enterprise LTSC


We’ll provide more details as we get closer to availability.


Recommendation


If you’re waiting for a Windows 11 LTSC release, you can begin planning and testing your applications and hardware on the current GA channel release, Windows 11, version 22H2. Check out App confidence: Optimize app validation with Test Base for more tips on how to test your applications.


Stay informed


In the future, we will add more information here and to the Windows release health page, which offers information about the General Availability Channel and LTSC under release information for appropriate versions.


The Windows release health page lists release information for different versions of Windows.




Continue the conversation. Find best practices. Bookmark the Windows Tech Community and follow us @MSWindowsITPro on Twitter. Looking for support? Visit Windows on Microsoft Q&A.

Microsoft Designer expands preview with new AI design features


This article is contributed. See the original author and article here.

Today, we’re excited to announce we’re removing the waitlist and adding an expanded set of features to the Microsoft Designer preview. With new AI technology at the core, Microsoft Designer simplifies the creative journey by helping you get started quickly, augment creative workflows, and overcome creative roadblocks.

The post Microsoft Designer expands preview with new AI design features appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Optimize experiences for sellers and marketers with Dynamics 365


This article is contributed. See the original author and article here.

Discover how Dynamics 365 helps customer experience leaders do more with less

Leaders of organizations in every region and industry are cautiously navigating the business decisions of 2023. Recent economic turbulence has forced companies to evaluate their internal processes, tools, and enterprise software to optimize for efficiency. Meanwhile, customer experience (CX) leaders see 2023 as a year of opportunity. By strengthening their customer experience strategy with technology, CX leaders can retain their most valuable customers, acquire new ones, and surpass their competitors. The good news is that you don't need more resources to do this. You can do more with less, using the power of AI to optimize experiences for your sellers, marketers, and data analysts so they can deliver better experiences for your customers.


Drive demand and close deals faster

Understand your customer

CX leaders know that the first step to any great customer experience is understanding your customer. They need a customer data platform that can do more for them: making sense of their data, offering recommendations, and serving up valuable insights. Microsoft Dynamics 365 Customer Insights unifies and enriches first-party and third-party data to truly understand your customers and predict their intent, all while maintaining privacy and compliance with customer consent. Out-of-the-box AI guides data wranglers to the next best action and can also predict customer lifetime value, so you can determine how best to invest in your customers.

Business Finland utilized Dynamics 365 Customer Insights to gain greater data insights, resulting in the ability to support thousands of Finnish companies through the COVID-19 pandemic, increasing its export sales by 20 percent, and allowing the Finnish government to fund topics and initiatives that align with its strategic goals.

With the latest Copilot in Dynamics 365 Customer Insights, data analysts and marketers can engage directly with customer data using simple, natural language. This saves time for data analysts, allowing them to type a query in their own words instead of writing it in SQL. This capability democratizes access to insights, allowing marketers to use their customer data platform (CDP) to ask questions in everyday language and receive answers fast, without needing to know SQL. With simple prompts, marketers can explore, understand, and predict customer preferences and needs in near real time, reducing their reliance on the analytics team for the customer insights they need.

Engage your customer

Customers today expect personalized experiences, but your marketers don’t have the time or resources to tailor every interaction with every customer. Microsoft Dynamics 365 Marketing can do more with less by using AI to orchestrate personalized journeys across every customer touchpoint. In addition to customer journeys, Dynamics 365 offers email marketing, lead scoring, marketing pages, and social posting, allowing you to seamlessly connect your marketing and sales processes. Mid-Continent Instruments and Avionics, an aviation parts supplier, is using Dynamics 365 Marketing to automate the follow-up process for completed repairs, tying this activity into product campaigns, and communicating more effectively with customers.

With Copilot in Dynamics 365 Marketing, marketers can eliminate the time-consuming process of manually building customer segments for email campaigns, utilizing the query assist feature that allows them to describe their targeted segment in natural language. When they are ready to craft their email content, marketers can harness the Copilot feature, content ideas, to generate content by providing a few prompts. They can also tailor the tone to meet the needs of their audience. With Copilot in Dynamics 365 Marketing, marketers can spend less time on copywriting and audience segmentation, and more time on strategic marketing efforts.

Deliver for your customer

You’ve moved your customer down the funnel, understanding and predicting their intent, personalizing experiences with communications and offers, and now they’re ready to purchase. Microsoft Dynamics 365 Sales enables salespeople to build strong relationships with your customers, take actions based on insights, and close deals faster. You can use Dynamics 365 Sales to keep track of your accounts and contacts, nurture your sales from lead to order, and create sales collateral. Sales managers can use AI to make their sales teams stronger by monitoring conversations with customers using conversation intelligence and providing coaching and feedback to sellers, or by creating step-by-step guidance for next best steps with sequences.

DP World shortened sales cycles with Dynamics 365 Sales, enabling five times more proactive sales and two times greater retention.

With Copilot in Microsoft Viva Sales, part of Dynamics 365 Sales, sellers can save time with generated email content suggestions, which include data relevant to the customer, such as pricing, promotions, and deadlines. In addition, they can generate an email response that proposes a meeting date and time based on the availability shown in their Outlook calendar. These new capabilities help sellers automate and expedite administrative work so they can focus on what matters most: making meaningful connections and building trust with their customers and prospects.

Learn how to do more with less

With Dynamics 365 Customer Insights, Marketing, and Sales, you can connect your teams across all business processes to ensure your customer is always at the center. This ensures they have a personalized, seamless experience, from consideration to purchase. To learn more or take a guided tour, please visit our connected sales and marketing solution page. To learn more about Dynamics 365 Copilot, read the announcement blog from Microsoft Corporate Vice President, Business Applications, Emily He.

The post Optimize experiences for sellers and marketers with Dynamics 365 appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Azure Database for MySQL – Flexible Server failover across regions without connection string changes


This article is contributed. See the original author and article here.

With Azure Database for MySQL – Flexible Server, you can configure high availability with automatic failover within a region. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won’t be a single point of failure in your software architecture.


 


Note: For more information, see Azure Database for MySQL – Flexible Server – High Availability Concepts.


 


Within a region, there are three potential options to consider, as shown in the following table:

Option (Mode)                Committed SLA
Non-HA                       99.9%
Same Zone HA                 99.95%
Zone Redundant HA (ZRHA)*    99.99%

*ZRHA is only available in regions that support availability zones. For the latest list of Azure regions, see Azure regions in the Azure Database for MySQL documentation.


 


In addition to the 'in-region' modes listed above, there's also an option to design for protection of database services across Azure regions. One common pattern we've seen with several customers is the need for maximum in-region availability along with a cross-region disaster recovery capability. This manifests as ZRHA in the primary region and a read replica in another region, preferably the paired region, as illustrated in the following diagram:


 


[Diagram: ZRHA in the primary region with a read replica in the paired region]


 


With ZRHA, failover between the Primary and Standby servers is automatically managed by the Azure platform, and importantly, the service endpoint name does not change. On the other hand, the manual process associated with a regional failover does introduce a change to the service endpoint name. Some customers have expressed an interest in being able to perform a regional failover without later having to update the associated application connection strings.


 


In this post, I’ll explain how to address this requirement and provide a regional failover that requires no application connection string changes.


 


For our purposes, we’ll use the following simplified architecture diagram as a starting point:


 


[Diagram: primary server in Australia East with a replica in Australia Southeast]


 


In this illustration, there's a single primary server located in Australia East and a replica hosted in Australia Southeast. With this setup, it's important to understand some implementation details, especially around networking:



  • Each server is deployed using the Private Access option.

  • Each server is registered to the same Azure Private DNS Zone, in this case, myflex.private.mysql.database.azure.com.

  • Each server is on a separate VNet, and the two VNets are peered with each other.

  • Each VNet is linked to the Private DNS zone.


The server name, IP address, server type, and region for the two servers I created are shown in the following table:

Server / Service name                 IP address      Role       Region
primary01.mysql.database.azure.com    10.0.2.4        Primary    Australia East
replica01.mysql.database.azure.com    192.168.100.4   Replica    Australia Southeast


Note: For more information about Azure Database for MySQL connectivity and networking, see the article Connectivity and networking concepts for Azure Database for MySQL – Flexible Server.


 


When configured properly, the Private DNS zone should appear as shown in the following image:


 


bmckerrMSFT_2-1682460495762.png


 


It's possible to resolve these DNS names from within either VNet. For example, the Linux shell shows the following detail for a Linux VM, which happens to be on the Australia East VNet, and it can resolve both the service name and the private DNS zone name of each of the servers.


 


Note: This Linux VM is being used simply to host the ‘nslookup’ and ‘mysql’ binaries that we are using in this article:


 


[Screenshot: nslookup results from the Linux VM for both servers]
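In shell form, those lookups amount to the following sketch (the private-zone record names assume the zone shown earlier; run from a VM on either peered VNet):

    nslookup primary01.mysql.database.azure.com
    nslookup primary01.myflex.private.mysql.database.azure.com
    nslookup replica01.mysql.database.azure.com
    nslookup replica01.myflex.private.mysql.database.azure.com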


 


In addition to name resolution, and courtesy of our VNet peering, I can also connect to both databases using either the service name or the private DNS name. Using the command-line client 'mysql', I'll connect to the primary server using both DNS names, as shown in the following image:


 


[Screenshot: mysql connections to the primary server using both DNS names]
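The equivalent command lines look roughly like this (the admin user name myadmin is a placeholder):

    mysql -h primary01.mysql.database.azure.com -u myadmin -p
    mysql -h primary01.myflex.private.mysql.database.azure.com -u myadmin -p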


 


And next, I’ll use ‘mysql’ again to connect to both DNS names for the replica server:


 


[Screenshot: mysql connections to the replica server using both DNS names]


 


To recap, we have set up a primary server in one region and a replica server in another region using Private Access networking, standard VNet peering, and Private DNS zone features. I then verified that I could connect to both databases using either the service name or the name allocated by the Private DNS zone. The remaining question, however, is how to fail over to the replica database, for example in a DR drill, and allow my application to connect to the promoted replica without making any changes to the application configuration. The answer, it turns out, is pretty simple…


 


In addition to the typical DNS record types 'A' (address) and 'PTR' (pointer), 'CNAME' is another useful record type, one I can use as an "alias" that effectively points to another DNS entry. Next, I'll demonstrate how to configure a 'CNAME' record to point to either of the databases in our setup.


 


For this example, I'll create a CNAME record named 'prod' that points at the 'A' record for the primary server. Inside the Private DNS zone, you can add a new record by choosing '+ Record Set'. Then you can add a CNAME record like so:


 


[Screenshot: adding the 'prod' CNAME record set]
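If you prefer to script this rather than click through the portal, a hedged Azure CLI sketch follows (the resource group name my-rg is an assumption; the 30-second TTL matches the reduction described next):

    az network private-dns record-set cname create \
        --resource-group my-rg \
        --zone-name myflex.private.mysql.database.azure.com \
        --name prod --ttl 30

    az network private-dns record-set cname set-record \
        --resource-group my-rg \
        --zone-name myflex.private.mysql.database.azure.com \
        --record-set-name prod \
        --cname primary01.myflex.private.mysql.database.azure.com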


 


While the default TTL is 1 hour, I've reduced it to 30 seconds to keep DNS clients and applications from caching an answer for too long, which can have a significant impact during or after a failover. After adding the CNAME record, the DNS zone looks like this:


 


[Screenshot: Private DNS zone showing the new 'prod' CNAME record]


 


Notice that the new ‘prod’ name points to the ‘A’ record for the primary server.


 


Now, I’ll verify that I can use the CNAME record to connect to the primary database:


 


[Screenshot: connecting to the primary database via the 'prod' CNAME]


 


Cool! That’s just DNS doing its thing with the CNAME record type.


 


It is also possible to edit the CNAME DNS record to point it to the replica:


 


[Screenshot: editing the CNAME record to point to the replica]


 


After saving the updated CNAME, when I connect to 'prod', I am now connecting to the replica, which is in read-only mode. I can verify this by trying a write operation, such as creating a table:


 


[Screenshot: write attempt failing against the read-only replica]


 


Sure enough, the CNAME ‘prod’ now points to the replica, as expected.


 


Given what I've shown so far, it's clear that the flexibility of Azure Private DNS and CNAME records is ideal for this use case.


 


The last step in this process is to perform the failover and complete the testing.


 


In the Azure portal, navigate to the Replication blade of either the replica server or the source server, and then 'Promote' the replica:


 


[Screenshot: Promote option on the Replication blade]


 


After selecting Promote, the following window appears:


 


[Screenshot: Promote confirmation window]


 


When the newly promoted server is available, I want to verify two things:

  • The CNAME record points to the replica (now the primary)

  • The database is writable


 


[Screenshot: verifying the CNAME points to the promoted server and the database is writable]


 


From an application perspective (in this article, the application is the mysql client), we haven't had to make any changes to connect to our database, regardless of which region is hosting the workload. This method can easily be integrated into DR procedures or failover testing. Using the Azure CLI to semi-automate these changes is also possible and could reduce the likelihood of human error when changing DNS records. In general, though, DNS changes are less risky than application configuration changes.
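For reference, a semi-automated failover along those lines might look like the following sketch (server and resource group names are assumptions; promoting the replica is done by stopping replication):

    # Promote the replica to a standalone, writable server
    az mysql flexible-server replica stop-replication \
        --resource-group my-rg --name replica01

    # Repoint the 'prod' CNAME at the promoted server
    az network private-dns record-set cname set-record \
        --resource-group my-rg \
        --zone-name myflex.private.mysql.database.azure.com \
        --record-set-name prod \
        --cname replica01.myflex.private.mysql.database.azure.com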


 


If you have any feedback or questions about the information provided above, please leave a comment below or email us at AskAzureDBforMySQL@service.microsoft.com. Thank you!


From Copilot in Microsoft Viva to the new Intune Suite—here’s what’s new in Microsoft 365

This article is contributed. See the original author and article here.

Discover the latest in Microsoft 365, including Copilot in Microsoft Viva, Microsoft Viva Glint, Windows 365 Frontline, and Microsoft Intune Suite.

The post From Copilot in Microsoft Viva to the new Intune Suite—here’s what’s new in Microsoft 365 appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

What’s new in SynapseML v0.11


This article is contributed. See the original author and article here.

Announcing SynapseML v0.11. The new version contains many new features to help you build scalable machine learning pipelines.


 


 


We are pleased to announce SynapseML v0.11, a new version of our open-source distributed machine learning library that simplifies and accelerates the development of scalable AI. This release introduces many new features from the past year of development, as well as many bug fixes and improvements. This post gives a high-level overview of the most salient additions; curious readers can check out the full release notes for the complete list.


 


OpenAI Language Models and Embeddings


A new release wouldn't be complete without joining the large language model (LLM) hype train, and SynapseML v0.11 features a variety of additions that make large-scale LLM usage simple and easy. In particular, it introduces three new APIs for working with foundation models: `OpenAIPrompt`, `OpenAIEmbedding`, and `OpenAIChatCompletion`. The `OpenAIPrompt` API makes it easy to construct complex LLM prompts from columns of your dataframe. Here's a quick example of translating a dataframe column called "Description" into emojis.


 

from synapse.ml.cognitive.openai import OpenAIPrompt

emoji_template = """
  Translate the following into emojis
  Word: {Description}
  Emoji: """

results = (OpenAIPrompt()
    .setPromptTemplate(emoji_template)
    .setErrorCol("error")
    .setOutputCol("Emoji")
    .transform(inputs))

 


 


This code will automatically look for a dataframe column called "Description" and prompt your LLM (ChatGPT, GPT-3, GPT-4) with the created prompts. Our new OpenAI embedding classes make it easy to embed large tables of sentences quickly from your Apache Spark clusters. To learn more, see our docs on using the OpenAI embeddings API and the SynapseML KNN model to create an LLM-based vector search engine directly on your Spark cluster. Finally, the new OpenAIChatCompletion transformer allows users to submit large quantities of chat-based prompts to ChatGPT, enabling parallel inference across thousands of conversations at a time. We hope you find the new OpenAI integrations useful for building your next intelligent application.
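As an illustration, here is a minimal sketch of embedding a dataframe column with OpenAIEmbedding (the key, service name, and deployment name are placeholders for your own Azure OpenAI resource; the import path follows the OpenAIPrompt example above):

from synapse.ml.cognitive.openai import OpenAIEmbedding

embedding = (OpenAIEmbedding()
    .setSubscriptionKey(key)                      # Azure OpenAI key (placeholder)
    .setCustomServiceName(service_name)           # Azure OpenAI resource name (placeholder)
    .setDeploymentName("text-embedding-ada-002")  # an embedding deployment you have created
    .setTextCol("Description")
    .setErrorCol("error")
    .setOutputCol("embeddings"))

embedded = embedding.transform(inputs)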


 


Simple Deep Learning


SynapseML v0.11 introduces a new simple deep learning package that allows you to train custom text and vision classifiers with only a few lines of code. This package combines the power of distributed deep network training with PyTorch Lightning and the simple, easy APIs of SynapseML. The new API allows users to fine-tune visual foundation models from torchvision as well as a variety of state-of-the-art text backbones from HuggingFace.


 


Here’s a quick example showing how to fine-tune custom vision networks:


 

from synapse.ml.dl import DeepVisionClassifier

train_df = spark.createDataFrame([
    ("PATH_TO_IMAGE_1.jpg", 1),
    ("PATH_TO_IMAGE_2.jpg", 2)
], ["image", "label"])

deep_vision_classifier = DeepVisionClassifier(
    backbone="resnet50",
    num_classes=2,
    batch_size=16,
    epochs=2,
)

deep_vision_model = deep_vision_classifier.fit(train_df)

 


 


Keep an eye out for upcoming releases of SynapseML featuring additional simple deep learning algorithms that will make it easier than ever to train and deploy models at scale.


 


LightGBM v2


LightGBM is one of the most heavily used features of SynapseML, and we heard your feedback asking for better performance! SynapseML v0.11 introduces a completely refactored integration between LightGBM and Spark, called LightGBM v2. This integration aims for high performance by introducing a variety of new streaming APIs in the core LightGBM library that enable fast and memory-efficient data sharing between Spark and LightGBM. In particular, the new "streaming execution mode" has a >10x lower memory footprint than earlier versions of SynapseML, yielding fewer memory issues and faster training. Best of all, you can enable the new mode by passing a single extra flag to your existing LightGBM models in SynapseML.
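For example, a hedged sketch of opting in to the new mode (assuming, per the SynapseML docs, that the flag is dataTransferMode):

from synapse.ml.lightgbm import LightGBMClassifier

model = LightGBMClassifier(
    objective="binary",
    featuresCol="features",
    labelCol="label",
    dataTransferMode="streaming",  # assumed flag name for the new streaming execution mode
)
lgbm_model = model.fit(train_df)   # train_df: a Spark DataFrame with an assembled features column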


 


ONNX Model Hub


SynapseML supports a variety of deep learning integrations with the ONNX runtime for fast, hardware-accelerated inference in all of the SynapseML languages (Scala, Java, Python, R, and .NET). In version 0.11 we added support for the new ONNX model hub, an open collection of state-of-the-art pre-trained ONNX models that can be quickly downloaded and embedded into Spark pipelines. This allowed us to completely deprecate and remove our old dependency on the CNTK deep learning library.
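As a rough sketch of the idea, a hub model can be pulled down with the onnx package and handed to SynapseML's ONNXModel (the model and tensor names below are illustrative assumptions):

import onnx.hub
from synapse.ml.onnx import ONNXModel

hub_model = onnx.hub.load("mnist")  # download a pre-trained model from the ONNX model hub

onnx_ml = (ONNXModel()
    .setModelPayload(hub_model.SerializeToString())
    .setFeedDict({"Input3": "features"})                # ONNX input -> DataFrame column (assumed names)
    .setFetchDict({"prediction": "Plus214_Output_0"}))  # output column <- ONNX output (assumed names)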


 


To learn more about how you can embed deep networks into Spark pipelines, check out our ONNX episode in the new SynapseML video series:


 


 


Causal Learning


SynapseML v0.11 introduces a new package for causal learning that can help businesses and policymakers make more informed decisions. When trying to understand the impact of a “treatment” or intervention on an outcome, traditional approaches like correlation analysis or prediction models fall short as they do not necessarily establish causation. Causal inference aims to overcome these shortcomings by bridging the gap between prediction and decision-making. SynapseML’s causal learning package implements a technique called “Double machine learning”, which allows us to estimate treatment effects without data from controlled experiments. Unlike regression-based approaches, this approach can model non-linear relationships between confounders, treatment, and outcome. Users can run the DoubleMLEstimator using a simple code snippet like the one below:


 

from pyspark.ml.classification import LogisticRegression
from synapse.ml.causal import DoubleMLEstimator

dml = (DoubleMLEstimator()
      .setTreatmentCol("Treatment")
      .setTreatmentModel(LogisticRegression())
      .setOutcomeCol("Outcome")
      .setOutcomeModel(LogisticRegression())
      .setMaxIter(20))

dmlModel = dml.fit(dataset)

 


 


For more information, be sure to check out Dylan Wang’s guided tour of the DoubleMLEstimator on the SynapseML video series:


 


Vowpal Wabbit v2


Finally, SynapseML v0.11 introduces Vowpal Wabbit v2, the second-generation integration between the Vowpal Wabbit (VW) online optimization library and Apache Spark. With this update, users can work with Vowpal Wabbit data directly using the new "VowpalWabbitGeneric" model, which makes moving to Spark easier for existing VW users. This more direct integration also adds support for new cost functions and use cases, including "multi-class" and "cost-sensitive one against all" problems. The update also introduces a new progressive validation strategy and a new contextual bandit offline policy evaluation notebook that demonstrates how to evaluate VW models on large datasets.
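A minimal sketch of the new model (assuming a string column named "input" holding native VW-format examples):

from synapse.ml.vw import VowpalWabbitGeneric

# each row of train_df["input"] is a VW-format line, e.g. "1 | price:0.23 sqft:0.25"
model = (VowpalWabbitGeneric()
    .setPassThroughArgs("--loss_function=logistic --holdout_off")
    .fit(train_df))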


 


Conclusion


In conclusion, we are thrilled to share the new SynapseML release with you and hope you will find that it simplifies your distributed machine learning pipelines. This blog covered only the highlights, so be sure to check out the full release notes for all the updates and new features. Whether you are working with large language models, training custom classifiers, or performing causal inference, SynapseML makes it easier and faster to develop and deploy machine learning models at scale.


 


Learn more


Field Service Palm Springs: Modernize service operations


This article is contributed. See the original author and article here.

We’re excited to return to Field Service Palm Springs from April 25 through April 27, 2023, at the JW Marriott Desert Springs Resort & Spa.

We will showcase how Connected Field Service helps leaders:

  • Move beyond the costly break/fix model to a proactive, predictive model.
  • Unlock the power of data and use Internet of Things (IoT), machine learning, and AI.
  • Transform their field operations and improve customer experience.

This year, we are hosting a thought leadership luncheon with our partner Hitachi Solutions to discuss the benefits of a connected field service and how to use data to remain competitive, and continuously improve business performance and customer experiences in an increasingly challenging environment.

Field service organizations manage hundreds of technicians with varying expertise, experience, and skills. With 80 percent of consumers more likely to make a purchase from a brand that provides personalized experiences, organizations have come to realize how important quality service is to remaining resilient despite uncertainty.1 Employees are working from remote or distributed locations, reducing the amount of personalized interaction. Meanwhile, remote monitoring of IoT devices continues to transform service from a cost center into a revenue generator.

Connected Field Service adds connected devices, powered by the Internet of Things (IoT), and uses cloud capabilities to augment your existing field service operations. It enables organizations to transform the way they provide service from a costly, reactive break-fix model to a proactive, and in some cases even predictive, service model through the holistic combination of IoT diagnostics, scheduling, asset maintenance, and inventory on the same platform.

IoT has brought a new level of efficiency to the field service industry, helping service professionals address issues more proactively and minimize downtime. As McKinsey researchers predict, IoT applications could generate a value of over $470 billion annually by 2025 by enhancing operations across various industries.2

By integrating IoT signals across the enterprise, a connected field service helps organizations predict and resolve customer issues before the customer is aware, thereby ensuring consistent and dependable customer operations through hassle-free and preemptive field service.

Four Connected Field Service solutions

Connected Field Service combines four innovative Microsoft solutions that enable service leaders to digitally transform service organizations:

1. Microsoft Dynamics 365 Field Service: Optimizes service operations and inventory management

  • Reduces downtime by enabling service organizations to rapidly dispatch technicians
  • Helps service teams ensure a first-time fix by selecting the right technicians and parts for each call
  • Increases service efficiency by optimizing service call assignments, routes, and scheduling
  • Increases customer satisfaction by ensuring technicians are aware of service preferences

2. Azure IoT Remote Monitoring: Gathers data from connected assets

  • Helps technicians identify and repair malfunctioning assets before damage occurs
  • Reduces the need for service calls by enabling technicians to remotely diagnose equipment issues
  • Arms technicians with the diagnostic information they need to ensure a first-time fix
  • Enables service organizations to analyze equipment failure patterns to improve maintenance strategies

3. Microsoft Azure IoT Predictive Maintenance: Transforms asset data into insights

  • Reduces downtime by enabling technicians to anticipate and preempt equipment failures
  • Limits unnecessary maintenance by aligning equipment service strategies to observed patterns
  • Increases efficiency by enabling teams to service assets when the right parts and people are available
  • Enables organizations to explore new business models using insights from service data

4. Microsoft Dynamics 365 Sales: Identifies upsell and cross-sell opportunities

  • Provides service technicians with upsell and cross-sell recommendations
  • Enables team members in non-sales roles to advance deals with step-by-step guidance
  • Enables sales teams and service technicians to access customer information and sales resources in non-office environments
  • Drives visibility into product and parts usage across the organization

Connected Field Service becomes a reality with Microsoft. Service leaders can better manage costs, enhance service delivery, and increase customer satisfaction (CSAT) by proactively resolving customer issues before the customer is aware. Take advantage of smart, internet-ready devices that can detect and diagnose issues, integrating with field service management (FSM) software like Dynamics 365 Field Service to automatically initiate troubleshooting and, when needed, create work orders to dispatch technicians for onsite service. Learn how you can use technology to schedule preventative maintenance based on consumption rather than rely on a regimented schedule. Best of all, enjoy the flexibility of implementing the solution in stages so your team can ramp up via a natural progression. Learn more about the latest Dynamics 365 Field Service features.

Engage with Microsoft at Field Service Palm Springs 2023

We invite you to join us, along with our partners, to discover how Connected Field Service using Dynamics 365 Field Service and IoT can help create a seamless service experience that enhances customer experiences, increases cost savings, and improves efficiency.

Register for Field Service Palm Springs and visit the Microsoft booth (101/103) where you can meet with Dynamics 365 Field Service experts to discuss how connected data enables better experiences across your organization.

About Field Service Palm Springs

For 20 years, Field Service Palm Springs has been the must-attend conference for service executives. From early IoT concepts to AI, Field Service is where innovative ideas spread and future strategies are created. Today, Field Service is a global event, with major conferences in Palm Springs, Amelia Island, San Diego, Amsterdam, and Singapore.

Since 2003, the top service and support minds have gathered in Palm Springs in April for the flagship Field Service conference. With forward-looking content and unique session formats that ensure you learn and network most effectively, Field Service is designed to help you achieve service excellence and drive profitability.

[Image: close-up of two hands holding a tablet]

Microsoft Dynamics 365 Field Service

Optimize service operations and inventory management.


Sources

1 Forbes, 50 Stats Showing The Power of Personalization, 2020

2 FieldCircle, How To Utilize IoT in The Field Service Industry?

The post Field Service Palm Springs: Modernize service operations appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Manage attribute-based omnichannel sales pricing 


This article is contributed. See the original author and article here.

Pricing is one of the fundamental tools for boosting supply chain profits by better matching supply and demand. With the growth of e-commerce and constantly changing business environments, many businesses have started to reform their pricing strategies in recent years to improve pricing transparency, supply chain agility, and margin optimization.

With version 10.0.33, we are launching the public preview of Pricing management within Dynamics 365 Supply Chain Management to help sales managers manage and execute attribute-based omnichannel sales pricing.

Why attribute-based omnichannel pricing?

  • Transition to omnichannel pricing:

Traditional business-to-business (B2B) organizations are increasingly considering switching to omnichannel sales and selling directly to end customers in order to have greater control over price and margins. The omnichannel transformation results in significant modifications to pricing models and rules.

By offering an omnichannel price engine, a central place to manage pricing rules, and automated omnichannel pricing execution, Dynamics 365 Supply Chain Management helps B2B businesses transition to omnichannel pricing.

  • Transition to attribute-based pricing:

Working with marketing and product managers to understand product-differentiating features, target customer segments, and other pricing-sensitivity elements is one of the key responsibilities of sales managers. Package types, delivery modes, and expected receipt dates can all be pricing differentiators. By giving businesses the ability to convert data about customers, products, and orders into price attributes, and to build pricing on different pricing structures, Dynamics 365 Supply Chain Management helps businesses adopt an attribute-based pricing model.

What is Pricing management?

Pricing management in Dynamics 365 Supply Chain Management leverages the Commerce Scale Unit (CSU) to help traditional B2B companies embrace omnichannel pricing. It enables attribute-based pricing for the price components across sales pricing structures, including product base price, sales trade agreement price, discounts, charges, and rebate management.

How Pricing management supports business flows:

  1. DESIGN your pricing component types using price attributes.
  2. CONSTRUCT your pricing structure with pricing components, such as margin elements.
  3. MANAGE price markup based on product standard cost (for manufactured products) or vendor price catalog (for trading products).
  4. SIMULATE pricing rules and impacts.
  5. EXECUTE pricing calculation across channels.
  6. MONITOR promotion fund consumption with control.

  • Flexible data model for building price attributes. Price attributes can be based on categorized product pricing differentiators, customer groups, and order types.
  • Central place to offer, manage, and calculate pricing. Boost pricing transparency across channels, which is essential for aligning pricing strategies across multiple channels.
  • Manage complex pricing structures with price component breakdowns. When you place an order, the pricing details reflect the pricing structure, so you can understand the pricing calculation sequence and price breakdowns for future in-depth analysis.
  • Establish sophisticated pricing with the pricing simulator to evaluate the impact. When converting from B2B pricing to B2B and B2C pricing, consider discount concurrency, bundle sales, mandatory sales items, and bonus free item pricing rules.
  • Fund control to ensure you avoid margin leakage from fund consumption.
  • Real-time cross-channel pricing execution with the pricing engine to quickly determine pricing while considering a variety of commercial aspects, such as the item's general base price, sales trade agreement prices, long-term discount agreements, short-term promotion discounts, and retrospective rebate calculations for each sales order.
  • External applications can retrieve calculated pricing by leveraging the Commerce Scale Unit (CSU)-based Pricing APIs.

Next steps:

If your organization is on the journey of transitioning to attribute-based omnichannel sales pricing, consider taking the next step with Pricing management in Dynamics 365 Supply Chain Management.

Get an overview of Pricing management by reading the documentation.

If you are a potential customer or partner and want to learn more, see the price attributes overview at https://learn.microsoft.com/en-us/dynamics365/supply-chain/pricing-management/price-attributes-overview or contact the product team directly by email.

Also check out the series of demo videos in the Pricing management Yammer community.

Not yet a Supply Chain Management customer? Take a guided tour


The post Manage attribute-based omnichannel sales pricing  appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Drive brand loyalty with a customizable live chat widget in Dynamics 365 Customer Service 


This article is contributed. See the original author and article here.

Your brand is the face of your business. And often, the live chat widget on your website is the first point of contact for your customers. Having a strong brand for your customer service products can build trust and credibility, differentiate yourself from competitors, ensure consistency in communication, and create a positive emotional connection with customers.  

We are excited to announce our upgraded live chat widget that allows you to customize every detail of the widget to match your brand identity. From the font and color scheme to the iconography, you can now own every pixel of the widget and ensure that it represents your brand in the best possible way. 

[Image: three customized chat widgets, each representing a different branding style]

Style every component of the live chat widget to reflect your brand 

When you update your environment with the latest release, you can use our live chat script tag customization to edit the design of the live chat widget through CSS-based styling. It is easier than ever to create a branded look for your chat widget: you can choose the font, color, style, and size of every component. The image below shows examples of chat widget components and the different ways you can change them.

[Image: editable elements in the default chat button and chat container]

Learn more

Watch a quick video introduction.

To update your chat widget and customize every detail, please check out our public documentation here to learn more. 

For more advanced customization options, try the custom chat widget here, where you can customize the functionalities as well.  

The post Drive brand loyalty with a customizable live chat widget in Dynamics 365 Customer Service  appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

How to create a custom extension for Azure DevOps

This article is contributed. See the original author and article here.

In some cases, you need to create a custom extension for Azure DevOps, whether to add functionality that isn't available natively or to modify existing functionality that doesn't meet your project's needs. In this article, we'll show how to create a custom extension for Azure DevOps and how to publish it to the Azure DevOps Marketplace.

Before you begin, make sure you:

  • Have an Azure DevOps account. If you don't have one yet, you can create one by following the instructions available here.

  • Have a code editor installed, such as Visual Studio Code, which can be downloaded from code.visualstudio.com.

  • Have the LTS version of Node.js installed, available for download at nodejs.org.

  • Have the TypeScript compiler installed, version 4.0.2 or later recommended. It can be installed via npm; see npmjs.com.

  • Have the TFX CLI installed, version 0.14.0 or later recommended. It can be installed globally via npm with the command npm i -g tfx-cli.


Preparing the development environment

  1. Create a folder for the extension, for example, my-extension, and inside it create a subfolder, for example, task.

  2. Open a terminal in the folder you created and run the command npm init -y; the -y parameter accepts all the default options. You'll notice that a file called package.json was created, containing the extension's information.


    {
      "name": "my-extension",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "build": "tsc ./index.ts"
      },
      "keywords": [],
      "author": "",
      "license": "ISC"
    }



  3. Add azure-pipelines-task-lib as a dependency of the extension by running the command npm i azure-pipelines-task-lib --save-dev.

  4. Also add the TypeScript typings by running the commands npm i @types/node --save-dev and npm i @types/q --save-dev.

  5. Create a .gitignore file in the extension's root folder and add the following content:

    node_modules

  6. Install the TypeScript compiler by running the command npm i typescript --save-dev.

  7. Create a tsconfig.json file in the extension's root folder and add the following content:


    {
      "compilerOptions": {
        "target": "es6", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */
        "module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */
        "strict": true, /* Enable all strict type-checking options. */
        "esModuleInterop": true, /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
        "skipLibCheck": true, /* Skip type checking of declaration files. */
        "forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */
      }
    }



  8. Create a file called vss-extension.json in the root folder of the my-extension extension and add the following content:


    {
      "manifestVersion": 1,
      "id": "<>",
      "version": "1.0.0",
      "publisher": "<>",
      "name": "My Extension",
      "description": "My Extension",
      "public": false,
      "categories": [
        "Azure Pipelines"
      ],
      "targets": [
        {
          "id": "Microsoft.VisualStudio.Services"
        }
      ],
      "icons": {
        "default": "images/icon.png"
      },
      "files": [
        {
          "path": "task"
        }
      ],
      "contributions": [
        {
          "id": "my-extension",
          "description": "My Extension",
          "type": "ms.vss-distributed-task.task",
          "targets": [
            "ms.vss-distributed-task.tasks"
          ],
          "properties": {
            "name": "my-extension"
          }
        }
      ]
    }

    Replace the <> with a unique ID for each extension (you can generate an ID here). Replace the <> with the publisher ID created in step 1 of the publishing stage.




  9. In the root folder of your my-extension extension, create a folder called images and add an image called icon.png sized 128×128 pixels. This image will be used as your extension's icon in the Marketplace.




Creating the extension

With the environment set up, you can create the extension.

  1. In the task folder, create a file called task.json and add the following content:


    {
      "$schema": "https://raw.githubusercontent.com/Microsoft/azure-pipelines-task-lib/master/tasks.schema.json",
      "id": "<>",
      "name": "my-extension",
      "friendlyName": "My Extension",
      "description": "My Extension",
      "helpMarkDown": "",
      "category": "Utility",
      "visibility": [
        "Build",
        "Release"
      ],
      "author": "Your Name",
      "version": {
        "Major": 1,
        "Minor": 0,
        "Patch": 0
      },
      "groups": [],
      "inputs": [],
      "execution": {
        "Node16": {
          "target": "index.js"
        }
      }
    }

    Replace the <> with the same GUID generated in step 8 of the environment preparation stage.

    This file describes the task that will run in the pipeline. At this point the task doesn't do anything yet, but you can add inputs and logic to execute anything you need.

  2. Next, create a file called index.ts and add the following content:


    const tl = require('azure-pipelines-task-lib/task');

    async function run() {
        try {
            tl.setResult(tl.TaskResult.Succeeded, 'My Extension Succeeded!');
        }
        catch (err) {
            if (err instanceof Error) {
                tl.setResult(tl.TaskResult.Failed, err.message);
            }
        }
    }

    run();


    This file is responsible for running the task. In this case, it just returns a success message, but you can add logic to execute anything you need.

  3. In the task folder, add an image called icon.png sized 32×32 pixels. This image will be used as your extension's icon in Azure Pipelines.

  4. In the terminal, run the command tsc to compile the TypeScript code to JavaScript. This command generates a file called index.js in the task folder.

  5. To run the task locally, run the command node index.js. You should see the message My Extension Succeeded!.


        C:\temp\my-extension\task> node index.js
    ##vso[task.debug]agent.TempDirectory=undefined
    ##vso[task.debug]agent.workFolder=undefined
    ##vso[task.debug]loading inputs and endpoints
    ##vso[task.debug]loading INPUT_CLEANTARGETFOLDER
    ##vso[task.debug]loading INPUT_CLIENTID
    ##vso[task.debug]loading INPUT_CLIENTSECRET
    ##vso[task.debug]loading INPUT_CONFLICTBEHAVIOUR
    ##vso[task.debug]loading INPUT_CONTENTS
    ##vso[task.debug]loading INPUT_DRIVEID
    ##vso[task.debug]loading INPUT_failOnEmptySource
    ##vso[task.debug]loading INPUT_FLATTENFOLDERS
    ##vso[task.debug]loading INPUT_SOURCEFOLDER
    ##vso[task.debug]loading INPUT_TARGETFOLDER
    ##vso[task.debug]loading INPUT_TENANTID
    ##vso[task.debug]loaded 11
    ##vso[task.debug]Agent.ProxyUrl=undefined
    ##vso[task.debug]Agent.CAInfo=undefined
    ##vso[task.debug]Agent.ClientCert=undefined
    ##vso[task.debug]Agent.SkipCertValidation=undefined
    ##vso[task.debug]task result: Succeeded
    ##vso[task.complete result=Succeeded;]My Extension Succeeded!
    C:\temp\my-extension\task>



Publishing the extension to the Marketplace

When your extension is ready, you can publish it to the Marketplace. To do that, you'll need to create an extension publisher in the Marketplace.

  1. Go to the Marketplace and click Publish Extension. After signing in, you'll be redirected to the page for creating an extension publisher. Fill in the fields and click Create.

    [Image: creating an extension publisher]

  2. In the terminal, run the command tfx extension create --manifest-globs vss-extension.json in the my-extension folder. This command generates a file called publishID-1.0.0.vsix, which is the file that will be published to the Marketplace.

    [Image: CreateExtension]

  3. Go to the extension publishing page in the Marketplace, click New extension, and then Azure DevOps. Select the my-extension-1.0.0.vsix file and click Upload.

    [Image: UploadExtension]

    If everything goes well, you'll see something like the image below.

    [Image: ExtensionPublished]

  4. With the extension published, you need to share it with your organization. To do that, open the extension's context menu and click Share/Unshare.

    [Image: ShareExtension]

    Click + Organization.

    [Image: ShareExtension1]

    Then type the name of your organization; when you click outside the text box, validation runs and the extension is shared.

    [Image: ShareExtension2]




Installing the extension in your organization

After publishing the extension to the Marketplace, you can install it in your organization by following the steps below.

  1. Open the extension's context menu and click View Extension.

    [Image: InstallExtension]

    You'll see something like the image below.

    [Image: InstallExtension1]

  2. Click Get it free.

  3. Check that your organization is selected and click Install.

    [Image: InstallExtension2]

    If the installation goes well, you'll see something like the image below.

    [Image: InstallExtension3]

    After installation, the extension appears in your organization's list of installed extensions and can be used in your pipelines, as shown in the sketch below.




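With the extension installed, a pipeline can reference the task by the name defined in task.json. A minimal sketch (assuming the task name my-extension and major version 1):

    # azure-pipelines.yml
    steps:
      - task: my-extension@1
        displayName: Run My Extension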
Conclusion

Custom extensions for Azure DevOps unlock functionality that isn't available out of the box. In this article, you learned how to create a custom extension and how to publish it to the Marketplace. I hope you enjoyed it and can apply what you've learned in your projects.

References

  1. Create an organization

  2. Extension manifest reference

  3. Build/Release task examples

  4. Package and publish extensions