Gain visibility into your inventory to improve supply chain resiliency

This article is contributed. See the original author and article here.

An accurate view of your inventory is key to many of the decisions you make as a company, but it is increasingly challenging to get timely, correct data drawn from storage locations, sales channels, and a variety of source data systems. Visibility into your inventory is the basis for replenishment decisions, your fulfillment strategy, and even the financial status of the company, and nearly every activity in your supply chain can affect inventory at some point.

One of the goals of inventory management is to maintain a flexible stock level and good turnover ratio, but disruptive situations in the supply chain, coupled with delayed or inaccurate data, make forecasting a nightmare.

Perhaps this is why supply chain professionals overwhelmingly plan to invest in agility and resiliency for their supply chains. A 2021 Gartner study about “responding to a disrupted world” found that 89% want to make their supply chains more agile and 87% want more resiliency.

To address these concerns, Microsoft now offers the Inventory Visibility Add-in as part of Microsoft Dynamics 365 Supply Chain Management.

Solution to inventory pain points

The Inventory Visibility Add-in can help you transform your supply chain by tackling your inventory pain points. Inventory Visibility is a highly scalable microservice that can be enabled as an add-in to Dynamics 365 Supply Chain Management and integrates with data sources from Microsoft or third-party logistics (3PL) providers. It enables real-time global inventory visibility without requiring a full-fledged enterprise resource planning (ERP) implementation.

High-volume retailers and manufacturers can easily handle millions of transactions per minute and accurately determine cross-channel inventory.

Inventory currency enables a faster response

For most businesses, it is essential to make decisions based on current, accurate data, and tracking inventory is especially important. Changes in inventory might signal an understock or overstock situation that demands a fast reaction. Inventory Visibility lets you explore the immediate physical status of inventory, including in-transit, on-hand, ordered, and custom statuses, so the organization can adjust production or sales plans in time.

The following shows an Inventory Visibility dashboard with on-hand inventory as well as the supply, demand, and reserved inventory statuses.

On-hand dashboard showing inventory.

Scaling to support resiliency

One way to increase resiliency in your supply chain is to adopt multiple sales channels and storage locations. The combination of online, call center, and in-store sales channels helps companies maximize sales opportunities. Setting up more storage locations, including ones closer to local markets, can better support shipping and fulfillment, especially when disruptions occur.

But companies also find that those new sales channels and expanded storage locations can have different systems, making it difficult to consolidate data for real-time information about stock level and supply and demand. Having that information is crucial to support business operations. Inventory Visibility was designed for this scenario; it is capable of handling millions of transactions across different channels and geographies in seconds.

Another strategy that supply chain executives pursue to gain resiliency is to diversify their vendor sourcing. We have heard from more customers who would like to have a view into their vendors’ inventory. This can support the sell-through or direct sales scenarios or provide better insight about potentially accessible inventory. Such companies want to integrate with the inventory systems of their vendors, and this use case is also supported by Inventory Visibility.

One of our customers, a beverage giant, uses Inventory Visibility to calculate in real time the consumption of the bill of materials (BOM) for every unit of their beverage sold in every store. This supports better planning and accurate cost calculations. Previously, the customer used a time-consuming, manual process to consolidate data from several thousand stores and across regions. Now, all inventory changes can be reflected in less than a minute, with a batch job that pushes data to Inventory Visibility every minute. We are also helping this customer establish a direct connection between their point-of-sale (POS) systems and Inventory Visibility, which will provide a data sync within seconds.

Next steps

If your organization is on the journey to transition from siloed systems to a unified and transparent inventory platform, consider taking the next step with Inventory Visibility.

The post Gain visibility into your inventory to improve supply chain resiliency appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

How to Get Started with Azure Cosmos DB | NoSQL Database for High Performance & Scalability

This article is contributed. See the original author and article here.

Updates to the managed, limitless scale, NoSQL Azure Cosmos DB database. For your smaller apps, get the best cost performance with the new serverless option for on-demand querying and the Azure Cosmos DB free tier for provisioned throughput. For larger workloads, embed and partition your data, and leverage autoscale for cost optimizations. Estefani Arroyo, Azure Cosmos DB Program Manager, joins host Jeremy Chapman to share updates and benefits of Azure Cosmos DB.


 


QUICK LINKS:


00:44 — How is Azure Cosmos DB different?


02:32 — Scale out architecture


04:33 — Example of new serverless option


06:43 — Free tier and provisioned throughput


07:27 — Run NoSQL workloads at scale


09:58 — Partitioning with partition keys


12:04 — How to identify the cause of throttling


13:42 — Autoscale


15:20 — Wrap up


 


Link References:


Get started with Azure Cosmos DB free https://azure.microsoft.com/services/cosmos-db/


Access free Azure Cosmos DB training on Microsoft Learn at https://aka.ms/learncosmosdb


Set up a free trial at https://aka.ms/trycosmosdb


 


Unfamiliar with Microsoft Mechanics?


We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


Video Transcript:


– Up next, I’m joined by Estefani Arroyo to look at the latest updates to the managed, limitless scale, NoSQL Azure Cosmos DB database and what you can do to get the best cost performance for your smaller apps with the new serverless option for on-demand querying and the Cosmos DB free tier for provisioned throughput. Then for your larger workloads, we’re going to look at how you can gain game changing efficiencies by embedding and partitioning your data as well as leveraging autoscale for cost optimizations. So thanks to Estefani for joining us on Mechanics today.


 


– Hey Jeremy, thanks for having me on. I’m so excited to be here.


 


– Thank you, so before we get into the updates, it’s been a while since we’ve looked at Azure Cosmos DB on Mechanics. So why don’t we start by getting everyone on the same page, especially if you might be used to relational databases. So how does Cosmos DB differ?


 


– Yeah, the biggest thing that separates NoSQL databases like Cosmos DB from traditional relational databases is that there is no enforced schema. This makes them extremely flexible. And this is important because across industries, a good majority of the data generated in day-to-day operations will be semi-structured or unstructured, meaning it doesn’t need to follow a rigid tabular form or schema. It could be metadata or activity data from JSON files generated by IoT devices on the edge, or something like click-stream data from your web apps used to generate personalized, real-time recommendations, or even invoices containing customer information and line items from your payment systems. Really, it can be any number of things. And this type of operational data is often high volume with a high change rate, so a flexible schema is key. That’s where Azure Cosmos DB comes in as a fully managed NoSQL database for applications at any scale. It supports multiple data models to represent your data, from Graph, which models many-to-many relationships within your data, to the popular document model, the columnar format, and the key-value model as well.


 


– Okay, so as a developer or database admin, then how would you be able to interact with the data in these models?


 


– Well, that’s where APIs come in. We have our full-featured Core SQL API, which was designed for Azure Cosmos DB and supports the document model. If you have experience in SQL, this API lets you query data using the familiar SQL syntax. But then we also give you other sets of familiar APIs aligned to data models which let you continue to use your existing language and database skills while still leveraging Azure Cosmos DB.
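For readers who want to try this step outside the video, here is a minimal, hypothetical sketch of querying the Core SQL API with the Python azure-cosmos SDK; the account endpoint, key, database, and container names are placeholders rather than values from the demo.

```python
# Minimal sketch (not from the demo): querying the Core SQL API with familiar
# SQL syntax via the Python azure-cosmos SDK. Endpoint, key, and names below
# are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("store").get_container_client("customers")

# Query JSON documents with SQL syntax; no schema is enforced on the items.
items = container.query_items(
    query="SELECT c.id, c.email FROM c WHERE c.email = @email",
    parameters=[{"name": "@email", "value": "alice@example.com"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```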


 


– So it’s easy then to use your preferred model and also environment to interact with Cosmos DB. And you also mentioned that data volumes can get pretty dynamic and large for this type of operational data. So how does Cosmos DB then handle that?


 


– Well, Cosmos DB has a scale-out architecture. Compared to traditional relational databases that might reside entirely on a single VM, requiring you to scale up and down with vCores and RAM, Cosmos DB scales in and out by adding and removing nodes, or, as we call them, physical partitions, which comprise the data you store and the compute you provision. We also have built-in geo-redundancy with data replication into any Azure region. In fact, Cosmos DB has an SLA-backed five nines of availability.


 


– So is Cosmos DB then better suited for larger or smaller apps?


 


– Well, Cosmos DB works well for any size application. There is no massive scale requirement. And with Cosmos DB, you can start very small, and as operations start to pick up later, you can scale out to billions of transactions spread across the globe. And to make Cosmos DB even more accessible for smaller apps, you can now take advantage of serverless for on-demand querying. This makes it viable to run even your smaller apps for less than a dollar a month and only pay by operation.


 


– And that’s a really good point, because some people might think that Cosmos DB operations can get pretty expensive and that the service actually caters towards larger data operations.


 


– Yeah, exactly. And that’s something we want to emphasize is not the case. When it comes to cost, one really important concept to understand is the Request Unit, or RU. Think of this like the logical currency for Cosmos DB. It represents the CPU, IOPS, and memory required to perform database operations as a single unit. In our provisioned model, database throughput is the number of RUs provided to you per second. Even though we measure throughput by the second, it’s important to remember this is billed by the hour. And for the serverless model, you only pay for the number of request units consumed by database operations over a one-month period. So in either case, you really want to choose the right provisioning model to minimize costs.
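As a practical aside (not part of the interview), one common way to see the RU charge of an individual operation from code is to read the x-ms-request-charge response header. The sketch below assumes the Python azure-cosmos SDK with placeholder account details; the last_response_headers pattern is the commonly documented route to that header, but verify it against your SDK version.

```python
# Hedged sketch: read the request charge (in RUs) of a single point read.
# Endpoint, key, item id, and names are placeholders, not demo values.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("store").get_container_client("customers")

container.read_item(item="cust-001", partition_key="cust-001")
charge = container.client_connection.last_response_headers.get("x-ms-request-charge")
print(f"This read cost {charge} RUs")
```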


 


– Right, so can you walk us through an example?


 


– Sure. Let me show you an example, using our new inexpensive serverless option. Here I have our website open for Adventure Works, an e-commerce app that sells items like outdoor clothing, gear, and accessories. As a customer, you can log into your profile to get your order history, add items to your wishlist for your next outdoor adventure, and make sure your details are up to date. And as you browse through the products, we’re pulling data from our product databases to see the details. So if I make an order, we’re performing data operations to reserve inventory and write order information to our shopping cart and all the other subsequent steps in the ordering system. So all of these transactions are data operations, reads and writes to various data services. And bringing this back to our concept of RUs, each operation is also consuming request units. So here in our Data Explorer in Azure, let’s take a look at what some of these operations would consume. First, let’s take a look at the customer profile retrieval we had at login. If I manually run the query using an email address, you’ll see it’s consuming about 2.8 RUs. To simulate browsing the product inventory, I can run this query and it consumes around 3.6 RUs. Then as you place an order, this operation takes around 10.3 RUs. So if you add everything up, that’s 16.7 RUs on the low end, but let’s round up to 20 RUs to be safe. And if I receive an order per minute, then my throughput is naturally 20 RUs per minute. So doing the math, with a total number of minutes per month, which is about 43,000, that’s 864,000 RUs. So even though that sounds like quite a bit, it turns out to be less than 28 cents, which is the cost for one million RUs. And based on our estimated usage of 864,000, that works out to be 24 cents per month, which is pretty neat. And you can even see that here, when we view our estimated costs in the portal.
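To make the arithmetic above easy to reproduce, here is the same back-of-the-envelope estimate as a few lines of Python; the roughly $0.28 per million RUs figure is the serverless price quoted in the video and should be checked against current regional pricing.

```python
# Back-of-the-envelope serverless cost estimate from the example above.
per_order_rus = 2.8 + 3.6 + 10.3       # login + browse + place order, about 16.7 RUs
per_order_rus = 20                      # round up "to be safe", as in the demo
orders_per_minute = 1
minutes_per_month = 60 * 24 * 30        # about 43,200

monthly_rus = per_order_rus * orders_per_minute * minutes_per_month   # 864,000
price_per_million_rus = 0.28            # USD, figure quoted in the video
monthly_cost = monthly_rus / 1_000_000 * price_per_million_rus        # about $0.24

print(f"{monthly_rus:,.0f} RUs/month, roughly ${monthly_cost:.2f}")
```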


 


– Right and as you said, this can scale all the way down to zero, so you might have hours or days without any activity or consuming any RUs, or maybe you’re not getting 43,000 orders in every month and this could be even less. So what if you want to have provisioned capacity so that the service is always warm and ready to go?


 


– So in that case, that’s where you want to use provisioned throughput, which also lets you take advantage of increased scalability and multi-region support. Starting small, we have a free tier for Cosmos DB, where we recently doubled the number of RUs. You can now get up to 1,000 request units per second and 25 gigs of storage, and it comes with the same SLAs, which is also great for testing scenarios. If you think about our customer example with transaction and invoice operations, which use around 20 RUs, in that case you can do about 50 of these operations per second for free.


 


– And at some point you might actually outgrow a thousand RUs per second on provisioned throughput with the free tier. So why don’t we dig into what you can do to run your NoSQL workloads at scale while keeping all of it as efficient as possible?


 


– The key to efficiency is getting the data model right. I’ll walk through a few examples using the SQL API document model, which is the most popular. And to get this right from the beginning, you’ll want to look at the read and write operations you want Cosmos DB to perform as you structure your model. So for example, if you have a read-heavy application, embedding or de-normalizing helps ensure results come in fast without using JOINs. And by combining related entities into a single document, we reduce the total number of documents in the database. While this increases the size of the document, read results can be returned a lot faster. On the other hand, if you have a write-heavy application, it’s better to use referencing, or normalizing, like you would for a relational database. By creating unique entities that reference one another, you can break up write requests and break down large documents to speed up your write operations. This leads to more small documents and duplication, but will increase performance when doing a large number of writes and updates. So we want to use a flexible schema to minimize the RUs. Let me show you how this would work with a practical example using our customer record retrieval from before. First, if we look at the JSON document here, you’ll see that we have the customer address and password information alongside other customer details, all in a single record. If we compare this to how you would usually structure this information in a typical relational database model using multiple tables or databases, you’d probably have one for the customer by email address, password by customer ID, and address by customer ID.
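To make the two modeling styles concrete, here is a small illustrative sketch; the field names and values are invented for the example rather than taken from the Adventure Works demo.

```python
# Embedded (denormalized): one document answers the whole profile lookup in a
# single read, which suits read-heavy workloads.
embedded_customer = {
    "id": "cust-001",
    "customerId": "cust-001",
    "email": "alice@example.com",
    "passwordHash": "<hash>",
    "address": {"street": "1 Main St", "city": "Redmond", "zip": "98052"},
}

# Referenced (normalized): smaller documents that point at each other, which
# suits write-heavy workloads at the cost of extra reads to reassemble the data.
customer = {"id": "cust-001", "email": "alice@example.com"}
password = {"id": "pwd-cust-001", "customerId": "cust-001", "passwordHash": "<hash>"}
address = {"id": "addr-cust-001", "customerId": "cust-001",
           "street": "1 Main St", "city": "Redmond", "zip": "98052"}
```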


 


– That sounds like it’d be a pretty typical or common schema and how you might build out table structures, maybe in a SQL instance, for example.


 


– That’s right, but to query each of these items and then assemble everything together, these would all be separate requests, each with its own cost in terms of compute. For example, here I’m going to run the customer-by-email query, and you’ll see it’s consuming about 2.8 RUs. Next, I’ll query the password for that customer ID, and it’s another 2.8 RUs. Then I’ll query the address for the customer, and that operation is again another 2.8 RUs. So we’re almost at nine RUs, which might not sound like a lot, but it all adds up as operations start to scale up. And if you’ll recall from the previous query, we were able to get all three pieces of information with just 2.8 RUs in a single query, which is a third of the RU consumption versus pulling these individually.


 


– You also mentioned that in parallel, you need to look at partitioning, especially as your operations really start to grow in scale and across regions. So what can we do there to keep the RU and throughput as low as possible without impacting performance?


 


– Yeah, partitioning is super important. And in Cosmos DB, partitions are used to distribute both your data and your database operations evenly. To achieve this, we use a partition key, which is a required property in your document for routing data to the right partition. Choosing the best partition key is really important. It should be chosen based on data access patterns. You’ll want to plan for this up front to avoid poor performance and to make sure you’re keeping costs low. You can think of physical partitions like buckets where portions of your data sit. These use a partition key, which works like a placement hint to determine where to write new data and where to read data when you query it. The right partition key distributes both operations and the data evenly across all partitions. Now let me show you an example. So what I’ve set up here is two containers for our customer data. All their profile and transaction data will be stored in these containers. Both containers are configured for 1,500 RU per second provisioned throughput. And this throughput is evenly distributed across all partitions in the container. And I’ve set up a client application to simulate this workload. It’s now generating 30 sales order documents a second, and inserting these into both of the containers. Let’s have a look at some of the metrics that are available for these containers. Here, I’ve set up a workbook which allows me to compare the metrics for both of these containers side by side. Both containers are getting about the same number of requests over time. And both containers have high utilization. If you look at the container on the right, you’ll see that it’s being throttled. This happens when RU utilization exceeds 100% in any one second. In those cases, the operations get retried the next second, which also adds to the number of total requests as you’ll see in our total request count. That said, the container on the left is running fine, even though it’s running the same operations with the same provisioned throughput.
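As an aside, the two containers in this demo differ only in their partition key. Below is a hedged sketch of how that choice might be expressed with the Python azure-cosmos SDK; all account details, container names, and the partition key paths are placeholders chosen for illustration.

```python
# Hedged sketch: the same 1,500 RU/s container created with a good and a bad
# partition key. Endpoint, key, and names are placeholders, not demo values.
from azure.cosmos import CosmosClient, PartitionKey

database = CosmosClient(
    "https://<account>.documents.azure.com:443/", credential="<key>"
).get_database_client("store")

# Good: customer ID spreads orders from many customers across all partitions.
good_container = database.create_container_if_not_exists(
    id="orders-by-customer",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=1500,
)

# Bad: every order placed on the same date lands in one partition, so the
# other partitions' share of the 1,500 RU/s sits idle and requests throttle.
bad_container = database.create_container_if_not_exists(
    id="orders-by-date",
    partition_key=PartitionKey(path="/orderDate"),
    offer_throughput=1500,
)
```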


 


– So how do you then identify the cause of throttling that happened in the bad container?


 


– Yeah, that’s a great question. We can investigate this by drilling into the throughput metrics. You’ll see the total request units consumed over time appear very similar. However, when we scroll down a little to look at how these requests and associated request units are being distributed across the partitions, we’ll see some very distinct differences. We have 10 partitions with 1,500 RUs per second provisioned, so each partition gets an equal 150 RUs per second. In our good container on the left, we’re using all of the partitions of the container to service our workload, and in our bad container on the right we’re only using one. And by looking at the normalized RU consumption graph below, we can see this more clearly. So let’s find out what’s behind that. I’ll go back and look at those two containers in more detail, starting with the good container. In the items view, we can see that the partition key we’ve chosen for our good container is customer ID. Assuming that sales orders are coming from a large number of different customers, they will be distributed across a large number of partitions. So now let’s do the same thing, but for the bad container. We can see its partition key is order date. And if we look at the property values for order date, we can see it’s the date the transaction occurred. This means every transaction that occurs on the same day lands in the same single partition, so only one of the 10 partitions is serving requests and 90% of the available throughput is being wasted.


 


– So that really proves just how important it is to plan your partition strategy upfront, based on your data access patterns.


 


– Exactly. If you do that work upfront and you’re using Cosmos DB, this is a game changer for running your applications efficiently. And when you combine this with autoscale, it gets even better. So let’s take our same example, except now simulating around seven times the number of transactions, or 200 per second, during the peak times. Even our well-partitioned container is throttled here because it doesn’t have sufficient provisioned throughput. It needs 15,000 RUs per second in this case. Now, I could go and increase the manual provisioned throughput to that level to cope with the peaks. But if I did that, it would be billed the same for non-peak hours as well. And unless I manually adjust the scale every hour, I’d be stuck paying for unused capacity. And that’s where autoscale comes in. The way autoscale works is that for each hour, the scale is automatically set to the peak throughput value for that hour. And here you can see that each hourly peak matches the provisioned throughput. So let’s autoscale the provisioned throughput. I just need to hit this radio button here and save. And now if we go back to our Insights blade, change the time range, and then select a database and container, you’ll see that we are no longer throttled. And our normalized RU consumption is only part of what has been provisioned. This is good, because we will only be billed for the maximum throughput we consume, or a 10% minimum of the configured max RU per second value. Also, autoscale throughput is available instantaneously without interruptions to your app.
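For completeness, autoscale can also be configured from code rather than the portal radio button shown in the demo. The sketch below assumes a recent Python azure-cosmos SDK that exposes ThroughputProperties; the account details and container name are placeholders, and the 15,000 RU/s ceiling simply mirrors the peak from the example.

```python
# Hedged sketch: create a container with autoscale throughput instead of a
# fixed manual value. Requires a recent azure-cosmos SDK; names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

database = CosmosClient(
    "https://<account>.documents.azure.com:443/", credential="<key>"
).get_database_client("store")

container = database.create_container_if_not_exists(
    id="orders-by-customer",
    partition_key=PartitionKey(path="/customerId"),
    # Scales between 10% of the maximum (1,500 RU/s) and 15,000 RU/s; billing
    # follows the highest throughput each hour actually scaled to.
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=15000),
)
```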


 


– And as you showed, autoscaling is a great way to adjust request units once you’ve been able to observe utilization patterns, and it can also help you out if you’re just starting out from scratch and trying to figure out your workload needs. So for anyone who’s watching, who wants to try out Cosmos DB, what do you recommend they do?


 


– We have an entire set of training that anyone can access for free on Microsoft Learn at aka.ms/learncosmosdb, and you can set up a free trial at aka.ms/trycosmosdb.


 


– Thanks so much for joining us today, Estefani. And of course, keep watching Microsoft Mechanics for the latest in tech updates. Subscribe to our channel if you haven’t already, and thank you for watching.



 

FSLogix release 2201 Public Preview

This article is contributed. See the original author and article here.

Hello FSLogix nation!


 


Microsoft is pleased to announce the availability of the FSLogix Preview release 2201.


 


This is a PUBLIC PREVIEW release.


 


To get access, please complete the following form: Public Preview request form


 


After submitting the form, a download link will be provided.


 


Changes:


 



  • Fixed issue where the FSLogix Profile Service would crash if it was unable to communicate with the FSLogix Cloud Cache Service.

  • The Office file cache is machine specific and cannot roam between session hosts, so it is now excluded from FSLogix containers.

  • FSLogix Search Indexing is now only available in versions of Windows Server that do not provide per-user search indexing. Per-user search indexes were introduced in Windows Server 2019 (version 1809). FSLogix search indexing is not available in Windows 10 or Windows 11.

  • FSLogix now correctly handles cases where the Windows Profile SVC refCount registry value is set to an unexpected value.

  • Over 30 accessibility related updates have been made to the FSLogix installer and App Rules Editor.

  • A Windows event now records when a machine locks a container disk with a message that looks like “This machine ‘[HOSTNAME]’ is using [USERNAME]’s (SID=[USER SID]) profile disk. VHD(x): [FILENAME].”

  • Resolved an issue where the DeleteLocalProfileWhenVHDShouldApply registry setting was ignored in some cases.

  • Fixed an issue where active user session settings were not retained if the FSLogix service was restarted. This was causing some logoffs to fail.

  • FSLogix will no longer attempt to reattach a container disk when the user session is locked.

  • Fixed an issue that caused the FSLogix service to crash when reattaching container disks.

  • Fixed a Cloud Cache issue that caused IO failures if the session host’s storage block size was smaller than a cloud provider’s. For optimal performance, we recommend that the session host disk hosting the CCD proxy directory has a physical block size greater than or equal to that of the CCD storage provider with the largest block size.

  • Fixed a Cloud Cache issue where a timed-out read request (network outage, storage outage, etc.) was not handled properly and would eventually fail.

  • Reduced the chance for a Cloud Cache container disk corruption if a provider is experiencing connection issues.

  • Resolved an issue where temporary rule files were not deleted if rule compilation failed.

  • Previously, the Application Masking rules folder was only created for the user who ran the installer. With this update, the rules folder is created when the Rules Editor is launched.

  • Resolved an interoperability issue with large OneDrive file downloads that was causing some operations to fail.

  • Fixed an issue where per-user and per-group settings did not apply if the Profile or ODFC container was not enabled for all users.

  • Resolved an issue where the Office container session configuration was not cleaned up if a profile fails to load.

  • Fixed an issue where HKCU App Masking rules leveraging wildcards would fail to apply.

  • Fixed an issue where FSLogix did not properly handle logoff events if Profile or ODFC containers were disabled during the session or per-user/per-group filters were applied mid-session that excluded the user from the feature. Now FSLogix logoff related events will always occur if FSLogix loaded a container for the user.

  • Fixed an issue that caused some sessions configured with an ODFC container to fail to login.

  • Resolved an issue where the App Rules editor would crash if there were no assignments configured.

2022 release wave 1 plans for Dynamics 365 and Power Platform now available

This article is contributed. See the original author and article here.

Today, we published the 2022 release wave 1 plans for Microsoft Dynamics 365 and Microsoft Power Platform, a compilation of new capabilities that are planned to be released between April 2022 and September 2022. This first release wave of the year offers hundreds of new features and enhancements, demonstrating our continued investment to power digital transformation for our customers and partners.

Highlights from Dynamics 365

  • Dynamics 365 Marketing continues to invest in collaboration by enabling collaborative content creation with Microsoft Teams. Marketers can use content fragments and themes to improve authoring efficiency. Investments in Data and AI enable marketers to also author content with advanced personalization using codeless experiences. Every customer interaction matters, and in this release, we are enabling our customers to continue the conversation with their customers by responding to SMS replies through a personalized experience based on responses using custom keywords that can be added to journeys.
  • Dynamics 365 Sales is putting data to work and enabling seamless collaboration to empower sales professionals to be more productive and deliver value in every customer interaction. Business data is now ambient and actionable from within Microsoft 365 interfaces, enabling sellers to quickly establish context and act on data. Using a single workspace in the Sales Hub, sellers can adjust their sales pitch using AI-guided live feedback and suggestions, and managers can track team performance and provide valuable coaching to help boost customer satisfaction.
  • Dynamics 365 Customer Service continues to invest in delivering capabilities that ensure personalized service across channels, empower agents, and make collaboration easier in an ever-increasing remote world. With the new Customer Service Admin center app, we’re simplifying the setup with guided, task-based experiences making it easier to get up and running quickly. Enhancements to the inbox view allow agents to rapidly work through issues across channels while maintaining a focus on the customer. Investments in collaboration with Microsoft Teams include data integration, AI-suggested contacts, and AI-generated conversation summaries. Lastly, investments in knowledge management include relevance search integration and historical analytics, and unified routing with default queue enhancements and routing diagnostics.
  • Dynamics 365 Field Service brings innovative enhancements and usability improvements to the schedule board. The new schedule board is now at functional parity with the previous version, and we are enhancing the user experience of hourly and multi-day views to improve dispatcher productivity. Additionally, the Field Service mobile application includes enhancements to boost technician productivity and is now fully supported on Windows devices. 
  • Dynamics 365 Finance is launching the general availability of subscription billing to ensure organizations can thrive in a service-based economy. We are enabling our customers to maximize financial visibility and profitability by bringing intelligent automation around vendor invoicing, financial close through ledger settlements, and year-end close services. In addition, we are releasing to market the preview of Tax Audit and Reporting Service. Lastly, we continue to enhance our globalization offerings in Globalization Studio such as Tax Calculation and Electronic Invoicing. With Globalization Studio, these low-code globalization services and their multi-country content will be available to any first and third-party app and extended with prebuilt ISV connectors.
  • Dynamics 365 Supply Chain Management investments continue to focus on driving agility and resilience in the supply chain. Enhanced warehouse and manufacturing execution workloads enable businesses to scale mission-critical operations using cloud and edge scale units. Planning Optimization brings new manufacturing scenarios and planning strategies to help businesses, and manufacturers, shorten planning cycles, reduce inventory levels, and improve customer service. The new global inventory accounting functionality allows inventory accounting in multiple representations to simplify operations for businesses working in multiple currencies or facing different local and global accounting standards.
  • Dynamics 365 Intelligent Order Management brings an expanded set of out-of-the-box provider integrations, enabling rapid deployment and connectivity to an ecosystem of solutions in the order capture, logistics, fulfillment, and delivery process flows. Combined with the rich ecosystem of providers, customers will have the ability to achieve advanced order orchestration using the new expanded set of features and optimizations supported in inventory orchestration, order actions, and fulfillment. This release wave brings a brand-new Returns and Exchange management service directly integrated into e-commerce solutions. This service enables customers to orchestrate journeys that minimize operational costs related to getting merchandise back on shelves and drive clear communication with their consumers.
  • Dynamics 365 Project Operations is investing in enabling capabilities ranging from onboarding, estimating, and using resources from external talent pools helping to boost efficiencies in project planning and delivery. Customers will also be able to upgrade from Project Service Automation to Project Operations using an in-place upgrade experience. In addition, customers can bring their own project management tools through a generic API where task scheduling can happen in the project management tool of choice and then integrate to Project Operations, becoming available to users in a read-only manner. Resource scheduling and booking would remain in Project Operations.
  • Dynamics 365 Guides continues to invest in capabilities that improve the collaboration experiences for authors and operators on HoloLens 2. The application will also be updated to support guest access so that customers can share their guides with users outside of their organization.
  • Dynamics 365 Remote Assist is investing in B2B service scenarios by bringing one-time calling to general availability and supporting additional calling policies for external users. Additionally, we are updating the Remote Assist mobile app to support improved collaboration through the ability to share screens across iOS and Android.
  • Dynamics 365 Human Resources will equip HR professionals with the ability to tailor experiences and automatically complete processes when employees are joining, leaving, and moving within an organization. We will also provide intelligent talent management capabilities to enable companies to understand the gap between the skills needed for the organization and employees to be successful, and the skillset held by the organization’s current workforce. By providing this intelligent talent management capability, Dynamics 365 Human Resources enables companies to ensure the right people are in the right jobs, but also plan for the future.
  • Dynamics 365 Commerce continues to invest in key B2B commerce scenarios, including sales agreements, on-behalf-of ordering, and partner-specific product catalogs and pricing. This release also introduces customer segmentation and targeting with Dynamics 365 Customer Insights, and out-of-box A/B experimentation and analytics tools. The new Store Commerce app streamlines point of sale deployment and servicing while improving performance. New workflows in headquarters, bulk image upload, and manifest-driven upload simplify the management of media assets across channels. Lastly, customer service functionality is easily enabled on your e-commerce site with Power Virtual Agents and Omnichannel for Customer Service.
  • Dynamics 365 Fraud Protection is delivering multiple new capabilities in this release. Operators of Payment Service Providers will be able to offer fraud protection as a service to their businesses, including those that have multi hierarchy business structure. Deep search capabilities enhancing analytics and policy settings have now been enabled as well as integrated case management for purchase protection. In addition, Fraud Protection now offers support for native mobile applications as well as businesses building their offering on top of Power App portals. Finally, customers will be able to choose to have Dynamics 365 Fraud Protection provisioned within Canada if they have specific data residency needs or latency requirements that would require it.
  • Dynamics 365 Business Central continues to simplify the customer onboarding experience by offering a modern Help Pane that gives users guidance and learning content where they need it most, in the context of their work. The Help Pane flattens the learning curve, increasing productivity and business process adoption. Customers using Microsoft Power Platform can use the new capabilities of our connectors. In this release, we are making it easier to trigger a Power Automate flow directly from Business Central pages, which can save time by automating business processes. Collaborating on Business Central data in Microsoft Teams is smoother because we’ve removed the licensing friction.
  • Dynamics 365 Customer Insights, Microsoft’s customer data platform, expands the footprint of consent enablement features across more areas within Customer Insights. It enables customers to integrate and harmonize consent data from multiple consent systems and data sources. This will ensure that consent permissions and preferences of your customers are honored during real-time personalization scenarios in Customer Insights. New data enrichment capabilities will enable customers to leverage our safe data collaboration capability to share and enrich their customer data. Safe data collaboration puts you in control of your data with privacy-enabled workflows to join and enrich your data with other datasets.

Highlights from Microsoft Power Platform

  • Power BI continues to invest in empowering every individual, team, and organization to drive a data culture. For individuals, we are improving the create experience through the addition of measures using natural language and allowing users to collaborate via OneDrive. For teams, we are bringing enhancements to Goals focused on enterprise needs, integration with PowerPoint, and adding new capabilities to the Power BI integration in Teams. To empower the organization, we are improving our experience with big data through automatic aggregations, data protection capabilities via data loss prevention (DLP) policies, and providing improved visibility into user activity to admins.
  • Power Apps maintains focus on enabling developers of all skill levels to build enterprise-class apps infused with intelligence and collaboration capabilities. Makers will be able to collaborate on the same app to simultaneously work and merge changes to accelerate development and track collaboration. Makers and developers of all skill levels will be more productive with Dataverse, leveraging intelligence to assist development with natural language to code, powered by advanced AI models such as GPT-3 and PROSE. Most importantly, we are including key updates to ensure organizations can deliver flagship apps across the entire company faster and safer than ever. These include allowing packaging of apps to be deployed on Android and iOS and improvements to ALM and governance capabilities to ensure safe and scalable rollouts.
  • Power Apps portals continue to invest in bringing more out-of-the-box capabilities to support both low-code/no-code development and professional developers. Some of the salient capabilities for makers include converting portals into cross-platform mobile applications by enabling them as progressive web apps, an option to use global search powered by Dataverse search integration, and enhancements for professional developers to do more with portals using the Microsoft Power Platform PAC CLI tool.
  • Power Automate is more accessible than ever, which makes it easier to get started automating tasks no matter where you are in Microsoft 365. We have seen customers of all sizes increase the scale of their robotic process automation (RPA) deployments; therefore, we are adding features to make it easier to manage machines in Azure and the credentials of users and accounts. Finally, all the features we are building are increasingly automatable by default, adhering to the API-first approach, so that IT departments can manage their Power Automate infrastructure in whatever way they want.
  • Power Virtual Agents brings improvements in the authoring experience with commenting, Power Apps portals integration, data loss prevention options, proactive bot update messaging in Microsoft Teams, and more.
  • AI Builder is adding capabilities around document automation, in particular the ability to process unstructured documents, such as contracts or e-mails. By extracting insights from the semantic understanding of the text in unstructured documents, customers will be able to extract key information from documents and process them automatically in an end-to-end flow using Power Automate. We are also focusing on building out a feedback loop process, enabling improvement of model accuracy by retraining models with data processed in production. Lastly, we are adding capabilities to effectively manage the governance and lifecycle of AI models.

For a complete list of new capabilities, please check out the Dynamics 365 and Microsoft Power Platform 2022 release wave 1 plans.

Early access period

Starting January 31, 2022, customers and partners will be able to validate the latest features in a non-production environment. These features include user experience enhancements that will be automatically enabled for users in production environments during April 2022. Take advantage of the early access period, try out the latest updates in a non-production environment, and get ready to roll out updates to your users with confidence. To see the early access features, check out the Dynamics 365 and Microsoft Power Platform pages. For questions, please visit the Early Access FAQ page.

We’ve done this work to help you, our partners, customers, and users, drive the digital transformation of your business on your terms. Get ready and learn more about the latest product updates and plans, and share your feedback in the community forum for Dynamics 365 or Microsoft Power Platform.

The post 2022 release wave 1 plans for Dynamics 365 and Power Platform now available appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Online retailer, Fashion Nova, gets a dressing down for hiding negative reviews

This article was originally posted by the FTC. See the original article here.

Shopping for clothes online can be fun and convenient, but it lacks the in-person experience of trying them on, touching the fabrics, and checking for quality. That’s why so many online shoppers turn to honest customer reviews for help. But when an online retailer cherry picks only the positive reviews for posting, the result is anything but honest.

If a company suggests that the reviews on its website reflect the views of all buyers who submitted reviews, it’s against the law for the company to NOT post negative reviews. According to the FTC, online retailer Fashion Nova did just that. The FTC says that Fashion Nova broke the law when it failed to post hundreds of thousands of negative reviews that people submitted.

What does this mean for you and other online fashionistas? Well, for one thing, Fashion Nova must not make any further misrepresentations about customer reviews or other endorsements.

Here are some things to consider the next time you’re using online reviews to buy clothing or anything else:

  • Think about the source of the reviews you’re reading. What do you know about the reviewers or the site they’ve posted on that makes them trustworthy?
  • Compare reviews from a variety of well-known sources, not just the seller’s site.
  • Start with websites recognized for having credible and impartial expert reviews.

For more information, see Online Shopping and How To Evaluate Online Reviews.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.