This article is contributed. See the original author and article here.
The COVID-19 pandemic has certainly served as a wake-up call for many businesses. Not only has it highlighted the need for better insights across the entire operational experience, but this global stressor also exposed vulnerabilities in business models and workforce strategies.
The pandemic also brought to light the importance of agility and adaptability in the face of change. In order to survive and thrive in the new normal, businesses must be able to pivot quickly and effectively to meet the ever-changing needs of their customers. Enterprises are increasingly focusing on service-centric business models that provide recurring revenue streams.
To better understand these trends, Microsoft commissioned Forrester Consulting to investigate where companies are succeeding, struggling, and investing in their quest to move toward service-centric business models and project-based operations. In the study, Foster Business Model Innovation To Achieve Growth Goals, Forrester Consulting explored how this trend impacts the global business landscape and associated business teams.
Business model innovation
Learn how to achieve growth goals with business model innovation.
In March and April 2022, Forrester conducted an online survey of 509 global financial leaders who have decision-making influence within service and project teams or businesses. Forrester also performed two qualitative interviews with global practice and financial leaders. The research participants were asked about project-centric business tools. Companies ranged in size from 500 to more than 20,000 employees.
Forrester Consulting’s study highlighted three areas of need that organizations are struggling with.
Societal and business trends
With more employees working from home, the line between personal and professional life has blurred. This change compelled organizations to focus more on employee well-being, inclusion, and connectivity. Organizations are pushing for greater connectivity and convergence to foster an always-on culture.
These trends have pushed more than 75 percent of respondents to invest in the cloud, infrastructure as a service, software as a service, and the Internet of Things. By investing in these areas, organizations can provide their employees with the resources they need to succeed both at work and at home.
Multiple roadblocks
Macroeconomic trends, such as supply chain issues and staffing shortages, prevent organizations from adopting service-centric business models. About 42 percent of respondents reported struggling to keep up with demand due to external supply chain disruptions. Companies are also facing internal challenges, such as a lack of clarity and connectivity, which are preventing them from optimizing project delivery.
Unlocking new revenue streams
Revenue recognition is the main driver for business model transformation, with businesses looking to refine their product offerings as well as their pricing. About 40 percent of respondents are working to evolve their financial models, including how they charge customers and structure their entities.
This insight corresponds with respondents' focus on improving strategy and planning. Improving team collaboration and connectivity also ranks highly, with organizations investing in this area. Finally, workforce well-being is a key concern, with organizations investing to improve the well-being of their employees.
Key takeaways to embrace service-centric business models
Let’s briefly touch on a few key takeaways you will find in the full study that can help companies address these challenges to embrace service-centric business models.
Beyond financial monetization
Forrester recommends that organizations take a comprehensive approach to business model innovation. Ideal innovation offers better insight into operational processes. It also gives employees modern workplace tools and environments, resulting in a holistic approach to critical process improvements.
Profit visibility
By investing in technologies that enable connectivity, organizations can improve communication and collaboration across their businesses. Forrester highlighted the importance of understanding how businesses can embrace service-centric business models. Companies improve profit visibility by focusing on customer outcomes through end-to-end project tracking. The right tools are necessary for organizations to fully understand their profit drivers. Microsoft Dynamics 365 Finance enables businesses to maximize financial visibility and profitability.
Finding the right partner
The study indicates that the right partners are essential for successful platform innovation and improved project-based operations. The right partner will have a deep understanding of the project requirements and the ability to effectively communicate and collaborate with the project team. Furthermore, technology investments and performance metrics should be aligned with business objectives.
Next steps
In today’s business environment, it is critical for businesses to be able to implement service-oriented business models. To accomplish this, companies need visibility and connectivity into all aspects of their operations, including their projects, processes, and data. Due to siloed systems and data, many businesses lack this visibility and connectivity.
Businesses that can optimize their systems and implement service-oriented business models will be better positioned to succeed. Organizations can use tools such as Dynamics 365 Finance and Dynamics 365 Project Operations to support their progress toward project-based operational models.
This article is contributed. See the original author and article here.
As we prepare the roadmap for Host Integration Server, our mainframe and midrange integration platform, and its Azure Logic Apps connectors, the Azure Integration Services product group is interested in learning how we can support your efforts to modernize mainframe and midrange workloads to the Azure cloud. The survey is available at the following link: https://aka.ms/hostintegrationpartners.
This article is contributed. See the original author and article here.
AKS Web Application Routing with Open Service Mesh
The AKS product team announced a public preview of Web Application Routing this year. One of the benefits of this add-on is how simply it adds an entry point for applications in your cluster through a managed ingress controller. It also works nicely with Open Service Mesh (OSM). In this blog, we investigate how this integration works and how to set up mTLS from the ingress controller to OSM. While we use the AKS managed add-on for routing, we take the open-source OSM approach for this walkthrough, but it’s important to remember that AKS also has an add-on for OSM.
The reference link above focuses on the step-by-step process to implement Web Application Routing along with a few other add-ons such as OSM and the Azure Key Vault secrets provider. The intention of this blog is not to repeat the same instructions but to dig into a few important aspects of OSM and illustrate the connectivity from this managed ingress add-on to OSM. Enterprises prefer to leverage managed services and add-ons, but there is also a vested interest in understanding the foundational building blocks of the open-source technologies used and how they are glued together to implement certain functionality. This blog attempts to provide some insight into how these two (OSM and Web Application Routing) work together without drilling too deeply into OSM itself, as it’s documented well at openservicemesh.io.
First step is creating a new cluster:
az aks create -g webapprg -n webappaks -l centralus --enable-addons web_application_routing --generate-ssh-keys
This creates a cluster along with ingress controller installed. You can check this in ingressProfile of the cluster.
The ingress controller is deployed in a namespace called app-routing-system, and its image is pulled from the Microsoft Container Registry (MCR), not other public registries. Since this creates an ingress controller, a public IP is created, attached to the Azure Load Balancer, and used by the ingress controller. You might want to change the ‘Inbound security rules’ in the agent pool NSG from the default (Internet) to your own IP address for protection.
This managed add-on creates an ingress controller with the ingress class ‘webapprouting.kubernetes.azure.com’, so any Ingress definition should use this ingress class.
You can see that the Nginx deployment is running with an HPA config. This is a reverse proxy: it sits in the data path, consumes CPU and memory, and performs lots of network I/O, so it makes perfect sense to set an HPA. In other words, this is where all traffic enters the cluster and traverses through to the application pods; some refer to this as north-south traffic into the cluster. It’s important to emphasize that, in my experience, there have been several instances where customers used OSS Nginx without the right configuration for this deployment and ran into unpredictable failures while moving into production. Obviously, this wouldn’t show up in functional testing! So, use this managed add-on, where AKS manages and maintains it with appropriate configuration. You don’t need to, and shouldn’t, change anything in the app-routing-system namespace. As stated above, we are taking an under-the-hood approach to understand the implementation, not to change anything here.
In this diagram, the app container is a small circle and the sidecar (Envoy) is a larger circle. The larger circle simply leaves more space for the relevant text; there is no significance to the sizing of the circle/ellipse! The top-left side of the diagram is a copy of a diagram from the openservicemesh.io site that explains the relationship between the different components in OSM. One thing to note here is that there is a single service certificate for all Kubernetes pods belonging to a particular service, whereas there is a proxy certificate for each pod. This will become much clearer later in this blog.
At this time, we have deployed a cluster with the managed ingress controller (indicated by A in the diagram). It’s time to deploy the service mesh. Again, we are taking the open-source OSM installation approach to walk through this illustration, but OSM is also another supported AKS add-on.
Let’s hydrate this cluster with OSM. OSM installation requires the osm CLI binaries installed on your workstation (Windows, Linux, or Mac); see the link below.
Assuming that your context still points to this newly deployed cluster, run the following command.
osm install --mesh-name osm --osm-namespace osm-system --set=osm.enablePermissiveTrafficPolicy=true
This completes the installation of OSM (ref: B in diagram) with a permissive traffic policy, which means there are no traffic restrictions between services in the cluster.
Here is a snapshot of namespaces.
List of objects in osm-system namespace. It’s important to ensure that all deployed services are operational. In some cases, if a cluster is deployed with nodes with limited cpu/mem, this could cause issues to deployment. Otherwise, there shouldn’t be any other issues.
At this time, we’ve successfully deployed ingress controller (ref: A) and service mesh (ref: B).
However, there are no namespaces in the service mesh. In the diagram above, assume dotted-red rectangle without anything in that box.
Let’s create new namespaces in the cluster and add them to OSM.
One thing to notice from the osm namespace list output is the status of sidecar injection. Sidecar injection uses a Kubernetes mutating admission webhook to inject the ‘envoy’ sidecar into the pod definition before it is written to etcd. It also injects an init container into the pod definition, which we will review later.
Also create sample2 and add this to OSM. Commands below.
k create ns sample2
osm namespace add sample2
Deploy the sample1 application (deploy-sample1.yaml) with 3 replicas. It uses the ‘default’ service account and creates a service with a ClusterIP. This is a simple hello-world deployment as found in the Azure documentation. If you want to test it, you can clone the code from git@github.com:srinman/webapproutingwithosm.git
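A sketch of what deploy-sample1.yaml might contain, based on the description above (the image and labels here are assumptions; the service name matches the one referenced by the Ingress later in this post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld
  namespace: sample1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aks-helloworld
  template:
    metadata:
      labels:
        app: aks-helloworld
    spec:
      # No serviceAccountName set, so the 'default' service account is used
      containers:
      - name: aks-helloworld
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-svc
  namespace: sample1
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld
```

Because sample1 was added to OSM, each pod created from this deployment will come up with the Envoy sidecar and the iptables init container injected.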
Let’s inspect service account for Nginx (our Web app routing add-on in app-routing-system namespace).
As you can see, in the app-routing-system namespace, Nginx uses the nginx service account, and in the sample1 namespace there is only one service account, the ‘default’ service account.
k get deploy -n app-routing-system -o yaml | grep -i serviceaccountname
This confirms that Nginx is indeed using nginx service account and not default one in app-routing-system.
Let’s also inspect secrets in osm-system and app-routing-system namespaces. Note that there is no K8S TLS secret for talking to OSM.
At this point, you have an ingress controller installed, OSM installed, sample1 and sample2 added to OSM, and an app deployed in the sample1 namespace, but there is no configuration defined yet for routing traffic from the ingress controller to the application. In the diagram, imagine that connection #2 from ingress to the workload in the mesh does not exist yet.
User configuration in Ingress
We need to configure app-routing-system, our managed add-on, to listen for inbound traffic (also known as north-south traffic) and to know where to proxy the connection. This is done with an ‘Ingress’ object in Kubernetes. Notice the special annotations in the Ingress definition; these annotations are needed for proxying a connection to an application that is part of OSM.
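A sketch of what ingress-sample1.yaml might look like, assuming the host and secret names used elsewhere in this post (the proxy-ssl-* and backend-protocol annotations are standard ingress-nginx annotations):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: sample1
  annotations:
    # Speak TLS to the backend (Envoy), not plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # SNI/verification name must match the backend's service identity
    nginx.ingress.kubernetes.io/proxy-ssl-name: "default.sample1.cluster.local"
    # Client cert OSM issues for the ingress (namespace/name form)
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "osm-system/nginx-client-cert-for-talking-to-osm"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - host: mysite.srinman.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-svc
            port:
              number: 80
```

The ingressClassName ties this definition to the managed add-on’s controller, and the annotations set up the mTLS leg toward the meshed pod.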
k apply -f ingress-sample1.yaml
Once this is defined, you can view nginx.conf updated with this ingress definition.
k exec nginx-6c6486b7b9-kg9j4 -n app-routing-system -it -- sh
cat nginx.conf
We’ve verified the configuration for Web Application Routing to listen and proxy traffic to the aks-helloworld-svc service in the sample1 namespace. In the diagram, configuration #A is complete for our traffic to the sample1 namespace. If the configuration were a simple Ingress definition without any special annotations, and if the target workload were not added to an OSM namespace, we would be able to route north-south traffic into our workload at this point, but that’s not the case with our definition. We need to configure OSM to accept connections from our managed ingress controller.
User configuration in OSM
Let’s review the OSM mesh configuration. Notice that spec.certificate doesn’t have an ingressGateway section yet.
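Adding the ingressGateway section instructs OSM to issue a client certificate for the ingress. A sketch of the relevant MeshConfig fragment, assuming the secret name used later in this post (field names follow OSM’s MeshConfig API):

```yaml
# Fragment of the osm-mesh-config MeshConfig in the osm-system namespace
spec:
  certificate:
    ingressGateway:
      secret:
        name: nginx-client-cert-for-talking-to-osm
        namespace: osm-system
      subjectAltNames:
      - nginx.app-routing-system.cluster.local   # identity of the Nginx ingress
      validityDuration: 24h
```

OSM watches this configuration and writes the issued certificate into the referenced Kubernetes TLS secret.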
Now you can notice a new secret in osm-system. OSM issues this certificate and writes it to the osm-system namespace. Nginx is ready to use this certificate to initiate connections to OSM. Before we go further, let’s understand a few important concepts in OSM.
The Open Service Mesh data plane uses the ‘Envoy’ proxy (https://www.envoyproxy.io/). This Envoy proxy is programmed (in other words, configured) by the OSM control plane. After adding the sample1 and sample2 namespaces and deploying sample1, you may have noticed two containers running in that pod: one is our hello-world app, and the other is injected by the OSM control plane via the mutating webhook. It also injects an init container that changes iptables rules to redirect traffic.
Now that Envoy is injected, it needs certificates for communicating with its mothership (the OSM control plane) and with other meshed pods. To address this, OSM injects two certificates. One is the ‘proxy certificate’, used by Envoy to initiate the connection to the OSM control plane (refer to B in the diagram); the other is the ‘service certificate’, used for pod-to-pod traffic (for meshed pods, in other words pods in namespaces that are added to OSM). The service certificate uses the following CN:
<ServiceAccount>.<Namespace>.<trustdomain>
This service certificate is shared by pods that are part of the same service, hence the name. Envoy uses this certificate when initiating pod-to-pod traffic with mTLS.
As an astute reader, you might have noticed some specifics in our Ingress annotations. proxy_ssl_name defines who the target is; here, our target service identity is default.sample1.cluster.local.
default is ‘default service account’, sample1 is namespace. Remember, in OSM, it’s all based on identities.
Get pod name, replace -change-here with pod name and run this following command to check this.
You can see CN = default.sample1.cluster.local in the cert.
We are also telling Nginx to use the secret called nginx-client-cert-for-talking-to-osm from the osm-system namespace. Nginx is configured to proxy connections to default.sample1.cluster.local with the TLS secret nginx-client-cert-for-talking-to-osm. If you inspect this TLS secret (use the instructions below if needed), you can see “CN = nginx.app-routing-system.cluster.local”
Extract cert info: use k get secret, use tls.crt data and base64 decode it, run openssl x509 -in file_that_contains_base64_decoded_tls.crt_data -noout -text
At this time, we have wired up everything from client to Ingress controller listening for connections, and Nginx is set to talk to OSM.
However, Envoy proxy (OSM data plane) is still not configured to accept TLS connection from Nginx.
Any curl to mysite.srinman.com will result in error response.
HTTP/1.1 502 Bad Gateway
Note that we can route traffic all the way from the client to the ‘Envoy’ running alongside our application container, but since traffic is forced through Envoy by our init container setup, Envoy checks and blocks this traffic. With our configuration osm.enablePermissiveTrafficPolicy=true, Envoy is programmed by OSM to allow traffic between namespaces in the mesh but not to let outside traffic enter. In other words, all east-west traffic is allowed within the mesh, and these communications automatically establish mTLS between services. Let’s configure OSM to accept this traffic.
This configuration is addressed by IngressBackend. The following definition tells OSM to configure the Envoy proxies for the backend service ‘aks-helloworld-svc’ to accept TLS connections from the listed sources.
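A sketch of what that IngressBackend might look like, assuming the service and namespace names used in this walkthrough (apiVersion and field names follow OSM’s policy API):

```yaml
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  name: aks-helloworld-backend        # name is an arbitrary choice
  namespace: sample1                  # must be the backend's namespace
spec:
  backends:
  - name: aks-helloworld-svc
    port:
      number: 80
      protocol: https                 # the Nginx-to-Envoy leg is TLS
    tls:
      skipClientCertValidation: false
  sources:
  - kind: AuthenticatedPrincipal
    name: nginx.app-routing-system.cluster.local
```

The sources list is the allowlist: only clients presenting a certificate with that CN are accepted by the backend’s Envoy.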
There are instructions in the link above for adding the nginx namespace to OSM. More specifically, the following command is not necessary, since we’ve already configured Nginx via the Ingress definition to use proxy_ssl_name and a proxy SSL TLS cert for connecting to the application pod’s Envoy, or OSM (#2 in the diagram; the picture shows a connection from only one Nginx pod, but it could come from any Nginx pod). OSM doesn’t need to monitor this namespace for our walkthrough. However, at the end of this blog, there is additional information on how OSM is configured and how IngressBackend should be defined with the managed OSM and Web Application Routing add-ons.
osm namespace add "$nginx_ingress_namespace" --mesh-name "$osm_mesh_name" --disable-sidecar-injection
Earlier, we verified that Nginx uses a TLS cert with “CN = nginx.app-routing-system.cluster.local”. The IngressBackend requires that the source be an ‘AuthenticatedPrincipal’ with the name nginx.app-routing-system.cluster.local; all others are rejected.
Once this is defined, you should be able to see a successful connection to the app! Basically, the client connection is terminated at the ingress controller (Nginx) and proxied (#2 in the diagram) from Nginx to the application pods in the sample1 namespace. The Envoy proxy intercepts this connection and sends it to the actual application, which is still listening on plain port 80, but Web Application Routing along with Open Service Mesh took care of encryption in transit between the ingress controller and the application pod, essentially removing the need for application teams to manage and own this very critical security functionality. It’s important to remember that we accomplished this mTLS with very few steps, all managed by AKS (provided you use the add-ons for OSM and Web Application Routing). Once traffic lands in the meshed data plane, Open Service Mesh provides lots of flexibility and configuration options to manage this east-west traffic within the cluster across OSM-ed namespaces.
Let’s try to break this again to understand more!
In our IngressBackend, let’s make a small change to the name of authenticated principal. Change it to something other than nginx. Sample below.
– kind: AuthenticatedPrincipal
name: nginxdummy.app-routing-system.cluster.local
Apply this configuration. Attempt to connect to our service.
* Trying 20.241.185.56:80…
* TCP_NODELAY set
* Connected to 20.241.185.56 (20.241.185.56) port 80 (#0)
> GET / HTTP/1.1
> Host: mysite.srinman.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Date: Fri, 25 Nov 2022 13:09:16 GMT
< Content-Type: text/plain
< Content-Length: 19
< Connection: keep-alive
<
* Connection #0 to host 20.241.185.56 left intact
RBAC: access denied
This means that we’ve configured OSM to accept connections only from the identity nginxdummy in the app-routing-system namespace, which doesn’t match our Nginx identity. Envoy stops the connection inside the application pod before it reaches the application container itself.
Let’s try to make it work, not by reverting the change, but by changing a different config in IngressBackend.
skipClientCertValidation: true
It should work fine now, since we are configuring OSM to skip client certificate validation. From a security viewpoint, though, this means traffic from a different app or ingress controller could reach this application pod unprotected. Let’s change this back to false and also fix the authenticated principal name. Apply the config and check that you can access the service.
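For reference, a sketch of where that flag lives, assuming the aks-helloworld-svc backend from earlier:

```yaml
spec:
  backends:
  - name: aks-helloworld-svc
    port:
      number: 80
      protocol: https
    tls:
      skipClientCertValidation: true   # accepts any client cert; use with care
```

With the flag set to true, the sources list is no longer enforced via the client certificate, which is why the nginxdummy principal no longer blocks traffic.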
Thus far, we’ve deployed an application in one namespace and configured the ingress controller to send traffic into our mesh. What would the process be for another app in a different namespace using our managed ingress controller?
Let’s create another workload and understand how to define ingress and to understand the importance of service account. Sample code in deploy-sample2.yaml
In this deployment, you can see that we are using serviceAccountName: sample2-sa instead of the default service account. (Namespace and service account creation are not shown; they are assumed.)
You can see how the Ingress definition differs slightly from the one above (for sample1): proxy_ssl_name is set to the sample2-sa identity in the sample2 namespace. However, it uses the same TLS secret that sample1 used, the one with “CN = nginx.app-routing-system.cluster.local”
The IngressBackend definition looks like the one below. You can see that it has the same ‘sources’ definition with different backends.
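A sketch of the sample2 IngressBackend, assuming a service named aks-helloworld-svc-2 (the actual service name comes from deploy-sample2.yaml in the repo):

```yaml
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  name: sample2-backend
  namespace: sample2
spec:
  backends:
  - name: aks-helloworld-svc-2        # hypothetical service name
    port:
      number: 80
      protocol: https
  sources:
  - kind: AuthenticatedPrincipal      # same ingress identity as sample1
    name: nginx.app-routing-system.cluster.local
```

Only the backends differ; the source identity stays the ingress controller’s certificate CN.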
We have established TLS between Nginx and the application pod (#2 in the diagram). However, traffic from the client to the ingress is still plain HTTP (#1 in the diagram). Enabling TLS for this leg is straightforward, and there are a few ways to do it, including an Azure-managed way, but we will explore a build-your-own approach. Let’s create a certificate with CN=mysite.srinman.com.
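One way to do this is with a self-signed certificate (file names, validity, and the secret name below are illustrative choices):

```shell
# Generate a self-signed cert and key for the ingress host
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout mysite.key -out mysite.crt \
  -subj "/CN=mysite.srinman.com"

# Inspect the subject to confirm the CN
openssl x509 -in mysite.crt -noout -subject

# Store it as a TLS secret for the Ingress tls section to reference (requires a cluster)
# k create secret tls mysite-tls --cert=mysite.crt --key=mysite.key -n sample1
```

The Ingress then references the secret in its spec.tls section for the mysite.srinman.com host.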
This should enforce all calls to https from the client.
Traffic flow
Traffic enters the ingress-managed load balancer
TLS traffic is terminated at the ingress controller pods
Ingress controller pods initiate a proxy connection to the backend service (specifically, to one of the pods that is part of that service, and even more specifically, to that pod’s Envoy proxy container; remember, the injected init container sets up iptables to route requests to Envoy)
App pod: the Envoy container terminates the TLS traffic and initiates a connection to localhost on the app port (remember, the app container shares the same pod and thus the same network namespace)
App pod: the app container listening on that port responds to the request
As traffic enters the cluster, as seen above and in the diagram, it can be inspected in at least three different logs: Nginx, Envoy, and the app itself.
Check traffic in nginx logs
Check traffic in envoy logs
Check traffic in app logs
Nginx log (you might want to check both pods if you can’t locate the call in one; there should be two)
You may notice that the request appears to come from localhost. This is because the Envoy container sends the traffic from the same host (actually the same pod; remember, in the Kubernetes world, “A Pod models an application-specific ‘logical host’” - see the reference link).
Lastly, when you opt in to the OSM add-on along with the Web Application Routing add-on, certain things are already taken care of. For example, the TLS secret osm-ingress-client-cert is generated and written to the kube-system namespace, and the app-routing-system namespace is automatically added to OSM with sidecar injection disabled. This means that in the IngressBackend definition, a kind: Service source can be added to verify source IPs in addition to identity (AuthenticatedPrincipal) when allowing traffic, which of course adds more protection. Check the file ingressbackend-for-osm-and-webapprouting.yaml in the repo.
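With the managed add-ons, the sources list can therefore combine both checks. A sketch, assuming the managed controller’s service is named nginx in app-routing-system (see the repo file for the authoritative version):

```yaml
sources:
- kind: Service                       # verifies the source IP belongs to this service
  name: nginx                         # assumed name of the managed controller's service
  namespace: app-routing-system
- kind: AuthenticatedPrincipal        # verifies the client certificate identity
  name: nginx.app-routing-system.cluster.local
```

Requiring both a known source service and an authenticated principal narrows who can reach the meshed backend.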
I hope these manual steps provided a bit more insight into the role of Web Application Routing and how it works nicely with Open Service Mesh. We also reviewed a few foundational components involved, such as Nginx, IngressBackend, Envoy, and OSM.
This article is contributed. See the original author and article here.
The Microsoft Intelligent Security Association (MISA) is an ecosystem of independent software vendors and managed security service providers that have integrated their solutions to better defend against a world of increasing threats. Learn more about an offer from MISA partner Tanium in the Azure Marketplace:
Tanium Cloud: Tanium Cloud delivers the full functionality of the Tanium platform’s unified cloud endpoint detection, management, and security as a fully managed, Azure-based service with zero customer infrastructure required. See and control every endpoint, everywhere.
This article is contributed. See the original author and article here.
90 percent of companies say their existing systems for tracking customer journeys need improvement.
In today’s customer-centric world, customer data is a critical part of a company’s ability to serve up personalized, relevant experiences. Post-sale, data plays an equally critical role in providing customer service teams with the tools they need to provide speedy service, answer customer inquiries, and resolve post-sale issues smoothly and quickly. The challenge facing a majority of companies today is the ability to collect, analyze, and act on that data across their organizations.
In partnership with Futurum, we surveyed 1,000 global business leaders, technologists, marketers, and data and customer experience professionals, and identified the challenges and opportunities that best-in-class data practices afford customer experience (CX) leaders today. The findings of the report revealed what we had surmised going in: that companies are evolving their customer relationship management (CRM) technologies to meet changing customer expectations.
Shift to digital customer journeys
Companies are changing their operational mindset regarding customer experience. Our research shows that a whopping 85 percent of companies report that their customers are significantly more digitally focused than they expected, and it’s clear that companies need to shift their strategies in order to adapt. In the last two years, we’ve seen a seismic shift in how customers live, work, play, and shop online. Customer behavior has irreversibly changed. In fact, 96 percent of organizations say they’ve accelerated their digital transformation and/or technology deployments to keep pace with changing customer requirements, and that includes serving up a better customer experience.
Complexity abounds for customer experience transformation
Complexity is the name of the game in business today. If you’re involved in delivering better customer experiences, you’re likely nodding as you read that line, and we feel your pain. Today’s proliferation of channels through which customers interact with brands provides both opportunities and challenges. We are seeing this interaction via email, text, social media, apps, websites, and in-store communications, and it can be more than a little overwhelming. The massive influx of data from these many touchpoints has added a level of complexity that we haven’t seen before. Our research shows that working with and properly managing data is easily one of the most difficult challenges companies face, and solving this challenge is of paramount concern.
But why does data add so much complexity? In order to be used effectively, data must be collected, cleaned, analyzed, and maintained in real time, something that many organizations aren’t yet capable of doing, both from a tech stack standpoint as well as an internal skill set standpoint. Adding to that, organizations are also facing new challenges with the looming elimination of third-party data that’s been relied on to track customers, along with changing privacy regulations. There’s a shift happening, and organizations aren’t ready. In fact, 90 percent of companies in our research study reported that their existing systems for tracking customer journeys need improvement. But the good news is that there are solutions for that.
Harness data and use it more effectively: customer data platforms are the solution
Companies looking to transform their customer experience don’t just need customer data for better customer engagements, they also need real-time data to inform those engagements. That data is of great value to the organization, but in order for it to deliver value, data needs to be centralized, easily accessible, and processed in a single source of truth. That’s where a customer data platform (CDP) comes in and why it’s truly table stakes for organizations today (and not just for marketers).
Best-in-class CDP solutions work to connect the organization as a whole to the data that is amassed so that it can be accessed and utilized by sales teams, marketing teams, customer service teams, commerce teams, and beyond. CDPs are the lifeblood of the organization, housing data, providing visibility as needed, and allowing employees throughout the organization to make data-driven decisions that serve up the very best in customer experiences across every touchpoint.
That’s where the right technology solutions can be game-changers. Real-time, actionable insights into customer experience are most likely to materialize from a CDP that centralizes user data and makes it available to everyone who engages with the customer journey. As the amount of data available has increased exponentially, the challenge becomes how to collect data in a manner that both enables actionable analysis but also protects customer trust.
Microsoft Dynamics 365 Customer Insights
Microsoft provides an enterprise-leading customer data platform.
Ensuring customer trust and privacy to improve customer experience
Let’s talk about the roles that trust and privacy play in the overall customer experience. The collection and analysis of customer data is central to improving customer experience, but mounting privacy concerns surrounding data can threaten customers’ trust in the companies they engage with. Our research shows that while the speed of digitization and the necessity to respond with new solutions has led more than three quarters of companies to implement new technologies or programs, those technologies and programs might not be completely secure. A solid 51 percent of respondents in our study acknowledge they’ve experienced at least one customer data breach in the prior year, so unified data systems that reduce this risk are highly prized.
Changing regulations surrounding privacy and data protection also provide a consistent challenge. Our research showed that 88 percent of respondents expect to change their customer engagement strategies in order to adapt to future market and regulatory requirements. A CDP can help ensure compliance required by General Data Protection Regulation, California Privacy Rights Act, California Consumer Privacy Act, and other regulations designed to give customers more control over the personal information businesses collect about them. We expect to also see CDPs evolving to address the cookieless future that’s ahead of us and track customer consent as part of the functionality of the CDP.
Top 5 features in successful CDP solutions
As we’ve mentioned previously, our research showed an overwhelming number of companies feel like they don’t have the right tools in place to help them improve their customer experience. They need something better, and they know what they want.
Our study identified the top five most important features in successful customer-facing solutions. These include:
Integrated AI-based analytics
AI-based data classification
The ability to import data from multiple sources
Connection to social networks
Real-time actionable insights
Unfortunately, these are the same five areas in which respondents reported their existing customer data tools are lacking. Although 83 percent of those surveyed report they already use a CDP to provide a centralized source of record, 35 percent say they are still relying on piecemeal solutions that don’t offer real-time access to customer data. In order to effectively use technology to improve the customer experience they’re serving up, CX leaders require substantial improvement in the existing CDP solutions that they’re relying on.
CX leaders need a unified CDP that provides real-time access to customer data, and one that is easily accessible to multiple stakeholders throughout the organization. They also need a tool that ensures regulatory compliance and data privacy, putting customers at ease. This can help facilitate enhanced collaboration, actionable analysis, and the ability to put the insights gained from customer data to immediate use.
By offering a single source of truth for an organization, a unified CDP can enable enhanced collaboration across the entire organization (including sales, service, finance, and research and development) to address every part of the customer experience journey.
Learn more
In the full report, we’ve identified nine key insights that are shaping customer experience needs today, creating a clear roadmap for the future of customer data collection, storage, and analysis. The application of this knowledge stands to benefit companies and customers alike. You can download the full report here.
Looking for a CDP? Learn more about Dynamics 365 Customer Insights, Microsoft’s enterprise-leading customer data platform.
An email signature concludes an email with style, professionalism, and branding. Customer service agents need to use their signature when emailing customers. Each agent has a distinctive style, however, and enforcing a standard pattern can be a challenge. With two new features in Microsoft Dynamics 365 Customer Service, you can create signatures for your agents that consistently represent your organization’s brand and messaging.
Include dynamic content in a common email signature
We have added the ability to include dynamic placeholders in email signatures. Now you can easily create a common signature for multiple agents.
Dynamic signatures eliminate the need to maintain multiple signatures while bringing consistency to how agents sign off their emails. You no longer have to manually check to make sure agents are using a consistent pattern or train new agents to use a specific signature. With the magic of dynamic placeholders, agent information is automatically inserted with their signature.
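Conceptually, a dynamic signature is one shared template with per-agent fields filled in at send time. The sketch below illustrates that idea using Python's standard-library templating; the placeholder names and mechanism here are illustrative only, not Dynamics 365's actual signature syntax.

```python
from string import Template

# One template shared by every agent on the team. The $-placeholders
# stand in for the dynamic fields that vary per agent.
SIGNATURE_TEMPLATE = Template(
    "Best regards,\n"
    "$full_name\n"
    "$job_title | Contoso Support\n"
    "$phone"
)

def render_signature(agent: dict) -> str:
    # Substitute this agent's details into the common template.
    return SIGNATURE_TEMPLATE.substitute(
        full_name=agent["full_name"],
        job_title=agent["job_title"],
        phone=agent["phone"],
    )

agent = {
    "full_name": "Jamie Rivera",
    "job_title": "Customer Service Agent",
    "phone": "+1 (555) 010-0199",
}
print(render_signature(agent))
```

Because the branding, layout, and sign-off live in one template, updating them once updates every agent's signature, which is exactly why dynamic placeholders remove the need to maintain and police individual signatures.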
Link an email signature to a queue to ensure consistent messaging
Agents often send email from a queue or a shared mailbox. Having a common signature in this scenario is necessary for most contact centers. Now it is possible to link a signature to a queue.
If you don’t want to link a signature to your queues, Dynamics 365 will continue to use the signature template of the queue owner.
Learn more
Watch a quick video introduction: