This article is contributed. See the original author and article here.
Welcome to the fourth edition of Microsoft Viva People Science industry trends, where the Viva People Science team share learnings from customers across a range of different industries. Drawing on data spanning over 150 countries, 10 million employees, and millions of survey comments, we uncover the unique employee experience challenges and best practices for each industry.
In this blog, @Jamie_Cunningham and I share our insights on the state of employee engagement in the retail industry. You can also access the recording from our recent live webinar, where we discussed this topic in depth.
Let’s first look at what’s impacting the retail industry today. In summary, we are hearing about market volatility, supply chain constraints, changing consumer behavior, technological advancements, labor pressures, and rising costs. According to the Deloitte Retail Trends 2023 report, the top-of-mind issues for retail leaders are:
These issues require retailers to be agile, resilient, and innovative in their employee experience strategies and execution. The retail industry also faces some specific challenges in attracting and retaining talent, such as:
According to Glint benchmark data (2023), employee engagement in retail has declined by two points between 2021 and 2022. It’s clear that retailers need to invest in improving the employee experience, especially for the frontline workers, who are the face of the brand and the key to customer loyalty. So, how do they do this? Here are three examples of how retailers we’ve worked with have addressed the needs of their employees with the support of Microsoft Viva:
1. Create a compelling future
We worked with the leadership team of a MENA (Middle East and North Africa) based retailer, who recognized that there was a connection between how effectively they communicated the future direction of the organization and the degree to which employees saw a future for themselves in it. The team committed to clarifying how the business initiatives they were rolling out connected to future work opportunities for their teams.
2. Build bridges with frontline employees
According to the Microsoft Work Trend Index special report (2022), sixty-three percent of all frontline workers say messages from leadership don't make it to them. A global fashion brand recognised, after several years of employee listening, that the actions being taken by leadership were not being felt on the shop floor. We worked with them to adopt a simplified action-taking model, built around one clear commitment from leaders, that was efficient and effective in terms of communication and adoption. They also increased their investment in manager enablement to support better conversations within teams when results from Viva Glint were released. This simplified approach led to improved perceptions of the listening process and greater clarity at all levels on where to focus for a positive employee experience.
3. One internal team, one goal
Through an Executive Consultation with leaders of a UK retailer, it was identified that wellbeing was a risk to the business that, unless addressed, would severely impact their priorities. With that in mind, the team created internal alignment to prioritise wellbeing through both training investment and policy changes, resulting in a thirteen-point improvement in the wellbeing score year over year.
Conclusions
To succeed in this dynamic and competitive market, retailers need to focus on their most valuable asset: their employees. By investing in the employee experience, especially for the frontline workers, retailers can boost their employee engagement, customer satisfaction, and business performance.
A downloadable one-page summary is also available with this blog for you to share with your colleagues and leaders.
Leave a comment below to let us know if this resonates with what you are seeing with your employees in this industry.
References:
Deloitte retail trends report (2023)
Microsoft Work Trend Index special report (2022)
This article is contributed. See the original author and article here.
Practice mode is now available in Forms. It's tailored for EDU users, particularly students, offering a new way to review, test, and reinforce their knowledge. Let's check out the details of practice mode; you can try it from this template. (Note: Practice mode is only available for quizzes.)
Practice mode
Instant feedback after answering each question
In practice mode, questions are shown one at a time. Students receive immediate feedback after submitting each answer, indicating whether it is right or wrong.
Try multiple times for the correct answer
Students can reconsider and retry a question multiple times if they answer it incorrectly, facilitating immediate re-learning and strengthening their grasp of the material.
Encouragement and autonomy during practice
Students will receive an encouraging message after answering a question, whether their answer is correct or not, giving them a positive practice experience. They also have the freedom to learn at their own pace: if they answer a question incorrectly, they can choose to retry, view the correct answer, or skip the question.
Recap questions
After completing the practice, students can review all the questions along with the correct answers, offering a comprehensive overview to assess their overall performance.
Enter practice mode
Practice mode is only available for quizzes. You can turn it on from the “…” icon in the upper-right corner. Once you distribute the quiz, recipients will automatically enter practice mode. Try out practice mode from this template now!
This article is contributed. See the original author and article here.
Frontline managers have gained greater team-level control over the capabilities offered in Microsoft Shifts.
With the latest releases now available on the Shifts settings page, we have made updates to improve the end-user experience for frontline managers and workers. The updates are as follows:
Previously, when the Open Shifts setting was off, frontline managers could create but not publish open shifts. They could also view both open and assigned shifts on their team's schedule (including when workers are scheduled for time off).
Now, when the setting is turned off, frontline managers can't create open shifts and can only view assigned shifts (including scheduled time off) on their team's schedule.
See the differences between the past and new experiences for frontline managers:
Previously, when the time-off request setting was turned off, frontline managers couldn't assign time off to their team members; moreover, frontline workers couldn't request time off.
Now, when the setting is turned off, frontline managers can continue to assign time off to their team members. However, frontline workers will not have the ability to create time-off requests if this setting remains off.
Your organization can use Shifts as the place where frontline workers view their working and non-working schedules, even if it doesn't use Shifts as its leave management tool.
See the new experience for frontline managers:
Previously, when any request-related setting was toggled from on to off, frontline managers couldn't manage requests that had been submitted while the setting was on.
Now, frontline managers can manage those previous requests directly on the Requests page, while frontline workers can view the status and details of their individual requests.
Shifts settings: Manage settings in Shifts – Microsoft Support
Latest Shifts enhancements: Discover the latest enhancements in Microsoft Shifts – Microsoft Community Hub
This article is contributed. See the original author and article here.
TL;DR: This post navigates the intricate world of AI model upgrades, with a spotlight on Azure OpenAI's embedding models like text-embedding-ada-002. We emphasize the critical importance of consistent model versioning in ensuring accuracy and validity in AI applications. The post also addresses the challenges and strategies essential for effectively managing model upgrades, focusing on compatibility and performance testing.
Embeddings in machine learning are more than just data transformations. They are the cornerstone of how AI interprets the nuances of language, context, and semantics. By converting text into numerical vectors, embeddings allow AI models to measure similarities and differences in meaning, paving the way for advanced applications in various fields.
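To make this concrete, here is a minimal sketch of measuring semantic similarity with cosine similarity. The short vectors are invented toy values purely for illustration; real vectors from a model such as text-embedding-ada-002 have 1,536 dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 for semantically similar texts."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy vectors standing in for real embeddings.
vec_cat = np.array([0.21, -0.07, 0.55, 0.10])
vec_kitten = np.array([0.19, -0.05, 0.52, 0.12])
vec_invoice = np.array([-0.40, 0.60, -0.10, 0.02])

print(cosine_similarity(vec_cat, vec_kitten))   # high score: related meanings
print(cosine_similarity(vec_cat, vec_invoice))  # low score: unrelated meanings
```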
In the complex world of data science and machine learning, embeddings are crucial for handling intricate data types like natural language and images. They transform these data into structured, vectorized forms, making them more manageable for computational analysis. This transformation isn’t just about simplifying data; it’s about retaining and emphasizing the essential features and relationships in the original data, which are vital for precise analysis and decision-making.
Embeddings significantly enhance data processing efficiency. They allow algorithms to swiftly navigate through large datasets, identifying patterns and nuances that are difficult to detect in raw data. This is particularly transformative in natural language processing, where comprehending context, sentiment, and semantic meaning is complex. By streamlining these tasks, embeddings enable deeper, more sophisticated analysis, thus boosting the effectiveness of machine learning models.
Let's discuss the potential impacts and challenges that arise when different versions of embedding models are used within the same domain, specifically focusing on Azure OpenAI embeddings. When embeddings generated by one version of a model are applied to or compared with data processed by a different version, various issues can arise. These issues are not only technical but also have practical implications for the efficiency, accuracy, and overall performance of AI-driven applications.
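One practical safeguard is to record which model and version produced every vector and to refuse comparisons across versions. The sketch below illustrates that idea; it assumes the openai Python SDK (v1.x) pattern for Azure OpenAI, and the endpoint, key, API version, deployment name, and version label are placeholders to replace with your own values.

```python
import os
from openai import AzureOpenAI  # assumes the openai v1.x SDK is installed

# Placeholder configuration -- substitute values from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
DEPLOYMENT = "text-embedding-ada-002"    # your embedding deployment name
MODEL_TAG = "text-embedding-ada-002:v2"  # hypothetical version label you maintain

def embed_with_metadata(text: str) -> dict:
    """Embed a text and record which model version produced the vector."""
    response = client.embeddings.create(model=DEPLOYMENT, input=text)
    return {"text": text, "vector": response.data[0].embedding, "model": MODEL_TAG}

def assert_same_model(a: dict, b: dict) -> None:
    """Fail fast instead of silently comparing vectors from different versions."""
    if a["model"] != b["model"]:
        raise ValueError(f"Embedding version mismatch: {a['model']} vs {b['model']}")
```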
Now let's move to the process of upgrading an embedding model, focusing on the steps you should take before making a change, important questions to consider, and key areas for testing.
Assessing the Need for Upgrade:
Understanding Model Changes:
Data Backup and Version Control:
Compatibility with Existing Systems:
Cost-Benefit Analysis:
Long-Term Support and Updates:
Performance Testing:
Compatibility Testing:
Fallback Strategies:
Monitoring and Evaluation:
Feedback Loop:
Upgrading an embedding model involves careful consideration, planning, and testing. By following these guidelines, customers can ensure a smooth transition to the new model version, minimizing potential risks and maximizing the benefits of the upgrade.
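As one way to make the performance and compatibility testing steps above concrete, the sketch below re-embeds a small validation corpus with both the current and the candidate model and measures how much the top-k retrieval results overlap. The `embed_old` and `embed_new` callables are hypothetical stand-ins for calls to the existing and upgraded embedding deployments.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query_vec, corpus_vecs, k=5):
    """Indices of the k corpus vectors most similar to the query."""
    scores = [cosine(query_vec, v) for v in corpus_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def retrieval_overlap(queries, corpus, embed_old, embed_new, k=5):
    """Average overlap of top-k results between the old and new embedding models.

    A score near 1.0 suggests the upgrade preserves retrieval behaviour;
    a low score signals that full re-embedding and re-tuning may be required.
    """
    old_corpus = [embed_old(doc) for doc in corpus]
    new_corpus = [embed_new(doc) for doc in corpus]
    overlaps = []
    for q in queries:
        old_hits = set(top_k(embed_old(q), old_corpus, k))
        new_hits = set(top_k(embed_new(q), new_corpus, k))
        overlaps.append(len(old_hits & new_hits) / k)
    return sum(overlaps) / len(overlaps)
```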
Embeddings can significantly enhance the performance of various AI applications by enabling more efficient data handling and processing. Here's a list of use cases where embeddings can be effectively utilized:
Enhanced Document Retrieval and Analysis: By first performing embeddings on paragraphs or sections of documents, you can store these vector representations in a vector database. This allows for rapid retrieval of semantically similar sections, streamlining the process of analyzing large volumes of text. When integrated with models like GPT, this method can reduce the computational load and improve the efficiency of generating relevant responses or insights. A code sketch of this pattern follows the list below.
Semantic Search in Large Datasets: Embeddings can transform vast datasets into searchable vector spaces. In applications like eCommerce or content platforms, this can significantly improve search functionality, allowing users to find products or content based not just on keywords, but on the underlying semantic meaning of their queries.
Recommendation Systems: In recommendation engines, embeddings can be used to understand user preferences and content characteristics. By embedding user profiles and product or content descriptions, systems can more accurately match users with recommendations that are relevant to their interests and past behavior.
Sentiment Analysis and Customer Feedback Interpretation: Embeddings can process customer reviews or feedback by capturing the sentiment and nuanced meanings within the text. This provides businesses with deeper insights into customer sentiment, enabling them to tailor their services or products more effectively.
Language Translation and Localization: Embeddings can enhance machine translation services by understanding the context and nuances of different languages. This is particularly useful in translating idiomatic expressions or culturally specific references, thereby improving the accuracy and relevancy of translations.
Automated Content Moderation: By using embeddings to understand the context and nuance of user-generated content, AI models can more effectively identify and filter out inappropriate or harmful content, maintaining a safe and positive environment on digital platforms.
Personalized Chatbots and Virtual Assistants: Embeddings can be used to improve the understanding of user queries by virtual assistants or chatbots, leading to more accurate and contextually appropriate responses, thus enhancing user experience. Using similar logic, they could also help route natural language requests to specific APIs; see the CompactVectorSearch repository as an example.
Predictive Analytics in Healthcare: In healthcare data analysis, embeddings can help in interpreting patient data, medical notes, and research papers to predict trends, treatment outcomes, and patient needs more accurately.
In all these use cases, the key advantage of using embeddings is their ability to process and interpret large and complex datasets more efficiently. This not only improves the performance of AI applications but also reduces the computational resources required, especially for high-cost models like GPT. This approach can lead to significant improvements in both the effectiveness and efficiency of AI-driven systems.
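Here is a sketch of the document retrieval pattern described in the first use case above: embedded chunks are held in a simple in-memory store, and the most similar chunks are returned for a query. A production system would use a real vector database, and the `embed` callable is a hypothetical stand-in for a call to your embedding deployment.

```python
import numpy as np

class InMemoryVectorStore:
    """Toy stand-in for a vector database holding (chunk, vector) pairs."""

    def __init__(self, embed):
        self.embed = embed                      # callable: str -> sequence of floats
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, chunk: str) -> None:
        self.chunks.append(chunk)
        self.vectors.append(np.asarray(self.embed(chunk)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = np.asarray(self.embed(query))
        scores = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self.vectors
        ]
        best = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        return [self.chunks[i] for i in best]

# Usage sketch: retrieved chunks become grounding context for a GPT prompt.
# store = InMemoryVectorStore(embed=my_embedding_function)   # hypothetical embed function
# for section in split_into_sections(document):              # hypothetical chunking helper
#     store.add(section)
# context = store.search("What does the contract say about termination?")
```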
Azure OpenAI provides a systematic approach to model versioning, applicable to models like text-embedding-ada-002:
Regular Model Releases:
Version Update Policies:
Notifications and Version Maintenance:
Upgrade Preparation:
Model version mismatches in embeddings, particularly in the context of Azure OpenAI, pose significant challenges that can impact the effectiveness of AI applications. Understanding these challenges and implementing strategies to mitigate their effects is crucial for maintaining the integrity and efficiency of AI-driven systems.