This National Small Business Week, make sure everyone in your company understands AI

This article is contributed. See the original author and article here.

Whether you are running a startup or an already thriving small business, harnessing AI-driven solutions will help you discover new opportunities, streamline operations, and make data-driven decisions with confidence. Understanding and exploring the possibilities of AI is essential for small businesses and key to unlocking growth, driving innovation, and maintaining a competitive edge. 


The first step is understanding the potential of AI for your business. Microsoft has developed several online resources to help. In recognition of National Small Business Week, we have curated a list of those resources that may be helpful for small business professionals who want to get started with AI.


Establish an AI foundation


Start your AI journey by visiting the Microsoft WorkLab and exploring its rich collection of content addressing real-world scenarios of how AI is changing work today. New articles are added regularly to help you understand not just AI’s high-level capabilities, but also its nuances and how to apply it directly to your day-to-day work.



Build your AI skills


When you’re ready to build deeper AI skills, explore the Microsoft AI Learning Hub. You’ll find a variety of tools to help you go from understanding AI to preparing for it. You can learn the mechanics of using the technology and even how to build it into your own apps and services.


Start with the learning journey for Business Users, which provides a foundational understanding of AI, and then move into more detailed guidance on how to use and implement its capabilities. If you’re an IT professional, look at the learning journey for IT Professionals, which provides a thorough grounding in the particulars of AI adoption, deployment, and small business concerns like data classification and regulatory considerations.


To define your own path, get skilling recommendations based on your job responsibilities and objectives. No matter where you want to go, you can use the AI learning assessment to define a customized learning journey to get you there. 


Put AI to work


To put your AI skills into practice, or if you’re already using Copilot for Microsoft 365, visit the Microsoft Copilot Lab. This site provides easy, visual introductions to what Copilot is and how it helps you do more in whichever Microsoft 365 app you are using. These tools are designed for professionals who need a fast, tactical grounding so they can benefit from AI every day.


One example is the prompt writing guide, which explains how to write effective prompts so Copilot can deliver exactly what you need. This toolkit teaches the art and science of prompting. It walks through a series of easy initial prompting exercises, like writing an AI-powered email or creating an image, so you’ll understand how to edit a prompt to tailor it to your scenario.


Microsoft Learn has a series of freely available, advanced courses to help you gain a deeper understanding of Copilot, how it works with Microsoft 365 apps, and best practices for everyday use.


Get started

National Small Business Week may be an annual event, but you can build your AI skills year-round. Join the Microsoft SMB Tech Community to network with other professionals using Copilot. You can come here anytime to ask questions, get help, keep up with the latest AI news specific to small and medium-sized businesses and find out about upcoming online or local events.


Examining the Deception infrastructure in place behind


This article is contributed. See the original author and article here.

The domain name has an interesting story behind it. Today it’s not linked to anything, but that wasn’t always true. This is the story of one of my most successful honeypot instances and how it enabled Microsoft to collect varied threat intelligence against a broad range of actor groups targeting Microsoft. I’m writing this now as we’ve decided to retire this capability.


In the past, the domain was used to host Visual Studio Code and some helpful documentation. It remained active until around 2021, when the documentation was moved to a new home. The site behind the domain was an Azure App Service site that performed the redirection, preventing existing links from breaking.


Sometime around mid-2021 the existing Azure App Service instance was shut down, leaving the subdomain pointing to a service that no longer existed. This created a vulnerability.


This situation is what’s called a dangling subdomain: a subdomain that once pointed to a valid resource but now hangs in limbo. Imagine a subdomain that used to host a blog application. When the underlying service (the blog engine) is deleted, you might update your page link and assume the service has been retired. However, the DNS record that pointed to the blog still exists; it is now “dangling”, pointing at a resource that no longer exists.
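The takeover condition can be sketched as a simple check (a hypothetical illustration; the lookup functions stand in for real DNS and cloud-resource queries):

```python
def is_dangling(subdomain, cname_lookup, target_resolves):
    """Return True if the subdomain's DNS record points at a resource
    that no longer exists, i.e. a takeover candidate."""
    target = cname_lookup(subdomain)
    if target is None:
        return False                    # no record at all: nothing to hijack
    return not target_resolves(target)  # record exists, but target is gone?

# Toy stand-ins for real DNS / cloud-resource lookups:
records = {"blog.example.com": "myblog.azurewebsites.net"}
live_targets = set()  # the App Service instance has been deleted

print(is_dangling("blog.example.com", records.get, live_targets.__contains__))
```

An attacker who re-provisions a resource named `myblog.azurewebsites.net` flips that check back to "resolves" while taking control of the content served.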


A malicious actor can discover the dangling subdomain and provision an Azure resource with the same name; visitors to the subdomain will then be served the attacker’s resource, whose content they fully control.


This happened in 2021, when the domain was temporarily used to host a malware C2 service. Thanks to multiple reports from our great community, this was quickly spotted and taken down before it could be used. In response, Microsoft now has more robust tools in place to catch similar threats.


How did it become a honeypot?


Today it is relatively routine for MSTIC to take control of attacker-controlled resources and repurpose them for threat intelligence collection. Taking control of a malware C2 environment, for example, enables us to potentially discover newly infected nodes.

At the time of the dangling code subdomain this process was relatively new. We wanted a good test case to show the value of taking over resources rather than taking them down. So instead of removing the dangling subdomain, we pointed it at a node in our existing, vast honeypot sensor network.


A honeypot is a decoy system designed to attract and monitor malicious activity. Honeypots can be used to collect information about the attackers, their tools, their techniques, and their intentions. Honeypots can also be used to divert the attackers from the real targets and to waste their time and resources.


Microsoft’s honeypot sensor network has been in development since 2018. It’s used to collect information on emerging threats to both our environment and our customers’. The data we collect helps us be better informed when a new vulnerability is disclosed and gives us retrospective information on how, when, and where exploits are deployed.


This data is enriched with other tools Microsoft has available, turning it from raw threat data into threat intelligence. This is incorporated into a variety of our security products. Customers can also access it via Sentinel’s emerging threat feed.

The honeypot itself is a custom-designed framework written in C#. It enables security researchers to quickly deploy anything from a single HTTP exploit handler in one or two lines of code all the way up to complex protocols like SSH and VNC. For even more complex protocols we can hand off to real systems when we detect exploit traffic and revert them shortly after.
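The framework itself is written in C# and isn’t public, but the core idea of a quickly deployable exploit handler can be sketched in a few lines of Python (all names, patterns, and responses here are invented for illustration):

```python
import re
from datetime import datetime, timezone

LOG = []  # in a real deployment these records would feed the TI pipeline

# (handler name, URI pattern, canned "vulnerable-looking" response body)
HANDLERS = [
    ("wechat-broadcast-rce", re.compile(r"/wp-admin/.*wechat"),
     "uid=33(www-data) gid=33(www-data)"),
    ("generic-probe", re.compile(r".*"),
     "<html><body>It works!</body></html>"),
]

def handle(uri):
    """Match an incoming URI against exploit handlers, log the hit,
    and return a simulated response body plus headers."""
    for name, pattern, body in HANDLERS:
        if pattern.search(uri):
            LOG.append({"time": datetime.now(timezone.utc).isoformat(),
                        "handler": name, "uri": uri})
            # Sandbox every simulated page so it can't be abused for
            # same-origin attacks against the parent domain.
            return body, {"Content-Security-Policy": "sandbox"}

body, headers = handle("/wp-admin/admin.php?page=wechat-broadcast&cmd=id")
print(body)
```

Nothing is executed; the handler just returns whatever output the attacker expects to see, while every request is recorded for analysis.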


It is our mission to deny threat actors access to resources and to prevent them from using our infrastructure to create further victims. That’s why, in almost all scenarios, the attacker is playing in a high-interaction, simulated environment. No attacker code is actually run; everything is a trick or deception designed to get them to reveal their intentions to us.


Substantial engineering has gone into our simulation framework. Today over 300 vulnerabilities can be triggered through the same proof-of-concept exploits available in places like GitHub and Exploit-DB. Threat actors can communicate over more than 30 different protocols and can even ‘log in’, deploy scripts, and execute payloads that appear to run on a real system. There is no real system; almost everything is simulated.


Even so, when standing up a honeypot on an important domain like this, it was essential that attackers couldn’t use it as an environment to perform other web attacks, such as attacks that rely on same-origin trust. To mitigate this, we added the sandbox policy to the pages, which prevents these kinds of attacks.


What have we learnt from the honeypot?


Our sensor network has contributed to many successes over the years. We’ve presented on these at computer security conferences and shared our data with academia and the community. We incorporate this data into our security products so they are aware of the latest threats.


In recent years this capability has been crucial to understanding the 0-day and n-day ecosystem. During the Log4Shell incident we were able to use our sensor network to track each iteration of the underlying vulnerability and associated proof-of-concept all the way back to GitHub. This helped us understand the groups involved in productionising the exploit and where it was being targeted. Our data enables internal teams to be much better prepared to remediate and provides the analysis detection authors need to improve products like MDE in real time.


The team developing this capability also works closely with MSRC, who track our own security issues. When the Exchange ProxyLogon vulnerability was announced, we had already written a full exploit handler in our environment to track and understand not just the exploit but the groups deploying it. This situational awareness enables us to give clearer advice to industry, better protect our customers, and integrate the new threats we were seeing into Windows Defender and MDE.


The domain was often critical to this success, as well as a useful early warning system. When new vulnerabilities are announced, threat actors are often too consumed with using the vulnerability as quickly as possible to check for deception infrastructure like a honeypot. As a result, the domain often saw exploits first, and many of these exploits were attributed to threat actors MSTIC already tracks.


What happened next?


The code subdomain had been known to bug bounty researchers for several years. When we received reports for this domain, we would close them, letting the researchers know they had found a honeypot.


In the past we’ve asked these security professionals to refrain from publishing the details of this service in an effort to protect the value we received from it. We’ve also understood for a while that this subdomain would need to be retired once it became well known what was behind it.


That time is now.


On 25th April, an uptick in traffic to the subdomain and posts on Twitter showed that the domain was being investigated by a broad group of individuals. We don’t want to waste the effort researchers put into finding issues with our production systems, so it was decided that the truth would finally be revealed and the system retired.





The timeline below gives an order of events from our perspective. It’s unknown exactly how the full exploit URL for our server ended up in Google’s search database, but it looks like this, and the associated discovery on Twitter/X, culminated in almost 80k WeChat exploits in a 3-hour period. It’s unlikely the Google crawler would have naturally found the URL. Our current theory is that a security researcher found it and submitted a report to Microsoft; as part of this process, either the Chrome browser or another app found the URL and submitted it for indexing.


March 2024: The WeChat exploit appears in Google search results for the first time
15th April 2024: Sumit Jain posts a redacted screenshot of an exploit mitigation; some debate occurs over whether the domain is the code subdomain
21st April 2024: Google Trends shows that many people are now searching for this domain
24th April 2024: We start to notice a significant uptick in traffic to the subdomain
26th April 2024: We are handling 126 thousand times more requests than average


By 26th April we were handling ~160k requests per day, up from the usual 5-100. Most of these requests were to a single endpoint handling a vulnerability in the WeChat Broadcast plugin for WordPress (CVE-2018-16283), which enabled anyone to ‘run’ a command from a parameter in the URI.


Looking at these URIs, we found 11k different commands being run. Most of these pushed a message from some group or another stating that the site had been hacked by them. As this was a simulation, that did not happen.

Removing these messages gives a clearer picture of the kinds of commands people were entering.
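That kind of analysis amounts to parsing the command parameter out of the request logs and counting; a sketch (the paths and the `cmd` parameter name are illustrative, not the plugin’s actual interface):

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# A few log entries in the spirit of what the endpoint received:
logged_uris = [
    "/wp-content/plugins/wechat-broadcast/wechat/index.php?cmd=whoami",
    "/wp-content/plugins/wechat-broadcast/wechat/index.php?cmd=uname+-a",
    "/wp-content/plugins/wechat-broadcast/wechat/index.php?cmd=whoami",
]

def extract_commands(uris, param="cmd"):
    """Count the distinct commands passed via the exploit's URI parameter."""
    counts = Counter()
    for uri in uris:
        # parse_qs decodes '+' and percent-escapes back to plain text
        for command in parse_qs(urlparse(uri).query).get(param, []):
            counts[command] += 1
    return counts

print(extract_commands(logged_uris).most_common())
```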




Most commands entered were Linux recon commands, attempting to find out what the system was, what files it contained, and more broadly what value it was to Microsoft. The next biggest group was command execution, ranging from basic Linux commands like ‘whoami’ to a few enterprising folks who went on to run scripts in various languages.


Most people who interacted didn’t get further than the WeChat exploit. Over the three days that infosec Twitter took an interest, 63 different exploits in total were triggered. The biggest surprise to me was that most researchers stuck to HTTP; only three groups probed the other ports, and even fewer logged into the other services that were available.



Some of the best investigation came from @simplylurking2 on Twitter/X who, after finding out the system was a honeypot, continued to analyse what we had in place, first constructing a rickroll and then a URL that, when visited, would display a message to right-click and save a payload.




With so much information now publicly available, the usefulness of this subdomain has diminished. On April 26th we replaced the site with a 404 message and are working on retiring the subdomain completely.


Our TI collection is undiminished; Microsoft runs many of these collection services across multiple datacentres. Our concept has been proven, and we have rolled out similar capabilities at higher scale in many other locations worldwide. These continue to give us a detailed picture of emerging threats.

Reducing the Environmental Impact of Generative AI: a Guide for Practitioners


This article is contributed. See the original author and article here.


As generative AI’s adoption rapidly expands across various industries, integrating it into products, services, and operations becomes increasingly commonplace. However, it’s crucial to address the environmental implications of such advancements, including their energy consumption, carbon footprint, water usage, and electronic waste, throughout the generative AI lifecycle. This lifecycle, often referred to as large language model operations (LLMOps), encompasses everything from model development and training to deployment and ongoing maintenance, all of which demand diligent resource optimisation.


This guide aims to extend Azure’s Well-Architected Framework (WAF) for sustainable workloads to the specific challenges and opportunities presented by generative AI. We’ll explore essential decision points, such as selecting the right models, optimising fine-tuning processes, leveraging Retrieval Augmented Generation (RAG), and mastering prompt engineering, all through a lens of environmental sustainability. By providing these targeted suggestions and best practices, we equip practitioners with the knowledge to implement generative AI not only effectively, but responsibly.



Image Description: A diagram titled “Sustainable Generative AI: Key Concepts” divided into four quadrants. Each quadrant contains bullet points summarising the key aspects of sustainable AI discussed in this article.


Select the foundation model

Choosing the right base model is crucial to optimising energy efficiency and sustainability within your AI initiatives. Consider this framework as a guide for informed decision-making:


Pre-built vs. Custom Models

When embarking on a generative AI project, one of the first decisions you’ll face is whether to use a pre-built model or train a custom model from scratch. While custom models can be tailored to your specific needs, the process of training them requires significant computational resources and energy, leading to a substantial carbon footprint. For example, training an LLM the size of GPT-3 is estimated to consume nearly 1,300 megawatt hours (MWh) of electricity. In contrast, initiating projects with pre-built models can conserve vast amounts of resources, making it an inherently more sustainable approach.


Azure AI Studio’s comprehensive model catalogue is an invaluable resource for evaluating and selecting pre-built models based on your specific requirements, such as task relevance, domain specificity, and linguistic compatibility. The catalogue provides benchmarks covering common metrics like accuracy, coherence, and fluency, enabling informed comparisons across models. Additionally, for select models, you can test them before deployment to ensure they meet your needs.

Choosing a pre-built model doesn’t limit your ability to customise it to your unique scenarios. Techniques like fine-tuning and retrieval augmented generation (RAG) allow you to adapt pre-built models to your specific domain or task without the need for resource-intensive training from scratch. This enables you to achieve highly tailored results while still benefiting from the sustainability advantages of using pre-built models, striking a balance between customisation and environmental impact.


Model Size

The correlation between a model’s parameter count and its performance (and resource demands) is significant. Before defaulting to the largest available models, consider whether more compact alternatives, such as Microsoft’s Phi-2, Mistral AI’s Mixtral 8x7B, or similar-sized models, could suffice for your needs. The efficiency “sweet spot”, where performance gains no longer justify the increased size and energy consumption, is critical for sustainable AI deployment. Opting for smaller, fine-tuneable models (known as small language models, or SLMs) can result in substantial energy savings without compromising effectiveness.


| Model Selection | Description | Sustainability Impact |
| --- | --- | --- |
| Pre-built Models | Leverage existing models and customise with fine-tuning, RAG and prompt engineering | Reduces training-related emissions |
| Custom Models | Tailor models to specific needs and customise further if needed | Higher carbon footprint due to training |
| Model Size | Larger models offer better output performance but require more resources | Balancing performance and efficiency is crucial |


Improve the model’s performance

Improving your AI model’s performance involves strategic prompt engineering, grounding the model in relevant data, and potentially fine-tuning for specific applications. Consider these approaches:


Prompt Engineering

The art of prompt engineering lies in crafting inputs that elicit the most effective and efficient responses from your model, serving as a foundational step in customising its output to your needs. Beyond following the detailed guidelines from the likes of Microsoft and OpenAI, understanding the core principles of prompt construction—such as clarity, context, and specificity—can drastically improve model performance. Well-tuned prompts not only lead to better output quality but also contribute to sustainability by reducing the number of tokens required and the overall compute resources consumed. By getting the desired output in fewer input-output cycles, you inherently use less carbon per interaction. Orchestration frameworks like prompt flow and Semantic Kernel facilitate experimentation and refinement, enhancing prompt effectiveness with version control and reusability with templates.
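Those three principles are easiest to see side by side; a small sketch contrasting a vague prompt with one that states the task, context, and constraints explicitly (the template is one reasonable shape, not a prescribed format):

```python
vague = "Write about our product."

def build_prompt(task, context, constraints):
    """Assemble a prompt that states the task, context, and output
    constraints explicitly, reducing wasted input-output cycles."""
    return (f"Task: {task}\n"
            f"Context: {context}\n"
            f"Constraints: {constraints}")

specific = build_prompt(
    task="Write a product announcement email.",
    context="Audience: existing SMB customers; product: a new invoicing feature.",
    constraints="Friendly tone, under 120 words, end with one call to action.",
)
print(specific)
```

The specific prompt costs a few more input tokens up front, but typically saves whole regeneration cycles, which is where the real compute goes.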


Retrieval Augmented Generation (RAG)

Integrating RAG with your models taps into existing datasets, leveraging organisational knowledge without the extensive resources required for model training or extensive fine-tuning. This approach underscores the importance of how and where data is stored and accessed since its effectiveness and carbon efficiency is highly dependent on the quality and relevance of the retrieved data. End-to-end solutions like Microsoft Fabric facilitate comprehensive data management, while Azure AI Search enhances efficient information retrieval through hybrid search, combining vector and keyword search techniques. In addition, frameworks like prompt flow and Semantic Kernel enable you to successfully build RAG solutions with Azure AI Studio.
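Conceptually, a RAG flow retrieves the most relevant snippets and prepends them to the prompt; a deliberately naive keyword-overlap sketch (a real system would use Azure AI Search’s hybrid vector/keyword retrieval instead):

```python
documents = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday to Friday, 9am-5pm GMT.",
    "Premium plans include priority support.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, docs):
    """Ground the model by prepending only the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_rag_prompt("How fast are refunds processed?", documents))
```

The sustainability point is visible even in the toy version: only the relevant snippet travels with each query, rather than the whole corpus or a fine-tuned copy of the model.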



Fine-tuning

For domain-specific adjustments or to address knowledge gaps in pre-trained models, fine-tuning is a tailored approach. While it involves additional computation, fine-tuning can be a more sustainable option than training a model from scratch or repeatedly passing large amounts of context and organisational data via prompts for each query. Azure OpenAI’s use of PEFT (parameter-efficient fine-tuning) techniques, like LoRA (low-rank adaptation), consumes far fewer computational resources than full fine-tuning. Not all models support fine-tuning, so consider this in your base model selection.


| Model Improvement | Description | Sustainability Impact |
| --- | --- | --- |
| Prompt Engineering | Optimise prompts for more relevant output | Low carbon impact vs. fine-tuning, but consistently long prompts may reduce efficiency |
| Retrieval Augmented Generation (RAG) | Leverages existing data to ground the model | Low carbon impact vs. fine-tuning, depending on relevance of retrieved data |
| Fine-tuning (with PEFT) | Adapt to specific domains or tasks not encapsulated in the base model | Carbon impact depends on model usage and lifecycle; recommended over full fine-tuning |


Deploy the model

Azure AI Studio simplifies model deployment, offering various pathways depending on your chosen model. Embracing Microsoft’s management of the underlying infrastructure often leads to greater efficiency and reduced responsibility on your part.


MaaS vs. MaaP

Model-as-a-Service (MaaS) provides a seamless API experience for deploying models like Llama 3 and Mistral Large, eliminating the need for direct compute management. With MaaS, you deploy a pay-as-you-go endpoint to your environment, while Azure handles all other operational aspects. This approach is often favoured for its energy efficiency, as Azure optimises the underlying infrastructure, potentially leading to a more sustainable use of resources. MaaS can be thought of as a SaaS-like experience applied to foundation models, providing a convenient and efficient way to leverage pre-trained models without the overhead of managing the infrastructure yourself.


On the other hand, Model-as-a-Platform (MaaP) caters to a broader range of models, including those not available through MaaS. When opting for MaaP, you create a real-time endpoint and take on the responsibility of managing the underlying infrastructure. This approach can be seen as a PaaS offering for models, combining the ease of deployment with the flexibility to customise the compute resources. However, choosing MaaP requires careful consideration of the sustainability trade-offs outlined in the WAF, as you have more control over the infrastructure setup. It’s essential to strike a balance between customisation and resource efficiency to ensure a sustainable deployment.


Model Parameters

Tailoring your model’s deployment involves adjusting various parameters—such as temperature, top p, frequency penalty, presence penalty, and max response—to align with the expected output. Understanding and adjusting these parameters can significantly enhance model efficiency. By optimising responses to reduce the need for extensive context or fine-tuning, you lower memory use and, consequently, energy consumption.
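These knobs map directly onto request fields in the Azure OpenAI chat completions API; a minimal sketch of a tuned parameter set (the values and the deployment name in the comment are illustrative assumptions, not recommendations):

```python
# Generation parameters for a deterministic, concise summarisation task.
params = {
    "temperature": 0.2,        # low randomness: focused, repeatable output
    "top_p": 0.9,              # nucleus sampling cutoff
    "frequency_penalty": 0.5,  # discourage repeated phrasing
    "presence_penalty": 0.0,   # no push towards new topics
    "max_tokens": 200,         # cap response length: less compute per call
}

# With the openai SDK against an Azure OpenAI deployment, these would be
# passed straight through (client setup omitted):
# response = client.chat.completions.create(
#     model="my-gpt4o-deployment",   # illustrative deployment name
#     messages=[{"role": "user", "content": "Summarise the attached report."}],
#     **params,
# )
print(params)
```

Capping `max_tokens` (the “max response” setting) is the most direct efficiency lever: generated tokens dominate the per-request compute cost.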


Provisioned Throughput Units (PTUs)

Provisioned Throughput Units (PTUs) are designed to improve model latency and ensure consistent performance, serving a dual purpose. Firstly, by allocating dedicated capacity, PTUs mitigate the risk of API timeouts—a common source of inefficiency that can lead to unnecessary repeat requests by the end application. This conserves computational resources. Secondly, PTUs grant Microsoft valuable insight into anticipated demand, facilitating more effective data centre capacity planning.


Semantic Caching

Implementing caching mechanisms for frequently used prompts and completions can significantly reduce the computational resources and energy consumption of your generative AI workloads. Consider using in-memory caching services like Azure Cache for Redis for high-speed access and persistent storage solutions like Azure Cosmos DB for longer-term storage. Ensure the relevance of cached results through appropriate invalidation strategies. By incorporating caching into your model deployment strategy, you can minimise the environmental impact of your deployments while improving efficiency and response times.
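A semantic cache matches new prompts against previously answered ones by embedding similarity rather than exact text; a self-contained sketch with toy bag-of-words “embeddings” (a production system would use a real embedding model plus Azure Cache for Redis or Cosmos DB as the store):

```python
import math

def embed(text):
    """Toy embedding: bag-of-words counts, standing in for a real model."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, completion)

    def get(self, prompt):
        emb = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(emb, e[0]), default=None)
        if best and cosine(emb, best[0]) >= self.threshold:
            return best[1]   # cache hit: no model call, no extra compute
        return None          # cache miss: call the model, then put()

    def put(self, prompt, completion):
        self.entries.append((embed(prompt), completion))

cache = SemanticCache()
cache.put("what are your opening hours", "We are open 9am-5pm.")
print(cache.get("what are your opening hours today"))  # near-duplicate: hit
```

The threshold is the invalidation lever mentioned above: too low and users get stale or wrong answers, too high and the cache never hits.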


| Model Deployment | Description | Sustainability Impact |
| --- | --- | --- |
| Model-as-a-Service (MaaS) | Serverless deployment, managed infrastructure | Lower carbon intensity due to optimised infrastructure |
| Model-as-a-Platform (MaaP) | Flexible deployment, self-managed infrastructure | Higher carbon intensity, requires careful resource management |
| Provisioned Throughput Units (PTUs) | Dedicated capacity for consistent performance | Improves efficiency by avoiding API timeouts and redundant requests |
| Semantic Caching | Store and reuse frequently accessed data | Reduces redundant computations, improves efficiency |


Evaluate the model’s performance

Model Evaluation

As base models evolve and user needs shift, regular assessment of model performance becomes essential. Azure AI Studio facilitates this through its suite of evaluation tools, enabling both manual and automated comparison of actual outputs against expected ones across various metrics, including groundedness, fluency, relevancy, and F1 score. Importantly, assessing performance also means scrutinising your model for risk and safety concerns, such as the presence of self-harm, hateful, and unfair content, to ensure compliance with an ethical AI framework.
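Of the metrics listed, F1 is the most mechanical: it is typically computed as the harmonic mean of token-level precision and recall between the model’s output and a reference answer. A minimal sketch of that calculation:

```python
from collections import Counter

def token_f1(prediction, reference):
    """QA-style F1: harmonic mean of token precision and recall."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)   # how much of the output is relevant
    recall = overlap / len(ref)       # how much of the reference is covered
    return 2 * precision * recall / (precision + recall)

print(token_f1("the model was deployed in april", "the model deployed in april"))
```

Groundedness, fluency, and relevancy, by contrast, are model-graded in Azure AI Studio rather than computed from token overlap.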


Model Performance

Model deployment strategy—whether via MaaS or MaaP—affects how you should monitor resource usage within your Azure environment. Key metrics like CPU, GPU, memory utilisation, and network performance are vital indicators of your infrastructure’s health and efficiency. Tools like Azure Monitor and Azure carbon optimisation offer comprehensive insights, helping you check that your resources are allocated optimally. Consult the Azure Well-Architected Framework for detailed strategies on balancing performance enhancements with cost and energy efficiency, such as deploying to low-carbon regions, ensuring your AI implementations remain both optimal and sustainable.


A Note on Responsible AI

While sustainability is the main focus of this guide, it’s important to also consider the broader context of responsible AI. Microsoft’s Responsible AI Standard provides valuable guidance on principles like fairness, transparency, and accountability. Technical safeguards, such as Azure AI Content Safety, play a role in mitigating risks but should be part of a comprehensive approach that includes fostering a culture of responsibility, conducting ethical reviews, and combining technical, ethical, and cultural considerations. By taking a holistic approach, we can work towards the responsible development and deployment of generative AI while addressing potential challenges and promoting its ethical use.



As we explore the potential of generative AI, it’s clear that its use cases will continue to grow quickly. This makes it crucial to keep the environmental impact of our AI workloads in mind.


In this guide, we’ve outlined some key practices to help prioritise the environmental aspect throughout the generative AI lifecycle. With the field changing rapidly, make sure to stay up to date with the latest developments and keep learning.



Special thanks to the UK GPS team who reviewed this article before it was published. In particular, Michael Gillett, George Tubb, Lu Calcagno, Sony John, and Chris Marchal.

Early adopters of Microsoft Copilot in Dynamics 365 Guides recognize the potential for productivity gains


This article is contributed. See the original author and article here.

In this era of rapid technological advancement, our industrial landscape is undergoing a significant transformation that affects many processes and people—from the way operational technology (OT) production data is leveraged to how frontline workers perform their jobs. While 2.7 billion skilled individuals keep manufacturing operations going, their attrition and retirement rates are on the rise. This heightened turnover is contributing to an ever-widening skills gap, pressuring organizations to look beyond traditional working and skilling to extend capabilities and ensure growth.

Microsoft developed Dynamics 365 Guides to address these challenges. The integration of Microsoft Copilot into Guides brings generative AI to this mixed reality solution. Copilot in Dynamics 365 Guides transforms frontline operations, putting AI in the flow of work, giving skilled and knowledge workers access to relevant information where and when they need it. This powerful combination—mixed reality together with AI—provides insight and context, allowing workers to focus on what truly matters.

Generative AI represents an enormous opportunity for manufacturers

With 63% of workers struggling to complete the repetitive tasks that take them away from more meaningful work, many are looking eagerly to technology for assistance. Generative AI addresses these realities by equipping skilled assembly, service, and knowledge workers with the information necessary to keep manufacturing moving. Integrating Copilot into Guides furthers Microsoft’s commitment to this underserved group within enterprises. Workers are already using Copilot in Dynamics 365 Field Service to complete repair and service work orders faster, boosting overall productivity. Copilot is creating efficiencies for organizations worldwide, and though Copilot in Guides is still in private preview, we’re excited to see how it unlocks frontline operations and use cases.

Copilot makes information and insight readily available. Generative AI enables Guides to put these details in context against neighboring machine components and functions, enabling technicians to repair and service faster. Copilot removes the guesswork or need to carry around those dusty old manuals. Users can ask questions using their natural language and simple gestures. Copilot summarizes relevant information to provide timely virtual guidance overlaid on top of their environment.

Manufacturers will see this innovation firsthand at Hannover Messe 2024. Partnering with Volvo Penta and BMW Group, Microsoft will illustrate generative AI’s potential on service and manufacturing frontlines. Read what we have planned at Hannover with Volvo and BMW, and what other private preview customers are doing with Copilot.

Volvo Penta is focused on transforming training in the field

Volvo Penta, a global leader in sustainable power solutions, is always looking for ways to utilize new technology to increase efficiency and accuracy and has recently been utilizing augmented reality (AR) capabilities that enhance worker training and productivity. As an early adopter of Guides and Microsoft HoloLens 2, Volvo Penta was eager to participate in the private preview for Copilot in Dynamics 365 Guides. For Volvo Penta, Copilot is another technology with the potential to unlock further value for their stakeholders.

Volvo Penta is part of a conceptual innovation exploration to evaluate how Copilot can help optimize the training of entry-level technicians by enhancing self-guided instruction. As Volvo Penta's Director of Diagnostics put it, "Copilot makes it feel as though a trainer is always on hand to answer questions in the context of your workflow." Locating 10 to 15 sensors used to take new technicians an hour or more; now it takes only five minutes. This time savings has the potential to significantly increase productivity and learning retention, helping Volvo Penta, its customers, and its dealers accomplish more. The company continues to innovate with AI and mixed reality solutions to modernize service and streamline frontline operations.

At Hannover Messe 2024, the company is showcasing how Copilot could serve its customers to improve uptime and productivity. In the demo scenario, Volvo Penta envisions its ferry captains using Copilot to address a filter issue prior to departure. With no service technician onboard, the captain troubleshoots and replaces the filter, using Copilot and HoloLens 2 for step-by-step guidance.

[Image: Overhead view of a person looking at a large piece of equipment. Source: Volvo Penta]

See how Volvo Penta streamlines frontline operations with Copilot in Dynamics 365 Guides

BMW Group is pushing the boundaries of vehicle design and development

BMW Group is improving its product lifecycle, incorporating generative AI, human-machine interactions, and software-hardware integrations for better predictability, optimization, and vehicle innovation. As a global HoloLens 2 customer, BMW Group has spent the last couple of years developing its own immersive experiences and metaverse using mixed reality. Now participating in the private preview for Copilot in Dynamics 365 Guides, the company is exploring how the combination of mixed reality and generative AI can push the boundaries of innovation.

In the private preview, BMW Group's digitization and virtual reality (VR) team within research and development (R&D) is the first to evaluate Copilot's potential impact on design and development. With Copilot, product designers and engineers are simulating how the use of different materials and components impacts vehicle design and its environmental footprint. The insights gained through this approach will help BMW Group optimize engineering and production processes. The organization believes generative AI will also benefit its aftersales frontline workers, providing them access to expert knowledge and guidance whenever and wherever it is needed.

This joint collaboration will ultimately enable BMW Group to spark innovation and target the use cases that drive its own digital transformation forward.

Chevron is exploring the potential impact on frontline operations

AI, automation, and mixed reality solutions are poised to reshape industries everywhere. Within energy, a focus on safety and the desire to accelerate skilling has Chevron looking to advance the capabilities of its frontline workers for the future. Copilot in Dynamics 365 Guides offers Chevron the opportunity to optimize these operations, empower its workers, and infuse informed decisions throughout its value chain. AI and mixed reality together enable Chevron to define energy in human terms.

Through the private preview for Copilot in Dynamics 365 Guides, Chevron is exploring new use cases at its El Segundo Refinery that could unlock further enhancements in worker skilling and safety.

Get started with Copilot in Dynamics 365 Guides

Interested customers can get started by deploying Dynamics 365 Guides and Dynamics 365 Remote Assist on either HoloLens 2 or mobile devices. If you want to see how AI can transform your workforce, learn how you can start implementing Microsoft Copilot today.

The post Early adopters of Microsoft Copilot in Dynamics 365 Guides recognize the potential for productivity gains appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

LeVar Burton joins Vasu Jakkal to share his hope for transformative technologies like generative AI


This article is contributed. See the original author and article here.

LeVar Burton, known for his role as Chief Engineer Geordi La Forge in Star Trek and as the host and executive producer of the beloved PBS children's series Reading Rainbow, recently sat down for a one-on-one chat with CVP of Microsoft Security, Vasu Jakkal, to discuss the impact of generative AI on our world.


Figure 1: LeVar Burton, pop culture icon, content creator, and literacy advocate

The conversation began with a discussion of the impact of Star Trek on both speakers’ lives. Burton spoke about how seeing actress Nichelle Nichols on the bridge of the USS Enterprise meant the world to him, as it showed him what creator Gene Roddenberry said was true: “When the future came, there would be a place for me.” Jakkal shared how Star Trek was a pivotal influence in her childhood and is in part responsible for her career in cybersecurity. “Star Trek is a perfect example of what we imagine is what we create in this realm. Human beings, we are manifesting machines,” said Burton. “And Star Trek has been responsible for helping to sow the seeds of germination for a lot of different technologies that are in use in our everyday lives today.”


Figure 2: Vasu Jakkal and LeVar Burton discussing Star Trek's impact on technology and their hope for how generative AI will transform our world.

Generative AI (GenAI) is the transformational technology of our generation. So, we asked LeVar Burton, one of the world's foremost storytellers and champions of learning through his work on Reading Rainbow, to help us tell the story of how GenAI will improve education and opportunities for everyone across the globe. In addition to reshaping our everyday lives, emails, and meetings, GenAI is changing how security work gets done. These new solutions, like Microsoft Copilot for Security, help SecOps professionals make sense of large amounts of data at machine speed. They simplify the complex to help defenders find a needle in a haystack, or even a specific needle in a needle stack. Jakkal also discussed how AI can help reduce the talent shortage in the security industry and make it more diverse.


The Microsoft mission is to empower every person and organization in the world to achieve more. And the security mission is to build a safer world for all. Burton expressed his hope that generative AI will help in ways that we haven’t thought of before, referencing the cultural shift that happened in just eight nights when the groundbreaking television miniseries Roots aired. “My hope, my prayer is that generative AI can help us educate our kids in ways that we haven’t been able to and perhaps haven’t even thought of,” stated Burton. He also emphasized the importance of making GenAI safe and accessible to all. Jakkal agreed, touching on the importance of responsibility when using AI, mentioning the Microsoft responsible AI framework—a set of steps to ensure AI systems uphold six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.  


Central to the conversation was the concept of hope, and hope for the future. Burton said the younger generation gives him hope, as they see the world and technology in a different way. Jakkal expressed her hope that we can use GenAI to change the world in a good way, by working together and being responsible. Jakkal closed the discussion by saying “I think collectively together we have to use generative AI and the technologies that we have to change this course. Storytelling, the narrative to change the narrative to one of optimism, to one of hope, to one of inclusion… for all and done by all.”  


Watch the full video here: