This article is contributed. See the original author and article here.
Field service management is a complex process that requires seamless coordination among technicians, customers, equipment, and suppliers. To address these complexities, we have introduced two new capabilities in Dynamics 365 Field Service.
The first capability, Complete Work Order by Status, lets field service organizations use a booking status to indicate whether further work is required to complete a work order. Technicians gain the flexibility to flag when additional work is needed, and field service teams gain better insight into the tasks still required to fulfill a work order.
The second capability, Complete Booking While Preserving End-Time, ensures that when a booking is marked as completed by someone other than the assigned resource, the end-time value will no longer automatically update to the current timestamp. Instead, it will retain the end-time value of the booking. This helps ensure accuracy and consistency in the record-keeping process.
These features aim to optimize service delivery, reduce costs, and enhance customer satisfaction. Now, let’s explore each feature in greater detail.
Complete Work Order by Status
Why is it needed?
In the Field Service industry, it’s not uncommon that what begins as a routine service request can morph into a complex task, requiring multiple follow-up visits to finalize a work order.
Consider a scenario where a commercial building is experiencing recurrent refrigerant leaks from its rooftop HVAC unit. Typically, these leaks are addressed by replacing a worn-out seal. However, upon closer examination, it becomes evident that the evaporator coil is corroded and requires replacement. This insight only emerges once the field service technician arrives on site. When this occurs, technicians often find themselves without the parts required for immediate repairs. They then need to contact suppliers, procure the necessary parts, and schedule follow-up visits, none of which was factored in when the work order was created. In this scenario, the technician would close the booking while still needing to acknowledge the need for a follow-up visit to complete the work order.
Previously, field service teams encountered challenges in accurately reflecting this information without implementing custom logic. However, with the introduction of the “Complete Work Order by Status” feature, these unexpected visits can now be effortlessly marked as requiring follow-up by utilizing a booking status to indicate the need for further work to fulfill the work order.
How it works
To configure this feature, administrators should access the Resources section within Dynamics 365 Field Service. Navigate to Booking settings and choose Booking Status. Here, administrators can either select an existing completed status or create a new one. Next, on the Field Service tab of the selected booking status, locate the "Field Service Status" setting and switch the "Status Completes Work Order" toggle to "off".
Upon adjusting this setting, technicians can utilize the newly configured booking status to indicate both the completion of a booking and the need for follow-up work on the associated work order. This adjustment optimizes the workflow for field service teams and enhances their understanding of the tasks necessary for work order fulfillment.
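For administrators who prefer to script this change, the same setting can in principle be applied through the Dataverse Web API. The sketch below is illustrative only: the field logical name (msdyn_statuscompletesworkorder), the record ID, and the environment URL are assumptions, so check your environment's metadata before using anything like this.

```python
# Minimal sketch: flipping the "Status Completes Work Order" setting on a booking
# status record through the Dataverse Web API. The field logical name below is a
# hypothetical placeholder -- verify the actual logical name in your environment.
import requests

org_url = "https://yourorg.crm.dynamics.com"                 # assumption: your environment URL
booking_status_id = "00000000-0000-0000-0000-000000000000"   # placeholder GUID
access_token = "<OAuth bearer token for Dataverse>"          # acquired separately

response = requests.patch(
    f"{org_url}/api/data/v9.2/bookingstatuses({booking_status_id})",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    },
    # Hypothetical logical name for the "Status Completes Work Order" toggle.
    json={"msdyn_statuscompletesworkorder": False},
)
response.raise_for_status()
print("Booking status updated:", response.status_code)
```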
Complete Booking While Preserving End Time
Why is it needed?
Field Service technicians are the backbone of service operations, tirelessly striving to meet deadlines, resolve customer issues, and ensure tasks are completed promptly. However, amid their hectic schedules, oversights can arise, such as forgetting to mark a job as "completed" once it is finished. In such cases, dispatchers or field service managers step in to manually update the booking status on behalf of the technician.
For example, a dispatcher schedules a repair job from 1:00 PM to 2:00 PM. Despite an unforeseen delay, the technician completes the task at 2:00 PM but forgets to update the booking status to “completed.”
The dispatcher later notices the oversight and manually marks the booking as completed at 9:00 AM the following day.
Previously, this would inaccurately reflect a job duration of 19 hours, with the end time value set to 9:00 AM. With the implementation of the new “Complete bookings while preserving end-time” logic, when a user other than the assigned resource updates the booking to complete on behalf of the technician, the original end-time value of 2:00 PM is maintained.
How it works
Exciting update: No setup is needed! When a booking is marked as completed by someone other than the assigned resource, the end-time value will no longer automatically update to the current timestamp. Instead, it will retain the end-time value prior to completion. This ensures accuracy and consistency in the record-keeping process.
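To make the behavior concrete, here is a minimal, illustrative sketch of the completion logic. This is not the actual Field Service implementation; it simply shows the rule described above.

```python
# Illustrative sketch of the new completion logic -- not the actual Field Service
# implementation. The end time is only stamped with "now" when the assigned
# resource completes their own booking; otherwise the existing value is kept.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Booking:
    assigned_resource: str
    end_time: datetime

def complete_booking(booking: Booking, completed_by: str, now: datetime) -> Booking:
    if completed_by == booking.assigned_resource:
        booking.end_time = now  # technician closes their own booking
    # else: dispatcher/manager closes it on the technician's behalf -> end time preserved
    return booking

booking = Booking("technician", datetime(2024, 4, 1, 14, 0))   # job ended at 2:00 PM
complete_booking(booking, "dispatcher", datetime(2024, 4, 2, 9, 0))
print(booking.end_time)   # 2024-04-01 14:00 -- preserved, not overwritten to 9:00 AM next day
```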
We’re eagerly anticipating your experience with these new updates! Feel free to share your thoughts with us.
Share Your Feedback for Continuous Improvement
These new capabilities for Dynamics 365 Field Service are designed to simplify tasks for technicians and empower field service teams with enhanced tracking abilities for work orders and bookings. These advancements offer precision and efficiency, driving improvements in service quality, cost reduction, and customer satisfaction.
Explore more on Dynamics 365 Field Service documentation and share your feedback within the Field Service product or via our ideas portal. Your input drives continuous improvement for enhanced operational performance.
This article is contributed. See the original author and article here.
In October 2023, Microsoft once again partnered with Fortra's Terranova Security to kick off the annual Gone Phishing Tournament. The Gone Phishing Tournament (GPT) is an annual online phishing initiative that uses real-world simulations to establish accurate phishing clickthrough rates and additional benchmarking statistics for user behaviors. This helps organizations strengthen their security awareness training programs with accurate phishing benchmarking data.
In this blog, we would like to share the key takeaways from this report and provide insights on what it means to improve organizational resilience against phishing and social engineering attacks with tools like Attack Simulation and Training.
Nearly 300 organizations across 142 countries participated in the 2023 tournament, and more than 1.37 million users received the event's phishing simulation email. This was an exciting 10% year-over-year increase in participants and provided a strong look at benchmarking statistics across many industries.
Gone Phishing Tournament Outcome
Phishing is one of the most common and effective cyberattacks that target individuals and organizations.
The overall click rate in this year's GPT was 10.4%, and 6.5% of recipients went on to submit their credentials.
This means that roughly 3 out of every 5 users who clicked the phishing link did not recognize the phishing attempt and submitted their credentials.
This suggests that continual phishing awareness education is incredibly important to protect your organization. With new AI and LLM technology, bad actors can set up attacks and create credible-looking phishing messages even faster than before.
While we continue to develop technical safeguards such as better phishing message detection, it's important to recognize that humans are still the last line of defense between bad actors and the security of your organization. It is vital that organizations continue to educate their employees to reinforce awareness and security.
To put it into perspective, if this phishing simulation had been a real attack, almost 90,000 passwords could have been collected from the participating organizations. This data could have been used for nefarious purposes such as account takeover, business email compromise, and credential stuffing attacks.
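A quick back-of-the-envelope check of the figures reported above:

```python
# Back-of-the-envelope check of the reported GPT figures.
recipients = 1_370_000          # users who received the simulation email
click_rate = 0.104              # 10.4% clicked the phishing link
credential_rate = 0.065         # 6.5% of recipients submitted credentials

clicked = recipients * click_rate
submitted = recipients * credential_rate

print(f"Clicked the link:      {clicked:,.0f}")
print(f"Submitted credentials: {submitted:,.0f}")                     # roughly 89,000 passwords
print(f"Share of clickers who submitted: {submitted / clicked:.0%}")  # ~62%, about 3 in 5
```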
What’s next?
It is important to build a security-first culture within your organization. Leverage your simulation and awareness programs to set realistic goals, use engaging training content, develop ongoing training programs, regularly assess and refine your security strategy, and foster company-wide security awareness.
In this changing landscape of cybersecurity and threats, mitigating the human risk factor and strengthening your organization’s resilience against social engineering is more important than ever. Phishing simulations can help individuals continue to stay vigilant against these threats.
We hope to continue working with our customers and partners to further invest in user education and security programs that match different organizational needs. Attack Simulation and Training, part of Microsoft Defender for Office 365 Plan 2, helps organizations train their end users with realistic phishing simulations and security training.
Attack Simulation and Training is an intelligent phish risk reduction tool that measures behavior change and automates deployment of an integrated security awareness training program across an organization. It is available with Microsoft 365 E5 or Microsoft Defender for Office 365 P2 plan.
Learn more:
To learn more about Microsoft Security solutions, visit our website. Bookmark the security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
In the relentless pursuit of customer-centric excellence, optimizing the return process emerges as a pivotal strategy for both customer satisfaction and operational prowess. This blog post delves into Dynamics 365 SCM Warehouse Management's transformative role in enhancing the customer return receiving process, showcasing a strategic framework that promises an elevated customer experience and operational efficiencies.
Elevating Standard Processes
In the traditional realm of return management, the issuance of a Return Material Authorization (RMA) order has been the cornerstone. This process, while effective, faces challenges in scenarios where the reason for return is not immediately apparent. Our executive journey begins with understanding the standard RMA process and its limitations.
The current sales return approach with an RMA order, while effective in many respects, introduces potential delays and customer frustration due to its structured and sometimes rigid procedures. The complexity of documentation, especially in unplanned return scenarios, coupled with communication gaps and limited automation, poses challenges. Additionally, as businesses scale, the classic RMA order process may face scalability concerns, raising questions about its ability to efficiently handle increased volumes of return requests. Addressing these limitations is crucial for executives aiming to optimize return management processes, enhance customer satisfaction, and ensure operational resilience. The previous return system also does not support the creation of return labels, which are a must-have for many industries, such as e-commerce.
Image: Warehouse worker managing return orders.
Enhanced processes in customer return receiving
In the innovative landscape of our enhanced customer return receiving process within Dynamics 365 SCM, the traditional requirement for a pre-existing Return Material Authorization (RMA) order has been reimagined. With the enhanced customer return process, we can pre-generate return data and create return labels that can be included in a shipment and used for a seamless return of items. If items need to be returned, customers now have the flexibility to initiate returns through two distinct scenarios, 'Blind Return' and 'Return Details', eliminating the upfront necessity of an RMA order.
Blind Return Scenario: In the 'Blind Return' scenario, customers can initiate return requests without disclosing specific details initially. We still need a customer number to assign to the return order (it can be a dummy customer) and the item numbers, but we do not need an issued return order number. This approach caters to situations where the reason for return may not be immediately apparent or where customers prefer a swift and straightforward process. By bypassing the need for upfront details, this streamlined approach accelerates the return initiation phase, enhancing the overall customer experience.
Return Details Scenario: Alternatively, the 'Return Details' scenario allows customers to provide comprehensive information about the return from the outset. This more structured approach is ideal for situations where a detailed explanation of the reason for return, a return deadline, or additional information is available. It facilitates smoother and more informed return processing, using the pre-generated RMA number included on the return labels, and enables the receiving team to address customer needs with precision and efficiency.
Crucially, what sets this enhanced process apart is the automation that follows these customer-initiated returns. In the subsequent stages, the system intelligently generates the necessary RMA order automatically in the background. The post-return generation of the RMA order streamlines the process, combining customer convenience with the structured documentation needed for efficient internal processing. As we evolve in our commitment to customer-centric operations, this innovative approach sets the stage for a more agile, responsive, and efficient return management process. In the image below, we can see an illustration of the return receiving process:
Image: Return orders process.
Conclusion
In the executive suite, strategic decisions are paramount. The enhanced customer return receiving process in Dynamics 365 SCM Warehouse Management is not merely a feature; it’s a strategic tool that aligns our organization with the demands of modern business. As we navigate the complexities of customer interactions and operational excellence, let us leverage this innovation to propel our brand into the forefront of customer-centric leadership.
This article is contributed. See the original author and article here.
Challenge
Register for the VS Code Day Skills Challenge! Whether you’re just starting or looking to change your career, this program is designed for you to get to know VS Code and GitHub Copilot in different career areas like Data Science, Artificial Intelligence, and much more! With easy-to-follow lessons, exercises, and live workshops, learn what’s new in VS Code. Register now and discover the world of opportunities offered by VS Code at: https://aka.ms/VSCodeDayChallenge!
VS Code Day is our annual event where you’ll learn how to elevate your development workflow with the latest and greatest features of VS Code. This year, we’re excited to delve into AI and you’ll hear from the VS Code team and other industry experts on topics like AI-powered programming with GitHub Copilot, building and deploying generative AI apps to the cloud, enhancing the C# development experience, and more!
Whether you’re just starting out or you’re an experienced developer, join us on April 24th, 2024 for a day focused on the editor that lets you code anything, cross-platform and free!
Acknowledgements
By completing the 11 modules in this challenge you will be able to earn a badge in your Microsoft Learn profile! Stay tuned for more information on social media and on this blog – we’ll share more about this #VSCodeDay’s events!
IMPORTANT: Your badge will be added to your Microsoft Learn profile within 1 week of the challenge end date.
Frequently Asked Questions
When does this challenge start and end? It starts on April 24, 2024 and ends on May 17, 2024.
How much experience do I need? Just basic programming skills.
How much time should I dedicate to it per day? It is designed so you can learn according to your needs and available time; just remember that you have to complete it by May 17.
Prerequisites: None.
Is there a cost to participate in this challenge? There is no cost.
Sessions and Live Workshops
Get ready to find out what’s new in VS Code with our livestream series! It will be full of tips, tricks, and practical exercises to help you with your personal and professional projects. Whether you’re just starting out or looking to improve your skills, this is a must-see event for anyone interested in programming.
To register for all of these #VSCodeDay sessions, visit aka.ms/VSCodeDay! It starts on April 24 from 11 am to 6 pm (GMT-6)*.
Live session
Speaker(s)
Social Media
Keynote: View Source: What gets into VS Code and why
If you’ve used VS Code before, please let us know in the comments about your experience: your favorite extensions, your favorite sessions during the livestreams, and even if it’s the first time you’ve heard about VS Code!
We’d love to hear your favorite/funny stories from the times you’ve coded in VS Code!
Tag us on social media using the following hashtag: #VSCodeDayCSC
This article is contributed. See the original author and article here.
The Global AI Bootcamp is a free annual event held in cities all around the world to promote, teach, and discuss:
Artificial Intelligence
OpenAI
Azure OpenAI
LLMs
LangChain
RAG
Chatbots
And much more!
The event is organized by the Global AI Community and is held in partnership with several local companies and organizations. And I would like to share some big news with the entire technical community of Rio de Janeiro!
In 2024, the Global AI Bootcamp will take place in Rio de Janeiro! It will be a free, in-person event featuring:
Talks
Workshops
Hands-on labs
Discussions
Event Outline
Let's now take a look at what will be covered at the event:
Agenda
09:00 – Keynote – Global AI Bootcamp (Opening)
09:05 – Main Talk – Glaucia Lemos: How to Build an Application with Copilot and Generative AI?
The event will be held at Faculdade Ibmec in Barra da Tijuca, located at Avenida Armando Lombardi, 940 – Barra da Tijuca, Rio de Janeiro – RJ.
To summarize the key information:
Date: April 19, 2024
Time: 09:00 to 14:00
Venue: Faculdade Ibmec, Barra da Tijuca
Address: Avenida Armando Lombardi, 940 – Barra da Tijuca, Rio de Janeiro – RJ
Registration
Registration for the event is now open and spots are limited, so click the image below to secure your spot!
Registering through the event's official website is extremely important for attendees: if you decide to take one of the workshops being run, a key will be generated so you can use OpenAI resources for free!
Support material for those already registered!
If you have already registered for the event, access the support material right away; it will be very helpful in preparing for the event:
Artificial Intelligence is one of the fastest-growing areas in today's technology market, and events like the Global AI Bootcamp are essential for learning about and discussing these new technologies and their impact on our personal and, above all, professional lives.
So don't miss the opportunity to take part in this great event in Rio de Janeiro in 2024. I hope to see you there!
This article is contributed. See the original author and article here.
Challenge
Register for the VS Code Day Skills Challenge! Whether you're just starting out or looking to change careers, this program is designed to introduce you to VS Code and GitHub Copilot across professional areas such as Data Science, Artificial Intelligence, and Web Development. With easy-to-follow lessons, hands-on exercises, and live workshops, learn what's new in VS Code. Register now and discover the world of opportunities VS Code offers at https://aka.ms/VSCodeDayChallenge!
VS Code Day is our annual event where you'll learn how to improve your development workflow with the latest and greatest features of VS Code. This year we're especially excited because we'll have sessions focused on Artificial Intelligence, and you'll hear the VS Code team and industry experts talk about topics such as GitHub Copilot, building and deploying generative AI apps to the cloud, improving the C# development experience, and much more!
Whether you're just getting started in programming or you're an experienced developer, join us on April 24, 2024 to learn more about this great free code editor that lets you code anything!
Acknowledgements
By completing the 11 modules in this challenge, you can earn a digital badge on your Microsoft Learn profile for finishing the experience. Stay tuned for more information on social media and on this blog; we'll share more about this #VSCodeDay's events!
When does this challenge start and end? It starts on April 24, 2024 and ends on May 17, 2024.
How much experience do I need? Just basic programming skills.
How much time should I dedicate per day? It is designed so you can learn according to your needs and available time; just remember to complete it by May 17.
Prerequisites: None.
Is there any cost? There is no cost.
Live Talks and Workshops
Get ready to discover what's new in VS Code with our series of live talks and workshops! They will be full of tips, tricks, and hands-on exercises to help you with your personal and professional projects. Whether you're just starting out or looking to improve your skills, this is a must-see event for anyone interested in programming.
To register for all of these #VSCodeDay talks, visit aka.ms/VSCodeDay! It starts on April 24 from 11 am to 6 pm (GMT-6)*.
*Mexico City time
Event
Speaker(s)
Social Media
Keynote: View Source: What gets into VS Code and why
If you've used VS Code before, please tell us about your experience in the comments: your favorite extensions, what you thought of the event talks, and even if this is the first time you've heard of VS Code!
We'd love to hear your favorite or funny stories from the times you've coded in VS Code!
Tag us on social media using the hashtag #VSCodeDayCSC
This article is contributed. See the original author and article here.
At Microsoft, trust is the foundation of everything we do. As more organizations adopt Copilot in Dynamics 365 and Power Platform, we are committed to helping everyone use AI responsibly. We do this by ensuring our AI products deliver the highest levels of security, compliance, and privacy in accordance with our Responsible AI Standard—our framework for the safe deployment of AI technologies.
Take a moment to review the latest steps we are taking to help your organization securely deploy Copilot guided by our principles of safety, security, and trust.
Copilot architecture and responsible AI principles in action
Let’s start with an overview of how Copilot works, how it keeps your business data secure and adheres to privacy requirements, and how it uses generative AI responsibly.
First, Copilot receives a prompt from a user within Dynamics 365 or Power Platform. This prompt could be in the form of a question that the user types into a chat pane, or an action, such as selecting a button labeled “Create an email.”
Copilot processes the prompt using an approach called grounding, which might include retrieving data from Microsoft Dataverse, Microsoft Graph, or external sources. Grounding improves the relevance of the prompt, so the user gets responses that are more appropriate to their task. Interactions with Copilot are specific to each user. This means that Copilot can only access data that the current user has permissions to.
Copilot uses Azure OpenAI Service to access powerful generative AI models that understand natural language inputs, and it returns a response to the user in the appropriate form. For example, a response might be in the form of a chat message, an email, or a chart. Users should always review the response before taking any action.
How Copilot uses your proprietary business data
Responses are grounded in your business content and business data. Copilot has real-time access to both your content and context to generate answers that are precise, relevant, and anchored in your business data for accuracy and specificity. This real-time access goes through our Dataverse platform (which includes all Power Platform connectors), honoring the data loss prevention and other security policies put in place by your organization. We follow the Retrieval-Augmented Generation (RAG) pattern, which augments the capabilities of language models by adding dynamic grounding data to the prompt that we send to the model. Our system dynamically looks up the relevant data schema using our own embedding indexes and then uses the language models to help translate the user's question into a query that we can run against the system of record.
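To illustrate the grounding flow described above, here is a highly simplified sketch of the RAG pattern using the Azure OpenAI Python client. This is not Copilot's internal code; the endpoint, API version, deployment name, and the retrieval helper are placeholders and assumptions.

```python
# Simplified sketch of the Retrieval-Augmented Generation (RAG) pattern described
# above -- not Copilot's internal implementation. Endpoint, API version, deployment
# name, and the retrieval helper are placeholders/assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # assumption
    api_key="<key or Entra ID token>",
    api_version="2024-02-01",                                  # assumption
)

def retrieve_grounding_data(user_prompt: str) -> str:
    """Placeholder: look up relevant records/schema the current user is allowed
    to see (for example, from Dataverse), respecting the user's permissions."""
    return "Opportunity 'Contoso renewal': stage=Propose, estimated close 2024-06-30"

user_prompt = "Draft a follow-up email for the Contoso renewal opportunity."
grounding = retrieve_grounding_data(user_prompt)

response = client.chat.completions.create(
    model="gpt-4o",  # your Azure OpenAI deployment name (assumption)
    messages=[
        {"role": "system", "content": "Answer using only the provided business data."},
        {"role": "user", "content": f"Business data:\n{grounding}\n\nTask: {user_prompt}"},
    ],
)
print(response.choices[0].message.content)
```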
We do not use your data to train language models. We believe that our customers’ data is their data in accordance with Microsoft’s data privacy policy. AI-powered language models are trained on a large but limited corpus of data—but prompts, responses, and data accessed through Microsoft Graph and Microsoft services are not used to train Copilot for Dynamics 365 or Power Platform capabilities for use by other customers. Furthermore, the models are not improved through your usage. This means that your data is accessible only by authorized users within your organization unless you explicitly consent to other access or use.
How Copilot protects business information and data
Enterprise-grade AI, powered by Azure OpenAI Service. Copilot is powered by the trusted and compliant Azure OpenAI Service, which provides robust, enterprise-grade security features. These features include content filtering to identify and block output of harmful content and protect against prompt injections (jailbreak attacks), which are user prompts that provoke the generative AI model into behaving in ways it was trained not to. Azure AI services are designed to enhance data governance and privacy and adhere to Microsoft’s strict data protection and privacy standards. Azure OpenAI also supports enterprise features like Azure Policy and AI-based security recommendations by Microsoft Defender for Cloud, meeting compliance requirements with customer-managed data encryption keys and robust governance features.
Built on Microsoft’s comprehensive approach to security, privacy, and compliance. Copilot is integrated into Microsoft Dynamics 365 and Power Platform. It automatically inherits all your company’s valuable security, compliance, and privacy policies and processes. Copilot is hosted within Microsoft Cloud Trust Boundary and adheres to comprehensive, industry-leading compliance, security, and privacy practices. Our handling of Copilot data mirrors our treatment of other customer data, giving you complete autonomy in deciding whether to retain data and determining the specific data elements you wish to keep.
Safeguarded by multiple forms of protection. Customer data is protected by several technologies and processes, including various forms of encryption. Service-side technologies encrypt organizational content at rest and in transit for robust security. Connections are safeguarded with Transport Layer Security (TLS), and data transfers between Dynamics 365, Power Platform, and Azure OpenAI occur over the Microsoft backbone network, ensuring both reliability and safety. Copilot uses industry-standard secure transport protocols when data moves over a network—between user devices and Microsoft datacenters or within the datacenters themselves.
Watch this presentation by James Oleinik for a closer look at how Copilot allows users to securely interact with business data within their context, helping to ensure data remains protected inside the Microsoft Cloud Trust Boundary. You’ll also learn about measures we take to ensure that Copilot is safe for your employees and your data, such as how Copilot isolates business data from the language model so as not to retrain the AI model.
Architected to protect tenant, group, and individual data. We know that data leakage is a concern for customers. Microsoft AI models are not trained on and don’t learn from your tenant data or your prompts unless your tenant admin has opted in to sharing data with us. Within your environment, you can control access through permissions that you set up. Authentication and authorization mechanisms segregate requests to the shared model among tenants. Copilot utilizes data that only you can access. Your data is not available to others.
Committed to building AI responsibly
As your organization explores Copilot for Dynamics 365 and Power Platform, we are committed to delivering the highest levels of security, privacy, compliance, and regulatory commitments, helping you transform into an AI-powered business with confidence.
This article is contributed. See the original author and article here.
The timeline is a crucial tool for users to monitor customer engagements, track activities, and stay updated on record progress. With Generative AI, we’re introducing timeline highlights, enabling users to grasp activity details in milliseconds.
Streamlined timeline highlights revolutionize the way users interact with essential activities such as emails, notes, appointments, tasks, phone calls, and conversations. With a single click, agents gain access to summaries of key events, including records like cases, accounts, contacts, leads, opportunities, and customized entities.
Agents save time with timeline highlights
This new feature optimizes agent productivity, eliminating the need for excessive clicks and extra reading. Agents can efficiently absorb crucial information, enabling faster and more transparent interactions with customers. Users can expand the highlights section in the timeline by clicking on the chevron.
The highlights show relevant items in a clear and concise bulleted format, facilitating quick analysis and easy reference. The copy functionality empowers users to reuse content by pasting it into notes, with the flexibility to make modifications as needed.
In summary, our innovative approach to timelines, driven by generative AI technology, offers users a transformative experience. Consequently, agents can effortlessly track customer engagements and monitor progress with unparalleled speed and accuracy.
The timeline highlights feature is available in apps such as Dynamics 365 Customer Service, Dynamics 365 Sales, Dynamics 365 Marketing, Dynamics 365 Field Service, and custom model-driven Power Apps, providing a unified experience across Dynamics 365.
Timeline highlights are enabled by default. You can enable or disable timeline highlights at the app level and at the form level via the maker portal, make.powerapps.com.
This article is contributed. See the original author and article here.
Healthcare and Life Sciences (HLS) is a demanding and complex field that requires constant innovation, collaboration, and communication. HLS professionals often have to deal with large amounts of data, information, and documentation, which can be overwhelming and time-consuming. Moreover, the COVID-19 pandemic has added more pressure and stress to the already challenging work environment, leading to increased risks of burnout and mental fatigue.
How can HLS professionals cope with these challenges and improve their productivity and well-being? One possible solution is to leverage the power of AI with Copilot. Copilot is a smart assistant that can help with email overload, summarize information from various sources, generate documentation, and more. Copilot also integrates with applications like Teams, Word, and Outlook, creating a seamless workflow that can enhance your efficiency and creativity.
Check out the ever-growing repository of use case workflows leveraging the power of Copilot.
* Note: all examples are demonstrations for educational purposes only and are not intended for production use. No warranty or support is stated or implied.
This article is contributed. See the original author and article here.
Last updated 4/3/2024 to include v2 Tiers features.
Authors: Faisal Mustafa, Ben Gimblett, Jose Moreno, Srini Padala, and Fernando Mejia.
There are several options for integrating your API Management instance with your Azure Virtual Network (VNet), and it is important to understand them. These options will depend on your network perimeter access requirements and the available tiers and features in Azure API Management.
This blog post aims to guide you through the different options available on both the classic tiers and v2 tiers of Azure API Management, to help you decide which choice works best for your requirements.
TL;DR
Decision tree describing how to choose the right Azure API Management tier based on networking scenarios.
Here is the relevant documentation to implement these tiers and features:
Before we jump into the options and differences, it's worth taking a step back to understand more about how Azure Platform as a Service (PaaS) products work with regard to networking. If you need a refresher, and so we don't repeat ourselves here, we'd ask the reader to spend a few minutes over at Jose's excellent "Cloudtrooper" blog and his deep-dive post on all things PaaS networking. We'll use some of the same labels and terms in this post for consistency: Taxonomy of Azure PaaS service access – Cloudtrooper.
What is API Management, what tiers are available and why does it matter in relation to networking?
The first thing to remember is that the API Management API Gateway is a Layer 7 (in OSI model terms) HTTP Proxy. Keeping this in mind helps a lot when you think about the networking options available through the different tiers. In simple terms:
An HTTP proxy terminates HTTP connections from any client going to a set of [backend] servers and establishes new HTTP connections to those servers. For most API Management gateway use cases, the resource would reside close to the [backend] servers it fronts (usually in the same Azure region).
Diagram describing all the components included in Azure API Management, and the difference between inbound and outbound sections.
Why does this matter? When we talk about the available networking options, we talk about features which relate to the initial client connection to API Management (inbound) OR features relating to the connection from API Management to the API backends (outbound). From now on, we will call these inbound and outbound connections, and there are different options and features for each type.
Regarding Azure API Management tiers, we will rely on the following categories:
Consumption tier, the tier that exposes serverless properties.
Classic tiers, this category refers to the Developer, Basic, Standard and Premium tiers.
V2 tiers, this category refers to the Basic v2 and Standard v2.
Networking scenarios
Let's jump right in. To make it easier to navigate and for you to get the information you need to make the right decisions for your use case, let's summarize by applicable use case; we'll list the tiers where the functionality is available and add any applicable notes.
I have no specific networking requirements and just want to keep things simple.
Supported tiers: Consumption, Classic tiers, and V2 tiers.
Of course, there's more to implementing a workload with API Management than just networking features, and still a lot of choice when it comes to an API Management tier that fits your workload and scale requirements. But if you are OK with having inbound and outbound connections go through the Azure backbone or the public internet, any tier of Azure API Management can support this scenario. We do recommend securing your endpoints using authentication/authorization mechanisms such as subscription keys, certificates, and OAuth2/OIDC.
Diagram describing what tiers of Azure API Management allow public access for inbound and outbound.
I have a requirement to connect privately to API Management for one or more of my Client Applications
Option 1: Consider deploying a Private Endpoint into API Management.
Supported tiers: Classic tiers.
"Private endpoints allow you to access services privately within your virtual network, avoiding exposure over the public internet." (Thanks, Microsoft Copilot.)
Deploying a Private Endpoint for inbound connectivity is a good option to support secure client connections into API Management. Remember, in this context the Private Endpoint you deploy for API Management creates an alternative network path into your API Management service instance; it's about facilitating inbound communication (the client connecting to API Management), and it is "one way only", meaning it doesn't help for scenarios where you also want to connect privately to your backends.
Diagram describing what tiers of Azure API Management allow public access and private endpoint for inbound.
Note: While it's supported to use Private Endpoints in the Premium or Developer tiers, the service must not have been added to a Virtual Network (VNet). This makes Private Endpoints and the "VNet injection" capability supported by Premium and Developer mutually exclusive. The Basic and Standard tiers can't be added to a Virtual Network.
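As a rough illustration of what Option 1 involves, the sketch below creates a Private Endpoint that targets an API Management instance via the ARM REST API. The resource names, the API version, and the "Gateway" group ID are assumptions; verify them against the current Azure documentation before relying on this.

```python
# Hedged sketch: creating a Private Endpoint that points at an API Management
# instance via the ARM REST API. Resource names, the API version, and the
# "Gateway" group ID are assumptions -- verify against current Azure docs.
import requests
from azure.identity import DefaultAzureCredential

sub, rg = "<subscription-id>", "<resource-group>"
apim_id = (f"/subscriptions/{sub}/resourceGroups/{rg}"
           f"/providers/Microsoft.ApiManagement/service/my-apim")
subnet_id = (f"/subscriptions/{sub}/resourceGroups/{rg}"
             f"/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/pe-subnet")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.Network/privateEndpoints/apim-pe?api-version=2023-09-01")

body = {
    "location": "westeurope",  # assumption: same region as the APIM instance
    "properties": {
        "subnet": {"id": subnet_id},
        "privateLinkServiceConnections": [{
            "name": "apim-pe-conn",
            "properties": {"privateLinkServiceId": apim_id, "groupIds": ["Gateway"]},
        }],
    },
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json().get("properties", {}).get("provisioningState"))
```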
Option 2: Consider adding your API Management to your VNet.
Supported tiers: Developer tier and Premium tier.
Developer and Premium are the only tiers where you can deploy the service into your virtual network (what we sometimes refer to as "VNet injection"), which allows you to set inbound access as public or private.
Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.
As far as the Developer tier is concerned, it is NOT meant for production use, and no production workloads should be deployed on it.
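As a rough illustration of Option 2, the sketch below switches an existing Premium (or Developer) instance into internal VNet-injected mode via the ARM REST API. The resource names and the API version are assumptions, and in practice this is a long-running operation that can take a while to complete.

```python
# Hedged sketch: switching an existing Premium/Developer API Management instance
# into "internal" VNet-injected mode via the ARM REST API. Names and the API
# version are assumptions; this is a long-running operation in practice.
import requests
from azure.identity import DefaultAzureCredential

sub, rg, apim_name = "<subscription-id>", "<resource-group>", "my-apim"
subnet_id = (f"/subscriptions/{sub}/resourceGroups/{rg}"
             f"/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/apim-subnet")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.ApiManagement/service/{apim_name}?api-version=2022-08-01")

body = {
    "properties": {
        "virtualNetworkType": "Internal",  # or "External" to keep a public inbound endpoint
        "virtualNetworkConfiguration": {"subnetResourceId": subnet_id},
    }
}
resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)  # 200/202 -- poll the operation until the update completes
```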
The API backends I want to reach from API Management are private.
Supported tiers: Developer tier, Premium tier, and Standard v2 tier.
For the classic tiers Developer and Premium, as we mentioned before, you can deploy the service into your virtual network. This can be in "internal" (private) or "external" mode.
Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.
For the v2 tiers, Standard v2 allows you to rely on a feature called "VNet integration" (please note the difference between VNet integration and VNet injection), which allows API Management to "see" into your network and access services through a private IP in the connected virtual network or in peered/connected networks.
Diagram describing what tiers of Azure API Management allow private access for outbound using VNet integration.
I need to connect to API Management privately as well as reach private backends with no public access.
Supported tiers: Developer tier and Premium tier.
Add API Management Premium or Developer to your Virtual Network. The best practice would be to set the mode to "internal", meaning inbound connectivity is via a private IP on an internal load balancer.
Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.
NAT (network address translation) behavior is a networking topic that comes up often, so it would be remiss of us not to add a few lines summarizing the behavior for the different deployment modes:
Inbound
By default, inbound is via a public IP address assigned to the service.
This changes if you opt into using a Private Endpoint (for any of the tiers supporting this feature). Always remember to explicitly turn off the public ingress if you deploy a private endpoint instead and no longer require it.
This also changes if you deploy the Premium (or Developer) tier, added to your Virtual Network (“VNet-injected”), and set the mode to “internal”. In this mode, although a public IP is still deployed for control plane traffic, all data plane traffic will go through the private IP (Internal load balancer endpoint) which is provided in this mode.
Outbound
By default, outbound traffic is via the public IP(s) assigned to the service.
This changes for the Premium (or Developer) tier when the service is added to your Virtual Network (irrespective of the inbound mode being public or private). In this case:
For internal traffic leaving API Management (for example, API Management reaching a backend service hosted on a downstream VM within the network), there is no SNAT (source network address translation). If the next hop is an NVA (network virtual appliance/firewall), any rules for the source should use the subnet prefix, not individually mapped IPs.
Externally bound traffic breaking out (to non-RFC 1918 destinations) is SNATed to the single-tenant (dedicated) public IP assigned to the service. This includes traffic to other PaaS services via a public endpoint (although you should note that the traffic in this instance stays on the Azure backbone).
For Standard v2 using "VNet integration":
NAT for internal traffic is the same.
Externally bound traffic breaking out is SNATed to one of the hosting stamp's shared public IPs. If you want control over the outbound IP, use an Azure NAT Gateway on the subnet (a sketch follows below) or route via an NVA (network virtual appliance or firewall). The same note as above applies for PaaS-to-PaaS traffic via public IPs.
Note for v2 tiers: API Management control plane and its own dependency traffic is not seen on the customer's network, which is an enhancement over the GA tiers and simplifies scenarios requiring force-tunnelling / firewall integration.
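To illustrate the NAT Gateway option for Standard v2 outbound traffic, the sketch below attaches an existing NAT Gateway to the integration subnet via the ARM REST API. Resource names and the API version are assumptions, and the NAT Gateway and its public IP are assumed to exist already.

```python
# Hedged sketch: giving VNet-integrated API Management (Standard v2) a stable
# outbound IP by attaching an existing NAT Gateway to the integration subnet.
# Resource names and the API version are assumptions.
import requests
from azure.identity import DefaultAzureCredential

sub, rg = "<subscription-id>", "<resource-group>"
natgw_id = (f"/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.Network/natGateways/apim-natgw")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
subnet_url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
              f"/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/apim-integration"
              f"?api-version=2023-09-01")

# Read the current subnet definition, add the NAT Gateway reference, and write it back.
subnet = requests.get(subnet_url, headers=headers).json()
subnet["properties"]["natGateway"] = {"id": natgw_id}
resp = requests.put(subnet_url, json=subnet, headers=headers)
print(resp.status_code)  # outbound traffic from the subnet now egresses via the NAT Gateway's IP
```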