Sign up for the VS Code Day challenge!



Image: "Challenge"


Sign up for the VS Code Day Skills Challenge! Whether you are just getting started or looking to change careers, this program is designed to introduce you to VS Code and GitHub Copilot across different professional areas such as Data Science, Artificial Intelligence, and Web Development. With easy-to-follow lessons, hands-on exercises, and live workshops, you will learn what's new in VS Code. Register now and discover the world of opportunities VS Code offers you at https://aka.ms/VSCodeDayChallenge!


 


VS Code Day is our annual event where you will learn how to improve your programming workflow with the latest and greatest features of VS Code! This year we are especially excited because we will have sessions focused on Artificial Intelligence, and you will hear the VS Code team and industry experts talk about topics such as GitHub Copilot, building and deploying generative AI applications in the cloud, improving the C# coding experience, and much more!


 


Whether you are just starting out in programming or you are an experienced developer, join us on April 24, 2024 to learn more about this great free code editor that lets you build anything!


 


 


Image: "Badges"


 


By completing the 11 modules of this challenge, you can earn a digital badge on your Microsoft Learn profile for finishing the experience. Stay tuned for more information on social media and on this blog; we will share more about the #VSCodeDay events!


 


IMPORTANT NOTE: Your digital badge will be added to your Microsoft Learn profile within one week of the date you complete the challenge. Check out this tutorial to learn how you can share your badges on LinkedIn!


 


 


Image: "Frequently asked questions"



  • When does the challenge start and end? It starts on April 24, 2024 and ends on May 17, 2024.

  • How much experience do I need? Only basic programming knowledge.

  • How much time do I need to spend each day? The challenge is designed so you can learn at your own pace and on your own schedule; just remember to complete it by May 17.

  • Prerequisites: None.

  • Is there any cost? No, it is free.


Image: "Live talks and workshops"


Get ready to discover what's new in VS Code with our series of live talks and workshops! They will be packed with tips, tricks, and hands-on exercises to help you with your personal and professional projects. Whether you are just getting started or looking to improve your skills, this is a must-attend event for anyone interested in programming.


 


To register for all of these #VSCodeDay talks, go to aka.ms/VSCodeDay! The event takes place on April 24 from 11 am to 6 pm (GMT-6)*.


 


*Mexico City time

Event | Speaker | Social media
Keynote: View Source: What gets into VS Code and why | Burke Holland | @burkehollan
JavaScript developers: Build fast and have fun with VS Code and Azure AI CLI | Natalia Venditto | @anfibiacreativa
Generating Synthetic Datasets with GitHub Copilot | Alfredo Deza | Alfredo Deza on LinkedIn
Real-World Development with VS Code and C# | Scott Hanselman + Leslie Richardson | @shanselman + @lyrichardson01
Building a RAG-powered AI chat app with Python and VS Code | Pamela Fox | @pamelafox
Beyond the Editor: Tips to get the Most out of GitHub Copilot | Kedasha Kerr | @itsthatladydev
LangChain Examples with Azure OpenAI Service + VS Code | Rishab Kumar | @rishabincloud
AI Made Clear: Practical AI Coding Sessions in VS Code | Bruno Capuano | @elbruno
Asking Copilot about your workspace | Matt Bierner | @mattbierner


 


Image: "Share your experience"


If you have used VS Code before, please tell us about your experience in the comments: your favorite extensions, what you thought of the event talks, or even if this is the first time you have heard about VS Code!


 


We would love to hear your favorite or funniest stories and anecdotes from the times you have coded in VS Code!


 


Tag us on social media using the hashtag #VSCodeDayCSC.


 

Building digital trust in Microsoft Copilot for Dynamics 365 and Power Platform



At Microsoft, trust is the foundation of everything we do. As more organizations adopt Copilot in Dynamics 365 and Power Platform, we are committed to helping everyone use AI responsibly. We do this by ensuring our AI products deliver the highest levels of security, compliance, and privacy in accordance with our Responsible AI Standard—our framework for the safe deployment of AI technologies.

Take a moment to review the latest steps we are taking to help your organization securely deploy Copilot guided by our principles of safety, security, and trust.

Copilot architecture and responsible AI principles in action

Let’s start with an overview of how Copilot works, how it keeps your business data secure and adheres to privacy requirements, and how it uses generative AI responsibly.

First, Copilot receives a prompt from a user within Dynamics 365 or Power Platform. This prompt could be in the form of a question that the user types into a chat pane, or an action, such as selecting a button labeled “Create an email.”

Copilot processes the prompt using an approach called grounding, which might include retrieving data from Microsoft Dataverse, Microsoft Graph, or external sources. Grounding improves the relevance of the prompt, so the user gets responses that are more appropriate to their task. Interactions with Copilot are specific to each user. This means that Copilot can only access data that the current user has permissions to.

Copilot uses Azure OpenAI Service to access powerful generative AI models that understand natural language inputs, and it returns a response to the user in the appropriate form. For example, a response might be in the form of a chat message, an email, or a chart. Users should always review the response before taking any action.

How Copilot uses your proprietary business data

Responses are grounded in your business content and business data. Copilot has real-time access to both your content and context to generate answers that are precise, relevant, and anchored in your business data for accuracy and specificity. This real-time access goes through our Dataverse platform (which includes all Power Platform connectors), honoring the data loss prevention and other security policies put in place by your organization. We follow the pattern of Retrieval Augmented Generation (RAG), which augments the capabilities of language models by adding dynamic grounding data to the prompt that we send to the model. Our system dynamically looks up the relevant data schema using our own embedding indexes and then uses the language models to help translate the user's question into a query that we can run against the system of record.
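
For readers who want a concrete picture, here is a minimal, illustrative Python sketch of the Retrieval Augmented Generation pattern described above. It is not Copilot's actual implementation; the endpoint, deployment name, and retrieval helper below are placeholders, and a real system would query Dataverse, Microsoft Graph, or an embedding index scoped to the signed-in user's permissions.

```python
from openai import AzureOpenAI

# Hypothetical endpoint, key, and deployment name -- replace with your own values.
client = AzureOpenAI(
    azure_endpoint="https://contoso.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-02-01",
)

def retrieve_grounding_data(user_question: str) -> str:
    # Placeholder retrieval step: in the real pattern this would query Dataverse,
    # Microsoft Graph, or a search index, honoring the current user's permissions.
    return "Account: Contoso Ltd. Open cases: 2. Last contact: 2024-03-28."

def answer(user_question: str) -> str:
    # Dynamic grounding: augment the prompt with business data before calling the model.
    grounding = retrieve_grounding_data(user_question)
    completion = client.chat.completions.create(
        model="gpt-4o",  # name of your Azure OpenAI deployment
        messages=[
            {"role": "system", "content": "Answer only from the provided business data."},
            {
                "role": "user",
                "content": f"Business data:\n{grounding}\n\nQuestion: {user_question}",
            },
        ],
    )
    return completion.choices[0].message.content

print(answer("Summarize the Contoso Ltd account."))
```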

We do not use your data to train language models. We believe that our customers’ data is their data in accordance with Microsoft’s data privacy policy.  AI-powered language models are trained on a large but limited corpus of data—but prompts, responses, and data accessed through Microsoft Graph and Microsoft services are not used to train Copilot for Dynamics 365 or Power Platform capabilities for use by other customers. Furthermore, the models are not improved through your usage. This means that your data is accessible only by authorized users within your organization unless you explicitly consent to other access or use.

How Copilot protects business information and data

Enterprise-grade AI, powered by Azure OpenAI Service. Copilot is powered by the trusted and compliant Azure OpenAI Service, which provides robust, enterprise-grade security features. These features include content filtering to identify and block output of harmful content and protect against prompt injections (jailbreak attacks), which are user prompts that provoke the generative AI model into behaving in ways it was trained not to. Azure AI services are designed to enhance data governance and privacy and adhere to Microsoft’s strict data protection and privacy standards. Azure OpenAI also supports enterprise features like Azure Policy and AI-based security recommendations by Microsoft Defender for Cloud, meeting compliance requirements with customer-managed data encryption keys and robust governance features.

Built on Microsoft’s comprehensive approach to security, privacy, and compliance. Copilot is integrated into Microsoft Dynamics 365 and Power Platform. It automatically inherits all your company’s valuable security, compliance, and privacy policies and processes. Copilot is hosted within Microsoft Cloud Trust Boundary and adheres to comprehensive, industry-leading compliance, security, and privacy practices. Our handling of Copilot data mirrors our treatment of other customer data, giving you complete autonomy in deciding whether to retain data and determining the specific data elements you wish to keep.

Safeguarded by multiple forms of protection. Customer data is protected by several technologies and processes, including various forms of encryption. Service-side technologies encrypt organizational content at rest and in transit for robust security. Connections are safeguarded with Transport Layer Security (TLS), and data transfers between Dynamics 365, Power Platform, and Azure OpenAI occur over the Microsoft backbone network, ensuring both reliability and safety.  Copilot uses industry-standard secure transport protocols when data moves over a network—between user devices and Microsoft datacenters or within the datacenters themselves.

Watch this presentation by James Oleinik for a closer look at how Copilot allows users to securely interact with business data within their context, helping to ensure data remains protected inside the Microsoft Cloud Trust Boundary. You’ll also learn about measures we take to ensure that Copilot is safe for your employees and your data, such as how Copilot isolates business data from the language model so as not to retrain the AI model. 

Architected to protect tenant, group, and individual data. We know that data leakage is a concern for customers. Microsoft AI models are not trained on and don’t learn from your tenant data or your prompts unless your tenant admin has opted in to sharing data with us. Within your environment, you can control access through permissions that you set up. Authentication and authorization mechanisms segregate requests to the shared model among tenants. Copilot utilizes data that only you can access. Your data is not available to others.

Committed to building AI responsibly

As your organization explores Copilot for Dynamics 365 and Power Platform, we are committed to delivering the highest levels of security, privacy, compliance, and regulatory commitments, helping you transform into an AI-powered business with confidence.

Learn more about Copilot and Responsible AI

The post Building digital trust in Microsoft Copilot for Dynamics 365 and Power Platform appeared first on Microsoft Dynamics 365 Blog.


Introducing timeline highlights powered by generative AI



The timeline is a crucial tool for users to monitor customer engagements, track activities, and stay updated on record progress. With generative AI, we're introducing timeline highlights, enabling users to grasp activity details at a glance.

Streamlined timeline highlights revolutionize the way users interact with essential activities such as emails, notes, appointments, tasks, phone calls, and conversations. With a single click, agents gain access to summaries of key events, including records like cases, accounts, contacts, leads, opportunities, and customized entities. 

Agents save time with timeline highlights

This new feature optimizes agent productivity, eliminating the need for excessive clicks and extra reading. Agents can efficiently absorb crucial information, enabling faster and more transparent interactions with customers. Users can expand the highlights section in the timeline by clicking on the chevron. 

The highlights show relevant items in a clear and concise bulleted format, facilitating quick analysis and easy reference. The copy functionality empowers users to reuse content by pasting it into notes, with the flexibility to make modifications as needed.

In summary, our innovative approach to timelines, driven by generative AI technology, offers users a transformative experience. Consequently, agents can effortlessly track customer engagements and monitor progress with unparalleled speed and accuracy. 


The timeline highlights feature is available within apps such as Dynamics 365 Customer Service, Dynamics 365 Sales, Dynamics 365 Marketing, Dynamics 365 Field Service, and custom model-driven Power Apps, providing a unified experience across Dynamics 365.

Timeline highlights are enabled by default. You can enable and disable them at the app level and at the form level via the maker portal, make.powerapps.com.

Learn more

To learn more, read the documentation:

Responsible AI FAQ for Copilot in the timeline – Power Apps | Microsoft Learn 

Timeline Overview for Users – Power Apps | Microsoft Learn

Add and configure the timeline control in Power Apps – Power Apps | Microsoft Learn

The post Introducing timeline highlights powered by generative AI appeared first on Microsoft Dynamics 365 Blog.


HLS Copilot Workflows



Healthcare and Life Sciences (HLS) is a demanding and complex field that requires constant innovation, collaboration, and communication. HLS professionals often have to deal with large amounts of data, information, and documentation, which can be overwhelming and time-consuming. Moreover, the COVID-19 pandemic has added more pressure and stress to the already challenging work environment, leading to increased risks of burnout and mental fatigue.


How can HLS professionals cope with these challenges and improve their productivity and well-being? One possible solution is to leverage the power of AI and use Copilot. Copilot is a smart assistant that can help with email overload, summarize information from various sources, generate documentation, and more. Copilot also integrates with other applications like Teams, Word, Outlook, and more, creating a seamless workflow that can enhance your efficiency and creativity.


Check out the ever-growing repository of use case workflows leveraging the power of Copilot.


 * Note: all examples are demonstrations for educational purposes only and are not intended for production use. No warranty or support is stated or implied.


Provider Examples:



Payor Examples:


Pharma Examples:


Medical Devices Examples:

Choosing the right Azure API Management tier for your networking scenarios



Last updated 4/3/2024 to include v2 Tiers features.


 


Authors: Faisal Mustafa, Ben Gimblett, Jose Moreno, Srini Padala, and Fernando Mejia.


 


There are different options when it comes to integrating your API Management instance with your Azure Virtual Network (VNet), and it is important to understand them. These options depend on your network perimeter access requirements and on the tiers and features available in Azure API Management.


 


This blog post aims to guide you through the different options available on both the classic tiers and v2 tiers of Azure API Management, to help you decide which choice works best for your requirements.


 


TL;DR


 




 


Decision tree describing how to choose the right Azure API Management tier based on networking scenarios.


 


Here is the relevant documentation to implement these tiers and features:



 


Background


 


Before we jump into the options and differences, it's worth taking a step back to understand more about how Azure Platform as a Service (PaaS) products work with regard to networking. If you need a refresher, and so we don't repeat ourselves here, we'd ask the reader to spend a few minutes over at Jose's excellent "Cloudtrooper" blog and his deep-dive post on all things PaaS networking: Taxonomy of Azure PaaS service access – Cloudtrooper. We'll use some of the same labels and terms in this post for consistency.


 


What is API Management, what tiers are available and why does it matter in relation to networking?


 


The first thing to remember is that the API Management API Gateway is a Layer 7 (in OSI model terms) HTTP Proxy. Keeping this in mind helps a lot when you think about the networking options available through the different tiers. In simple terms:


An HTTP proxy terminates HTTP connections from any client going to a set of [backend] servers and establishes new HTTP connections to those servers. For most API Management gateway use cases the resource would reside close to the [backend] servers it fronts (usually in the same Azure region).
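
To make the "HTTP proxy" idea concrete, here is a minimal sketch (Python standard library only, and unrelated to API Management's actual implementation) of a reverse proxy that terminates the client's connection and opens a new connection to a hypothetical backend:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://localhost:9000"  # hypothetical backend server being fronted

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The client's connection terminates here; a *new* HTTP connection
        # is opened to the backend, just like a Layer 7 gateway does.
        with urlopen(BACKEND + self.path) as upstream:
            status = upstream.status
            body = upstream.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
```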


 




 


Diagram describing all the components included in Azure API Management, and the difference between inbound and outbound sections.


 


 


Why does this matter? When we talk about the available networking options, we talk about features that relate to the initial client connection to API Management (inbound) or features that relate to the connection from API Management to the API backends (outbound). From now on, we will call these inbound and outbound connections, and there are different options and features for each type.


Regarding Azure API Management tiers, we will rely on the following categories:



  • Consumption tier: the tier that exposes serverless properties.

  • Classic tiers: this category refers to the Developer, Basic, Standard, and Premium tiers.

  • V2 tiers: this category refers to the Basic v2 and Standard v2 tiers.


 


Networking scenarios


 


Let's jump right in. To make this post easier to navigate and to help you get the information you need to make the right decisions, we will summarize by the applicable use cases, list the tiers where the functionality is available, and add any applicable notes.


 


I have no specific networking requirements and just want to keep things simple.


Supported tiers: Consumption, Classic tiers, and V2 tiers.


 


Of course, there's more to implementing a workload with API Management than just networking features, and there is still a lot of choice when it comes to an API Management tier that fits your workload and scale requirements. But if you are OK with having inbound and outbound connections go over the Azure backbone or the public internet, any tier of Azure API Management can help you with this scenario. Naturally, we recommend securing your endpoints using authentication/authorization mechanisms such as subscription keys, certificates, and OAuth 2.0/OIDC.
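
As an illustration of this simplest case, here is a small, hedged Python sketch of a client calling an API through the gateway's public endpoint with a subscription key. The gateway URL, API path, and key are placeholders rather than a real instance; the Ocp-Apim-Subscription-Key header is the standard way API Management accepts subscription keys.

```python
import requests

APIM_GATEWAY = "https://contoso-apim.azure-api.net"  # hypothetical gateway hostname
API_PATH = "/orders/v1/orders"                       # hypothetical API route
SUBSCRIPTION_KEY = "<your-subscription-key>"

response = requests.get(
    f"{APIM_GATEWAY}{API_PATH}",
    # Standard API Management header for passing a subscription key.
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```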


 


 




 


Diagram describing what tiers of Azure API Management allow public access for inbound and outbound.


 


 


 


I have a requirement to connect privately to API Management for one or more of my client applications


 


Option 1: Consider deploying a private endpoint for API Management.


Supported tiers: Classic tiers.


 


 


"Private endpoints allow you to access services privately within your virtual network, avoiding exposure over the public internet." (Thanks, Microsoft Copilot.)


Deploying a private endpoint for inbound connectivity is a good option to support secure client connections into API Management. Remember, in this context the private endpoint you deploy for API Management creates an alternative network path into your API Management service instance; it is about facilitating inbound communication (the client connecting to API Management), and it is "one way only," meaning it doesn't help for scenarios where you also want to connect privately to your backends.


 




 


Diagram describing what tiers of Azure API Management allow public access and private endpoint for inbound.


 


Note: While it's supported to use private endpoints in the Premium or Developer tiers, the service must not have been added to a virtual network (VNet). This makes private endpoints and the "VNet injection" capability supported by Premium and Developer mutually exclusive. The Basic and Standard tiers can't be added to a virtual network.
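
If you do deploy a private endpoint, a quick sanity check from a machine inside (or connected to) the virtual network is to confirm that the gateway hostname now resolves to the private endpoint's private IP rather than a public address. A small illustrative sketch, with a placeholder hostname:

```python
import ipaddress
import socket

APIM_HOSTNAME = "contoso-apim.azure-api.net"  # hypothetical instance hostname

# Resolve the hostname and report whether each address is private (RFC 1918) or public.
addresses = {info[4][0] for info in socket.getaddrinfo(APIM_HOSTNAME, 443)}
for addr in sorted(addresses):
    kind = "private" if ipaddress.ip_address(addr).is_private else "public"
    print(f"{APIM_HOSTNAME} -> {addr} ({kind})")
```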


 


Option 2: Consider adding your API Management to your VNet.


Supported tiers: Developer tier and Premium tier.


 


Developer and Premium are the only tiers where you can deploy the service into your virtual network (what we sometimes refer to as "VNet injection"), which allows you to set the inbound side as public or private.


 




Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.


 


As far as the Developer tier is concerned, it is NOT meant for production use, and no production workloads should be deployed on it.


 


The API backends I want to reach from API Management are private.


Supported tiers: Developer tier, Premium tier, and Standard v2 tier.


 


For the classic Developer and Premium tiers, as we mentioned before, you can deploy the service into your virtual network. This can be in "internal" (private) or "external" mode.


 




 


Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.


 


For the v2 tiers, Standard v2 allows you to rely on a feature called "VNet integration" (please note the difference between VNet integration and VNet injection), which allows API Management to "see" into your network and access services through a private IP in the connected virtual network, or in peered/connected networks.


 




 


Diagram describing what tiers of Azure API Management allow private access for outbound using VNet integration.


 


 


I need to connect to API Management privately as well as reach private backends with no public access.


 


Supported tiers: Developer tier and Premium tier.


Add API Management Premium or Developer to your virtual network. The best practice is to set the mode to "internal," meaning inbound connectivity is via a private IP on an internal load balancer.


 




 


Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.


 


This allows you to keep all APIs private/internal but offers the option of later placing a reverse proxy in front of API Management (for example, Azure Application Gateway) to selectively open access to any private APIs you want to make publicly accessible.



A last word about IP Addresses…


 


IP addressing is something that relates to networking and is often asked about, so it would be remiss of us not to add a few lines summarizing the NAT (network address translation) behavior for the different deployment modes:


Inbound



  • By default, inbound is via a public IP address assigned to the service.

  • This changes if you opt into using a Private Endpoint (for any of the tiers supporting this feature). Always remember to explicitly turn off the public ingress if you deploy a private endpoint instead and no longer require it.

  • This also changes if you deploy the Premium (or Developer) tier, added to your virtual network ("VNet-injected"), and set the mode to "internal". In this mode, although a public IP is still deployed for control plane traffic, all data plane traffic goes through the private IP (internal load balancer endpoint) provided for the service.


 


Outbound



  • By default, outbound traffic is via the public IP(s) assigned to the service.

  • This changes for the Premium (or Developer) tier when the service is added to your virtual network (irrespective of the inbound mode being public or private). In this case:

    • For internal traffic leaving API Management (for example, API Management reaching a backend service hosted on a downstream VM within the network) there is no SNAT (source network address translation). If the next hop is an NVA (network virtual appliance/firewall), any rules for the source should use the subnet prefix, not individually mapped IPs.

    • Externally bound traffic breaking out (non-RFC 1918) is SNATed to the single-tenant (dedicated) public IP assigned to the service. This includes traffic to other PaaS services via a public endpoint (although note that this traffic stays on the Azure backbone).



  • For Standard v2 using "VNet integration":

    • NAT for internal traffic is the same.

    • Externally bound traffic breaking out is SNATed to one of the hosting stamp's shared public IPs. If you want control over the outbound IP, use an Azure NAT Gateway on the subnet or route via an NVA (network virtual appliance or firewall). The same note as above applies for PaaS-to-PaaS traffic via public IPs.




Note for v2 tiers: API Management control plane and dependency traffic is not seen on the customer's network, which is an enhancement over the classic tiers and simplifies scenarios requiring force tunnelling / firewall integration.
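
To wrap up, here is a rough, unofficial sketch that encodes the decision tree from this post as a small Python helper. It captures only the networking dimension discussed here; capacity, SLA, and feature requirements also matter when choosing a tier.

```python
def suggest_apim_tiers(private_inbound: bool, private_backends: bool) -> list[str]:
    """Suggest candidate API Management tiers for the networking scenarios in this post."""
    if not private_inbound and not private_backends:
        # No private networking requirements: any tier can work.
        return ["Consumption", "Developer", "Basic", "Basic v2",
                "Standard", "Standard v2", "Premium"]
    if private_inbound and private_backends:
        # Private inbound and private backends: VNet injection in internal mode.
        return ["Premium", "Developer (non-production only)"]
    if private_inbound:
        # Private inbound only: a private endpoint (classic tiers) or VNet injection.
        return ["Developer", "Basic", "Standard", "Premium"]
    # Private backends only: VNet injection (classic) or VNet integration (Standard v2).
    return ["Developer", "Premium", "Standard v2"]

print(suggest_apim_tiers(private_inbound=False, private_backends=True))
```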