Network Analytics available now in Viva Engage


This article is contributed. See the original author and article here.

Listen to your employees, monitor their engagement, and understand the pulse of your organization better than ever by using Network Analytics in Viva Engage. Network Analytics provides an at-a-glance overview of your organization’s top engagement trends across the entire network, including employee sentiment, cross-community insights, and AI-powered conversation summarization to help you stay up to date with all the activity happening in your network. Network admins and those assigned the corporate communicator role can access these advanced analytics. To access Network Analytics, users must have a Viva Suite or Employee Communications and Communities (C&C) license.


 




 


Gone are the days of manually searching for the most engaging conversations across your network or trying to tally up the most mentioned themes and hashtags. With Network Analytics, you can see detailed metrics that show you exactly where conversations are taking place, which themes employees are most passionate about, how effective announcements are, and which communities are most active.



Best Practices



Review top themes and top conversations – we’ve made triaging these conversations across your entire organization easier than ever. Now you can dive deep into the conversations occurring within your organization and quickly review themes related to the most critical commentary.


 




Network analytics helps you easily identify themes, trends, and engagement across the network.



You can even see daily trends by hovering over the graphs on the dashboard. To learn more about our sentiment analysis, see: Sentiment and theme analysis in Viva Engage – Microsoft Support.


 




Post sentiment is included in Network Analytics



Understand the effectiveness of broad communications within your organization by analyzing the announcements breakdown. You can also review which leaders and employees are most active on Engage by reviewing the Frequent Contributors panel. Acknowledge these employees directly from Network Analytics by praising their contributions to the organization.


 




 


Finally, if you’d like to review which communities are implementing best practices, look no further than the popular communities table. Here you can sort communities by most posts or most active members. Understanding which community rituals are leading to high engagement can be a great way to pass along helpful tips to other community admins.


 




 


Get started today!



To access Network Analytics, select the global analytics entry point (at the top of the web browser) and click on the “Network analytics” tab:


 




 


If you cannot see the tab, confirm that you have either the network admin or corporate communicator role assigned to your user profile on Viva Engage. If you need to be assigned as corporate communicator, contact your network admin to help you gain access to the role.


 


Learn more about setting up Network Analytics here: Viva Engage Network Analytics


 


What’s coming soon?



New! Employee retention analysis – we’ll help you understand how employees who use Engage are more likely to be retained at your organization. The Viva Engage employee retention metric in Network Analytics shows the difference in the 28-day employee retention rates of employees who do and don’t use Viva Engage. Learn more about our retention analysis here: Viva Engage Employee Retention – Microsoft Support.


 


Resources



Watch the recording of the Deep Dive Webinar! It includes demos and plenty of Q&A from the session.


 




 


Interested in more analytics? See View and manage analytics in Viva Engage



Check out this Analytics Adoption guide for more about the analytics in Viva Engage.


 


FAQ



How is sentiment analysis determined?
Sentiment analysis is a Viva Engage premium feature that aggregates data across Viva Engage conversations to surface trends. To understand more, see Sentiment and theme analysis in Viva Engage – Microsoft Support



Who has access to view and manage network analytics?
Access to the data in this dashboard is restricted to network admins and corporate communicators. These users can change settings via the Engage admin center.


 


What admin controls are available? Can analytics features be turned off?
Yes, we provide the network admin and corporate communicator roles the ability to adjust which analytics features are enabled within the admin center.



What licensing requirements need to be met?
Network analytics is only available to Viva Suite or Employee Communications and Communities licensed users.



How often is data refreshed?
Analytics are refreshed daily. If you don’t see changes reflected immediately, check analytics the next day.

Manage time off requests with Human Resources app for Microsoft Teams


This article is contributed. See the original author and article here.

Introduction

In today’s dynamic work environment, managing employee leave and absence efficiently is crucial for maintaining a productive and harmonious workplace. For this reason, we are announcing the public preview of the Human Resources app for Dynamics 365 Human Resources on Finance and Operations environments.

With the announcement of the infrastructure merge, the Human Resources app will be the go-forward solution for the former Teams app for leave and absence.

The application is designed to be used within Microsoft Teams or in a web browser, and it provides an overall view of employees’ leave requests, leave balances, draft leave requests, and past leave history.

The Human Resources app can be used on both mobile and desktop.

Benefits of the Human Resources app

Human Resources is an app that integrates with Dynamics 365 Human Resources on Finance and Operations environments. It is designed and developed so that organizations can ensure their employees request, edit, and cancel time off and leave of absence requests seamlessly. Employees can now view their leave balances, upcoming leave, and leave history in the same application. In addition, managers can efficiently view and approve or reject requests in one intuitive interface.


Next Steps

The Human Resources app is now available for public preview, and we’re looking forward to hearing your feedback and how the app is helping your organization. Enable the Human Resources (Preview) app for Dynamics 365 Human Resources from AppSource.

To learn more about these exciting new capabilities and how to use the app, refer to the Human Resources app documentation.

The post Manage time off requests with Human Resources app for Microsoft Teams appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

The Philosophy of the Federal Cyber Data Lake (CDL): A Thought Leadership Approach

This article is contributed. See the original author and article here.

Pursuant to Section 8 of Executive Order (EO) 14028, “Improving the Nation’s Cybersecurity”, Federal Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs) aim to comply with the U.S. Office of Management and Budget (OMB) Memorandum 21-31, which centers on system logs for services both within authorization boundaries and deployed on Cloud Service Offerings (CSOs). This memorandum not only instructs Federal agencies to provide clear guidelines for service providers but also offers comprehensive recommendations on logging, retention, and management to increase the Government’s visibility before, during and after a cybersecurity incident. Additionally, OMB Memorandum 22-09, “Moving the U.S. Government Toward Zero Trust Cybersecurity Principles”, references M-21-31 in its Section 3. 


 


While planning to address and execute these requirements, Federal CIOs and CISOs should explore the use of a Cyber Data Lake (CDL). A CDL is a capability to assimilate and house vast quantities of security data, whether in its raw form or as derivatives of original logs. Thanks to its adaptable, scalable design, a CDL can encompass data of any nature, be it structured, semi-structured, or unstructured, all without compromising quality. This article explores the philosophy behind the Federal CDL, covering topics such as: 


 



  • The Importance of CDL for Agency Missions and Business 

  • Strategy and Approach 

  • CDL Infrastructure 

  • Application of CDL 


 


The Importance of CDL for Agency Missions and Business 


 


The overall reduction in both capital and operational expenditures for hardware and software, combined with enhanced data management capabilities, makes CDLs an economically viable solution for organizations looking to optimize their data handling and security strategies. CDLs are cost-effective due to their ability to consolidate various data types and sources into a single platform, eliminating the need for multiple, specialized data management tools. This consolidation reduces infrastructure and maintenance costs significantly. CDLs also adapt easily to increasing data volumes, allowing for scalable storage solutions without the need for expensive infrastructure upgrades. By enabling advanced analytics and efficient data processing, they reduce the time and resources needed for data analysis, further cutting operational costs. Additionally, improved accuracy in threat detection and reduction in false positives lead to more efficient security operations, minimizing the expenses associated with responding to erroneous alerts and increasing the speed of detection and remediation.  


 


However, CDLs are not without challenges. As technological advancements and the big data paradigm evolve, the complexity of network, enterprise, and system architecture escalates. This complexity is further exacerbated by the integration of tools from various vendors into the Federal ecosystem, managed by diverse internal and external teams. For security professionals, maintaining pace with this intricate environment and achieving real-time transparency into technological activities is becoming an uphill battle. These professionals require a dependable, almost instantaneous source that adheres to the National Institute of Standards and Technology (NIST) core functions: identify, protect, detect, respond, and recover. Such a source empowers them to strategize, prioritize, and address any anomalies or shifts in their security stance. The present challenge lies in acquiring a holistic view of security risk, especially when large agencies might deploy hundreds of applications across the US and, in some cases, globally. The security data logs, scattered across these applications, clouds, and environments, often exhibit conflicting classifications or categorizations. Further complicating matters are the differing levels of logging maturity across cloud deployment models: infrastructure, platform, and software. 


 


It is vital to scrutinize any irregularities to ensure the environment is secure, aligning with zero-trust principles which advocate for a dual approach: never automatically trust and always operate under the assumption that breaches may occur. As security breaches become more frequent and advanced, malicious entities will employ machine learning to pinpoint vulnerabilities across expansive threat landscape. Artificial intelligence will leverage machine learning and large language models to further enhance organizations’ abilities to discover and adapt to changing risk environments, allowing security professionals to do more with less.  


 


Strategy and Approach 


 


The optimal approach to managing a CDL depends on several variables, including leadership, staff, services, governance, infrastructure, budget, maturity, and other factors spanning all agencies. It is debatable whether a centralized IT team can cater to the diverse needs and unique challenges of every agency. We are seeing a shift where departments are integrating multi-cloud infrastructure into their ecosystem to support the mission. An effective department strategy is pivotal for success, commencing with systems under the Federal Information Security Modernization Act (FISMA) and affiliated technological environments. Though there may be challenges at the departmental level in a federated setting, it often proves a more effective strategy than a checklist approach. 


 


Regarding which logs to prioritize, there are several methods. CISA has published a guide on how to prioritize deployment: Guidance for Implementing M-21-31: Improving the Federal Government’s Investigative and Remediation Capabilities. Some might opt to begin with network-level logs, followed by enterprise and then system logs. Others might prioritize logs from high-value assets based on FISMA’s security categorization, from high to moderate to low. Some might start with systems that can provide logs most effortlessly, allowing them to accumulate best practices and insights before moving on to more intricate systems. 


 


Efficiently performing analysis, enforcement, and operations across data repositories dispersed across multiple cloud locations in a departmental setting involves adopting a range of strategies. This includes data integration and aggregation, cross-cloud compatibility, API-based connectivity, metadata management, cloud orchestration, data virtualization, and the use of cloud-agnostic tools to ensure seamless data interaction. Security and compliance should be maintained consistently, while monitoring, analytics, machine learning, and AI tools can enhance visibility and automate processes. Cost optimization and ongoing evaluation are crucial, as is investing in training and skill development. By implementing these strategies, departments can effectively manage their multi-cloud infrastructure, ensuring data is accessible, secure, and cost-effective, while also leveraging advanced technologies for analysis and operations. 


 


CDL Infrastructure 


 


One of the significant challenges is determining how a CDL aligns with an agency’s structure. The decision between a centralized, federated, or hybrid approach arises, with cost considerations being paramount. Ingesting logs in their original form into a centralized CDL comes with its own set of challenges, including accuracy, privacy, cost, and ownership. Employing a formatting tool can lead to substantial cost savings in the extract, transform, and load (ETL) process. Several agencies have experienced cost reductions of up to 90% and significant data size reductions by incorporating formatting in tables, which can be reorganized as needed during the investigation phase. A federated approach means the logs remain in place, analyses are conducted locally, and the results are then forwarded to a centralized CDL for further evaluation and dissemination. 


 


For larger and more complex agencies, a multi-tier CDL might be suitable. By implementing data collection rules (DCR), data can be categorized during the collection process, with department-specific information directed to the respective department’s CDL, while still ensuring that high-value and timely logs are forwarded to a centralized CDL at the agency level, prioritizing privileged accounts. Each operating division or bureau could establish its own CDL, reporting up to the agency headquarters’ CDL. The agency’s Office of Inspector General (OIG) or a statistical component of a department may need to create its own independent CDL for independence purposes. This agency HQ CDL would then report to DHS. In contrast, smaller agencies might only need a single CDL. This could integrate with the existing Cloud Log Aggregation Warehouse (CLAW), a CISA-deployed architecture for collecting and aggregating security telemetry data from agencies using commercial CSP services, and align with the National Cybersecurity Protection System (NCPS) Cloud Interface Reference Architecture. This program ensures security data from cloud-based traffic is captured and analyzed, enabling CISA analysts to maintain situational awareness and provide support to agencies. 


 


If data is consolidated in a central monolithic repository, stringent data stewardship is crucial, especially concerning data segmentation, access controls, and classification. Data segmentation provides granular access control based on a need-to-know approach, with mechanisms such as encryption, authorization, access audits, firewalls, and tagging. If constructed correctly, this can eliminate the need for separate CDL infrastructures for independent organizations. This should be compatible with role-based user access schemes, segment data based on sensitivity or criticality, and meet Federal authentication standards. This supports Zero Trust initiatives in Federal agencies and aligns with Federal cybersecurity regulations, data privacy laws, and current TLS encryption standards. Data must also adhere to retention standards outlined in OMB 21-31 Appendix C and the latest National Archives and Records Administration (NARA) publications, and comply with Data Loss Prevention requirements, covering data at rest, in transit, and at endpoints, in line with NIST 800-53 Revision 5. 


 


In certain scenarios, data might require reclassification or recategorization based on its need-to-know status. Agencies must consider storage capabilities, ensuring they have a scalable, redundant and highly available storage system that can handle vast amounts of varied data, from structured to unstructured formats. Other considerations include interoperability, migrating an existing enterprise CDL to another platform, integrating with legacy systems, and supporting multi-cloud enterprise architectures that source data from a range of CSPs and physical locations. When considering data portability, the ease of transferring data between different platforms or services is crucial. This necessitates storing data in widely recognized formats and ensuring it remains accessible. Moreover, the administrative efforts involved in segmenting and classifying the data should also be considered. 


 


Beyond cost and feasibility, the CDL model also provides the opportunity for CIOs and CISOs to achieve data dominance with their security and log data. Data dominance allows them to gather data quickly and securely and reduces processing time, which leads to faster response. This faster response, the strategic goal of any security implementation, is only possible with the appropriate platform and infrastructure, so organizations can get closer to real-time situational awareness. 


 


The Application of CDL 


 


With a solid strategy in place, it’s time to delve into the application of a CDL. Questions arise about its operation, making it actionable, its placement relative to the Security Operations Center (SOC), and potential integrations with agency Governance, Risk Management, and Compliance (GRC) tools and other monitoring systems. A mature security program needs a comprehensive, real-time view of an agency’s security posture, encompassing SOC activities and the agency’s governance, risk management, and compliance tasks. The CDL should interface seamlessly with existing or future Security Orchestration, Automation, and Response (SOAR) and Endpoint Detection and Response (EDR) tools, as well as ticketing systems. 


 


CDLs facilitate the sharing of analyses within their agencies, as well as with other Federal entities like the Department of Homeland Security (DHS), Cybersecurity and Infrastructure Security Agency (CISA), Federal law enforcement agencies, and intelligence agencies. Moreover, CDLs can bridge the gaps in a Federal security program, interlinking entities such as the SOC, GRC tools, and other security monitoring capabilities. At the highest levels of maturity, the CDL will leverage Network Operations Center (NOC) and even potentially administration information such as employee leave schedules. The benefit of modernizing the CDL lies in eliminating the requirement to segregate data before ingestion. Data is no longer categorized as security-specific or operations-specific. Instead, it is centralized into a single location, allowing CDL tools and models to assess the data’s significance. Monolithic technology stacks are effective when all workloads are in the same cloud environment. However, in a multi-cloud infrastructure, this approach becomes challenging. With workloads spread across different clouds, selecting one as a central hub incurs egress costs to transfer log data between clouds. Departments are exploring options to store data in the cloud where it’s generated, while also considering if Cloud Service Providers (CSPs) offer tools for analysis, visibility, machine learning, and artificial intelligence.  


 


The next step is for agencies to send actionable information to security personnel regarding potential incidents and provide mission owners with the intelligence necessary to enhance efficiency. Additionally, this approach eliminates the creation of separate silos for security data, mission data, financial information, and operations data. This integration extends to other Federal security initiatives such as Continuous Diagnostics and Mitigation (CDM), Authority to Operate (ATO), Trusted Internet Connection (TIC), and the Federal Risk and Authorization Management Program (FedRAMP). 


 


It’s also pivotal to determine if the CDL aligns with the MITRE ATT&CK Framework, which can significantly assist in incident response. MITRE ATT&CK® is a public knowledge base outlining adversary tactics and techniques based on observed events. The knowledge base aids in developing specific threat models and methodologies across various sectors. 


 


Lastly, to gauge the CDL’s applicability, one might consider creating a test case. Given the vast amount of log data — since logs are perpetual — this presents an ideal scenario for machine learning. Achieving real-time visibility can be challenging with the multiple layers of log aggregation, but timely insights might be within reach. For more resources from Microsoft Federal Security, please visit https://aka.ms/FedCyber 


 


Stay Connected


Connect with the Public Sector community to keep the conversation going, exchange tips and tricks, and join community events. Click “Join” to become a member and follow or subscribe to the Public Sector Blog space to get the most recent updates and news directly from the product teams. 

Simplifying Azure Kubernetes Service Authentication Part 1

This article is contributed. See the original author and article here.

If you are looking for a step-by-step guide on how to enable authentication for your Azure Kubernetes Service (AKS) cluster, you may have encountered some challenges. The documentation on this topic is scarce and often outdated or incomplete. Moreover, you may have specific requirements for your use case that are not covered by the existing resources. That is why I have created this comprehensive guide using the latest Azure cloud resources.


 


In this guide, you will learn how to set up an AKS cluster and provide authentication to that cluster using NGINX and the OAuth2 proxy. This guide is intended for educational purposes only and does not guarantee proper authentication as certified by NIST. It is also not a complete solution for securing your AKS cluster, which involves more than just authentication. Therefore, this guide should be used as a learning tool to help you understand how authentication works and how to implement it using Azure.


 


By following this guide, you will be able to set up an AKS cluster with authentication using NGINX, OAuth2 Proxy, and Microsoft Entra ID. You will not need to bring your own domain name, as we will use the fully qualified domain name (FQDN) provided by Azure; however, you can also use a domain name if you prefer. Additionally, we will use Let’s Encrypt for TLS certificates so that our application will use HTTPS.


 


Additionally, I have broken this guide into several parts. This is the first part where you will be guided through the creation of your AKS cluster and the initial NGINX configuration. I will provide the remaining parts in future posts.


 


To learn how to use NGINX with OAuth2 Proxy, I conducted thorough online research and consulted various tutorials, guides, and other sources of information. The following list contains some of the most helpful references that I used to create this guide. You may find them useful as well if you need more details or clarification on any aspect of this topic.



 


Getting Started


Before you begin, you will need to meet the following prerequisites:  



  • Azure CLI or Azure PowerShell

  • An Azure subscription

  • An Azure Resource Group


Create an Azure Container Registry (ACR)


To create an Azure container registry, you can follow the steps outlined in the official documentation here: Create a new ACR. An Azure container registry is a managed Docker registry service that allows you to store and manage your private Docker container images and related artifacts. For now, I’ll set up an ACR using PowerShell:


 


First, you’ll need to login:


 

az login --use-device-code

 


 


Then define a name for your ACR resource:


 


 

$MYACR = 'your_acr_name'

 


 


Then create your ACR resource:


 


 

New-AzContainerRegistry -Name $MYACR -ResourceGroupName myResourceGroup -Sku Basic

 


 


You may be prompted for a location, enter your location, and proceed.


Create an AKS cluster and integrate with ACR


Now create a new AKS cluster and integrate with the existing ACR you just created by running the following command.


 


 

New-AzAksCluster -Name YOUR_AKS_CLUSTER_NAME -ResourceGroupName myResourceGroup -GenerateSshKey -AcrNameToAttach $MYACR

 


 


The command above configures the appropriate AcrPull role for the managed identity and authorizes the existing ACR in your subscription. A managed identity from Microsoft Entra ID allows your app to easily access other Microsoft Entra protected resources.


Validate the Deployment


We will verify the deployment using the Kubernetes command-line client, kubectl. First, make sure the Az.Aks PowerShell module, which provides the AKS cmdlets used below, is installed by running the following command.


 


 

Install-Module Az.Aks

 


 


Configure the kubectl client to connect to your Kubernetes cluster. The following command downloads credentials and configures the Kubernetes CLI to use them.


 


 

Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster

 


 


Verify the connection to your cluster by running the following command.


 


 

kubectl get nodes

 


 


You should see output listing the names of the nodes in your cluster.


NGINX Ingress controller configuration


Now that we have our AKS cluster up and running with an attached ACR, we can configure our ingress controller, NGINX. The NGINX ingress controller provides a reverse proxy for configurable traffic routing and TLS termination. We will use NGINX to fence off our AKS cluster, exposing a public IP address through the load balancer to which we can then assign an FQDN for accessing our applications. Additionally, we can configure NGINX to integrate with Microsoft Entra ID to authenticate users via an OAuth2 Proxy; those details will be shared in a later post. You can follow the basic configuration for an ingress controller in the official documentation here: Create an unmanaged ingress controller.


Before configuration begins, make sure you have Helm installed. Then run the following commands.


 


 

$Namespace = 'ingress-basic'

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx `
  --create-namespace `
  --namespace $Namespace `
  --set controller.service.annotations."service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"=/healthz `
  --set controller.service.externalTrafficPolicy=Local

 


 


Check the load balancer


Now that you have configured and installed the NGINX ingress controller you can check the load balancer. Run the following command.


 


 

kubectl get services --namespace ingress-basic -o wide -w ingress-nginx-controller

 


 


You should see output for the ingress-nginx-controller service. When Kubernetes creates the load balancer service, a public IP address is assigned. You can view the IP address under the EXTERNAL-IP column. Make note of this IP address. If you browse to that IP address, you should get a 404 Not Found response from NGINX, because no ingress routes have been configured yet.


This wraps up the first part of this series. In the next part I will go over deploying two applications and creating the ingress routes to route to the applications. Then we will move on to setting up cert manager and getting things ready for our OAuth2 Proxy provider.

Azure OpenAI Service announces Assistants API, New Models for Finetuning, Text-to-Speech and more

This article is contributed. See the original author and article here.

Developers across the world have been building innovative generative AI solutions since the launch of Azure OpenAI Service in January 2023. Over 53,000 customers globally harness the capabilities of expansive generative AI models, supported by the robust commitments of Azure’s cloud and computing infrastructure backed by enterprise-grade security.


 


Today, we are thrilled to announce many new capabilities, models, and pricing improvements within the service. We are launching Assistants API in public preview, new text-to-speech capabilities, upcoming updated models for GPT-4 Turbo preview and GPT-3.5 Turbo, new embeddings models and updates to the fine-tuning API, including a new model, support for continuous fine-tuning, and better pricing. Let’s explore our new offerings in detail.


 


Build sophisticated copilot experiences in your apps with Assistants API


 


We are excited to announce that Assistants, a new feature in Azure OpenAI Service, is now available in public preview. The Assistants API makes it simple for developers to create high-quality, copilot-like experiences within their own applications. Previously, building custom AI assistants required heavy lifting, even for experienced developers. While the chat completions API is lightweight and powerful, it is inherently stateless, which means that developers had to manage conversation state and chat threads, tool integrations, retrieval documents and indexes, and code execution manually. The Assistants API, as the stateful evolution of the chat completions API, provides a solution for these challenges.


 



 


Building customizable, purpose-built AI that can sift through data, suggest solutions, and automate tasks just got easier. The Assistants API supports persistent and infinitely long threads. This means that as a developer you no longer need to develop thread state management systems and work around a model’s context window constraints. Once you create a Thread, you can simply append new messages to it as users respond. Assistants can access files in several formats – either while creating an Assistant or as part of Threads. Assistants can also access multiple tools in parallel, as needed. These tools include:


 



  • Code Interpreter: This Azure OpenAI Service-hosted tool lets you write and run Python code in a sandboxed environment. Use cases include solving challenging code and math problems iteratively, performing advanced data analysis over user-added files in multiple formats and generating data visualization like charts and graphs.

  • Function calling: You can describe functions of your app or external APIs to your Assistant and have the model intelligently decide when to invoke those functions and incorporate the function response in its messages.


Support for new features, including an improved knowledge retrieval tool, is coming soon.
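To make the Assistant, Thread, and tool flow above concrete, here is a minimal Python sketch using the AzureOpenAI client from the openai package. Treat it as an illustrative outline rather than a definitive implementation: the endpoint and key environment variables, the API version string, the deployment name, and the prompt are placeholder assumptions you would replace with your own values.

import os
import time

from openai import AzureOpenAI

# Placeholder endpoint, key, and API version for your Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

# Create an Assistant with the Code Interpreter tool enabled.
assistant = client.beta.assistants.create(
    name="data-helper",
    instructions="You analyze data and answer questions, using charts where helpful.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",  # assumption: the name of your model deployment
)

# Create a persistent thread and append a user message to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the key trends in this quarter's sales data.",
)

# Run the assistant on the thread and poll until the run completes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# List the messages on the thread (newest first) to see the assistant's reply.
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content)

Because the thread is persistent, a follow-up question is just another messages.create call and a new run on the same thread; no manual conversation-state management is required.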


 


Assistants API is built on the same capabilities that power OpenAI’s GPT product and offers unparalleled flexibility for creating a wide range of copilot-like applications. Use cases range from an AI-powered product recommender and a sales analyst app to a coding assistant, an employee Q&A chatbot, and more. Start building in the no-code Assistants playground or start building with the API.


 


As with the rest of our offerings, data and files provided by you to the Azure OpenAI Service are not used to improve OpenAI models or any Microsoft or third-party products or services, and developers can delete the data as per their needs. Learn more about data, privacy and security for Azure OpenAI Service here. We recommend using Assistants with trusted data sources. Retrieving untrusted data using Function calling, Code Interpreter with file input, and Assistant Threads functionalities could compromise the security of your Assistant, or the application that uses the Assistant. Learn about mitigation approaches here.


 


Fine-tuning: New model support, new capabilities, and lower prices


 


Since we announced Azure OpenAI Service fine-tuning for OpenAI’s Babbage-002, Davinci-002 and GPT-35-Turbo on October 16, 2023, we’ve enabled AI builders to build custom models. Today we’re releasing fine-tuning support for OpenAI’s GPT-35-Turbo 1106, a next gen GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Fine-tuning with GPT-35-Turbo 1106 supports 16k context length in training data, allowing you to fine-tune with longer messages and generate longer and more coherent texts.


 


In addition, we are introducing two new features to enable you to create more complex custom models and easily update them. First, we are launching support for fine-tuning with function calling that enables you to teach your custom model when to make function calls and improve the accuracy and consistency of the responses. Second, we are launching support for continuous fine-tuning, which allows you to train a previously fine-tuned model with new data, without losing the previous knowledge and performance of the model. This lets you add additional training data to an existing custom model without starting from scratch and lets you experiment more iteratively.
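As a rough sketch of what continuous fine-tuning could look like with the openai Python package against Azure OpenAI, the example below uploads new training data and starts a fine-tuning job from a previously fine-tuned model rather than the base model. The file name, the prior model identifier, and the API version are assumptions for illustration only.

import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # assumption: an API version that supports fine-tuning
)

# Upload the additional training data (a JSONL file of chat-formatted examples).
training_file = client.files.create(
    file=open("new_training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Continuous fine-tuning: use a previously fine-tuned model as the starting point
# instead of the base gpt-35-turbo-1106, so earlier training is not lost.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="my-previous-custom-model",  # hypothetical name of an earlier fine-tuned model
)

print(job.id, job.status)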


 


Besides new model support and features, we are making it more affordable for you to train and host your fine-tuned models on Azure OpenAI Service, including decreasing the cost of training and hosting GPT-35-Turbo by 50%.


 


Coming soon: New models and model updates


 


The following models and model updates are coming this month to Azure OpenAI Service. You can review the latest model availability here.


 


Updated GPT-4 Turbo preview and GPT-3.5 Turbo models


 


We are rolling out an updated GPT-4 Turbo preview model, gpt-4-0125-preview, with improvements in tasks such as code generation and reduced cases of “laziness” where the model doesn’t complete a task. The new model fixes a bug impacting non-English UTF-8 generations. Post-launch, we’ll begin updating Azure OpenAI deployments that use GPT-4 version 1106-preview to use version 0125-preview. The update will start two weeks after the launch date and complete within a week. Because version 0125-preview offers improved capabilities, customers may notice some changes in the model behavior and compatibility after the upgrade. Pricing for gpt-4-0125-preview will be same as pricing for gpt-4-1106-preview.


 


In addition to the updated GPT-4 Turbo, we will also be launching GPT-3.5-turbo-0125, a new GPT-3.5 Turbo model with improved pricing and higher accuracy at responding in various formats. We will reduce input prices for the new model by 50% to $0.0005 /1K tokens and output prices by 25% to $0.0015 /1K tokens.


 


New Text-to-Speech (TTS) models


 


Our new text-to-speech model generates human-quality speech from text in six preset voices, each with its own personality and style. The two model variants include tts-1, the standard voices model variant, which is optimized for real-time use cases, and tts-1-hd, the high-definition (HD) equivalent, which is optimized for quality. These models complement capabilities already available in Azure AI, such as building custom voices and avatars, and enable customers to build entirely new experiences across customer support, training videos, live-streaming, and more. Developers can now access these voices through both services, Azure OpenAI Service and Azure AI Speech.
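As an illustration only, a text-to-speech request through the openai Python package could look like the sketch below; the deployment name, the chosen voice, the API version, and the output file name are placeholder assumptions.

import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",  # placeholder API version
)

# Generate speech with the standard, real-time-optimized model variant.
response = client.audio.speech.create(
    model="tts-1",  # assumption: the name of your tts-1 deployment
    voice="alloy",  # one of the six preset voices
    input="Hello! Thanks for calling. How can I help you today?",
)

# Write the generated audio to an MP3 file.
response.stream_to_file("greeting.mp3")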


 


A new generation of embeddings models with lower pricing


 


Azure OpenAI Service customers have been incorporating embeddings models in their applications to personalize, recommend and search content. We are excited to announce a new generation of embeddings models that are significantly more capable and meet a variety of customer needs. These models will be available later this month.



  • text-embedding-3-small is a new smaller and highly efficient embeddings model that provides stronger performance compared to its predecessor text-embedding-ada-002. Given its efficiency, pricing for this model is $0.00002 per 1k tokens, a 5x price reduction compared to that of text-embedding-ada-002. We are not deprecating text-embedding-ada-002 so you can continue using the previous generation model, if needed.

  • text-embedding-3-large is our new best performing embeddings model that creates embeddings with up to 3072 dimensions. This large embeddings model is priced at $0.00013 / 1k tokens.


Both embeddings models offer native support for shortening embeddings (i.e., removing numbers from the end of the sequence) without the embedding losing its concept-representing properties. This allows you to make trade-offs between the performance and cost of using embeddings.
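As a small Python sketch of that trade-off, the request below asks text-embedding-3-small for a shortened 256-dimension vector via the dimensions parameter; the deployment name, API version, and dimension count are assumptions you would tune for your own workload.

import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",  # placeholder API version
)

# Request a shortened embedding: trailing dimensions are dropped while the
# vector keeps its concept-representing properties.
result = client.embeddings.create(
    model="text-embedding-3-small",  # assumption: the name of your embeddings deployment
    input=["Azure OpenAI Service announces new embeddings models"],
    dimensions=256,
)

vector = result.data[0].embedding
print(len(vector))  # 256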


 


What’s Next


 


It has been great to see what developers have built already using Azure OpenAI Service. You can further accelerate your enterprise’s AI transformation with the products we announced today. Explore the following resources to get started or learn more about Azure OpenAI Service.



 


We cannot wait to see what you build next!