Microsoft is named a Leader in 2023 Gartner® Magic Quadrant™ for B2B Marketing Automation Platform


This article is contributed. See the original author and article here.

Note: As announced at Microsoft Inspire 2023, as of September 1, 2023, Microsoft Dynamics 365 Marketing and Microsoft Dynamics 365 Customer Insights have been brought together into one offering. We are retaining the existing Dynamics 365 Customer Insights name to encompass this new offer of both applications. Customers can start with one or both applications and then further invest in the application they want to scale by buying the capacity they need.

In today’s turbulent economic times, companies face critical business challenges such as acquiring customers, increasing customer loyalty, and maximizing lifetime value. Often, to save time, they follow a one-size-fits-all approach, resulting in impersonal marketing strategies with low customer engagement. According to the Microsoft Work Trend Index, 89 percent of marketers say they struggle to find the time to do their jobs.

To meet these complex challenges, companies must shift their approach from traditional mass communication to personalized engagement based on a deep understanding of each customer’s preferences and actions, while ensuring their marketers have more time to apply their creative and strategic skills to engaging customers. With this very goal in mind, Microsoft launched Dynamics 365 Marketing in 2018.

We are pleased and honored to share that in a short span of five years in market, Microsoft has been recognized as a Leader within the 2023 Gartner Magic Quadrant for B2B Marketing Automation Platforms* for the second consecutive year. In this year’s report, Microsoft is positioned highest in Ability to Execute.

A Gartner Magic Quadrant for B2B Marketing Automation Platforms graph with relative positions of the market’s technology providers, including Microsoft.
Figure 1: Gartner Magic Quadrant for B2B Marketing Automation Platforms**

For Microsoft, this placement recognizes our commitment to helping companies better connect with their customers at scale, across all departments, and to making that connection simple for any company with a broad range of skillsets.

Accelerating the journey to more personalized customer engagement

We started our Dynamics 365 Marketing journey in April 2018. Since then, we’ve gathered feedback and continued to learn at a rapid pace to help our customers on their journey to drive meaningful customer engagement, ensure long-term loyalty, and accelerate business success. To be competitive in today’s market, organizations must harness the power of data to gain a deeper understanding of their customers, anticipate behaviors, and craft one-on-one personalized experiences across all touchpoints, including sales, marketing, business operations, and service functions. Generative AI makes these capabilities within reach for every company. That’s why we’ve brought together Dynamics 365 Marketing and Dynamics 365 Customer Insights as one offering named Dynamics 365 Customer Insights, an AI-led solution to revolutionize customer experience. The new Customer Insights enables our customers to be more flexible by giving them access to both a modern, AI-driven customer data platform (Customer Insights data application) and real-time marketing with customer journey orchestration (Customer Insights journeys application). Customers can start with one or both applications and invest in the areas where they most want to scale.

To drive the necessary customer experience (CX) transformation, companies cannot rely on piecemeal integration of sales, service, and marketing products. Gartner predicts that by 2026, 50 percent of replacement customer relationship management (CRM) sales technology decisions will involve solutions including non-sales software comprising other modules from a CRM or a CX suite.[1] However, the reality is that only a few companies are currently delivering on these expectations. Customer experiences often remain fragmented across channels and departments, leading to inconsistencies. Microsoft is uniquely positioned to help customers overcome these challenges, and Dynamics 365 Customer Insights was built exactly for this purpose—to support customers throughout their end-to-end CX journeys.

Like all Dynamics 365 offerings, Customer Insights relies on Microsoft Dataverse to store CRM software data, which enables our customers to securely store and manage their data and harness the true power of that data by removing silos across sales, service, and marketing via a unified platform approach. Customer Insights helps marketers and customer engagement professionals gain a holistic view of their customers, anticipate their needs, and discover growth opportunities. Marketers can also deliver more relevant, contextual, customer-triggered engagements through the power of Copilot in Dynamics 365 Customer Insights, and our most recent Copilot capabilities in Customer Insights make this even easier for marketers.

Enabling our customers to increase their reach

Zurich Insurance Group, a global insurer serving people and businesses in more than 200 countries, wanted to optimize marketing processes to help create more personalized customer experiences. Its Switzerland business unit connects with its customers by hosting online and in-person events—but to drive the highest impact, it must be sure it invites the right customers to the right events. It wanted to improve its ability to track whether customers received and opened event invitations, and how those invitations connected to registration and attendance. It also wanted a formalized way to collect feedback and easily use engagement data to continue optimizing the sales process after the event. Zurich selected Dynamics 365 Marketing to give it the flexibility to reach customers in new ways and drive more effective follow-ups to help shape their journeys. With Dynamics 365 Marketing, Zurich increased its lead quality by over 40 percent.

Over the past decade, Natuzzi, a globally hailed creator of exceptional luxury furniture that delivers a harmonious combination of design, function, aesthetics, and ethics, has seen a rapid global expansion of its heralded luxury brand. Natuzzi lacked a customer engagement platform capable of unifying data from its retail point of sale (POS), enterprise resource planning (ERP) system, and CRM systems. The company also wanted a way to bring together its business-to-business (B2B) and business-to-consumer (B2C) related data sets to drive greater insight between audiences. Adopting Dynamics 365 Marketing and Dynamics 365 Customer Insights, Natuzzi implemented an extensive customer experience platform to transform how its luxury brand discovers and sustains its customers. It uses customer data and insights to nurture customers and prospects through personalized campaigns, delivering emails, SMS texts, promotions, events, sales appointment reminders, and other relationship-building messages.

Microsoft named a Leader by Gartner

Microsoft is named a Leader in the 2023 Gartner Magic Quadrant for B2B Marketing Automation Platforms.

Learn more about Dynamics 365 Customer Insights

We’re excited to have been recognized as a Leader in the Gartner Magic Quadrant and are committed to helping our customers unify and enrich their customer data to deliver personalized, connected, end-to-end customer journeys across sales, marketing, and service. We truly believe that bringing together Dynamics 365 Marketing and Dynamics 365 Customer Insights lets us continue investing in stronger, insights-based marketing capabilities that help marketers and data analysts glean more from their customer data.

Read the 2023 Gartner Magic Quadrant for B2B Marketing Automation Platforms report.

Learn more about:

Contact your Microsoft representative to learn more about the value and return on investments, as well as the latest Microsoft Dynamics 365 Customer Insights offer.


  1. Gartner Forecast Analysis: CRM Sales Software, Worldwide, Roland Johnson, Amarendra, Julian Poulter, 12 December 2022.

Source: Gartner, Magic Quadrant for B2B Marketing Automation Platforms, Rick LaFond, Jeffrey L. Cohen, Matt Wakeman, Jeff Goldberg, Alan Antin, 20 September 2023.

*Gartner is a registered trademark and service mark and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

**This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.


The post Microsoft is named a Leader in 2023 Gartner® Magic Quadrant™ for B2B Marketing Automation Platform appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Revolutionizing Requirement Gathering: Azure DevOps Meets Azure OpenAI using Semantic Kernel


This article is contributed. See the original author and article here.

This blog is a deep dive into the future of requirement gathering: how Azure DevOps and Azure OpenAI are joining forces to transform the way we manage project requirements. From automated requirement generation to intelligent analysis, learn how these powerful tools are reshaping the landscape of project management. Stay tuned for an enlightening journey into the world of AI-powered requirement gathering!

Setting up the environment

Pre-requisites

Visual Studio Code

    Please install the extensions below:

    – Jupyter (Publisher: Microsoft)
    – Python (Publisher: Microsoft)
    – Pylance (Publisher: Microsoft)
    – Semantic Kernel Tools (Publisher: Microsoft)

Python

    Please install the Python packages below:

    – pip
    – semantic-kernel

Download the content from the GitHub repo.

 


Define the semantic function to generate a feature description

You should now have the folder structure shown below.

image1.png

 

Create a semantic function for generating the feature description.

The first step is to define a semantic function that can interpret the input string and map it to a specific action. In our case, the action is to generate a feature description from a title. The function could look something like this:

 1. Create the folder structure

    Create a /plugins folder.

    Inside the plugins folder, create a folder for the semantic plugin, in this case “AzureDevOps”. (For more details on plugins)

    Inside that plugin folder, i.e. ‘/plugins/AzureDevOps’, create a folder for the semantic function, in this case “FeatureDescription”. (For more details on functions)

2. Define the semantic function

    Once the folder structure is in place, let’s define the function by adding

        ‘config.json’ with the JSON content below (for more details on the content, refer here).

{
  "schema": 1,
  "description": "get standard feature title and description",
  "type": "completion",
  "completion": {
    "max_tokens": 500,
    "temperature": 0.0,
    "top_p": 0.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0
  },
  "input": {
    "parameters": [
      {
        "name": "input",
        "description": "The feature name.",
        "defaultValue": ""
      }
    ]
  }
}


 


In the file above, we define a semantic function that accepts an ‘input’ parameter and performs “get standard feature title and description”, as stated in the description field.

 

    Now, let’s put the single-shot prompt for our semantic function in ‘skprompt.txt’, where ‘{{$input}}’ will be replaced with our input ask.


 

Create feature title and description for {{$input}} in below format
Feature Title:"[Provide a short title for the feature]"
Description: "[Provide a more detailed description of the feature's purpose, the problem it addresses, and its significance to the product or project.]
 
User Needs- 
[Outline the specific user needs or pain points that this feature aims to address.] 
 
Functional Requirements:-
- [Requirement 1] 
- [Requirement 2] 
- [Requirement 3] 
- ... 
 
Non-Functional Requirements:-
- [Requirement 1] 
- [Requirement 2] 
- [Requirement 3] 
- ... 
 
Feature Scope: 
[Indicate the minimum capabilities that the feature should address. Agreed upon between Engineering Leads and Product Managers] "


 



Now let’s see the semantic function above in action.



Rename ‘.env.example’ to ‘.env’ and update the parameters with actual values.

Open the notebook “Create-Azure-Devops-feature-from-requirement-text” in Visual Studio Code and follow the steps below to test.

        Step 1: Install all Python libraries

!python -m pip install semantic-kernel==0.3.10.dev0
!python -m pip install azure-devops







Step 2: Import the packages required to prepare a Semantic Kernel instance.

import os
from dotenv import dotenv_values
import semantic_kernel as sk
from semantic_kernel import ContextVariables, Kernel # Context to store variables and Kernel to interact with the kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, OpenAIChatCompletion # AI services
from semantic_kernel.planning.sequential_planner import SequentialPlanner # Planner

kernel = sk.Kernel() # Create a kernel instance
kernel1 = sk.Kernel() # Create a second kernel instance so the semantic function is not registered in the same kernel as the native function

useAzureOpenAI = True

# Configure AI service used by the kernel
if useAzureOpenAI:
    deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
    kernel.add_chat_service("chat_completion", AzureChatCompletion(deployment, endpoint, api_key))
    kernel1.add_chat_service("chat_completion", AzureChatCompletion(deployment, endpoint, api_key))
else:
    api_key, org_id = sk.openai_settings_from_dot_env()
    kernel.add_chat_service("chat-gpt", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))


 


  Step 3: Import the skills and functions from the plugins folder

# note: importing the semantic plugin from the local plugins folder
plugins_directory = "./plugins"

# Import the semantic functions
DevFunctions=kernel1.import_semantic_skill_from_directory(plugins_directory, "AzureDevOps")
FDesFunction = DevFunctions["FeatureDescription"]  


 


Step 4: Call the semantic function with a feature title to generate a feature description based on the predefined template

resultFD = FDesFunction("Azure Resource Group Configuration Export and Infrastructure as Code (IAC) Generation")
print(resultFD)



 

 

Create a native function to create features in Azure DevOps


 – Create a file “native_function.py” under the “AzureDevOps” folder, or download the file from the repo.

 – Copy the code below and update the Azure DevOps parameters. These could be passed in as context parameters, but for the simplicity of this exercise they are kept hardcoded. The code flow is:

        – Import the Python packages.

        – Define the class ‘feature’ and the native function “create” decorated with “@sk_function”.

        – Call the semantic function to generate the feature description.

        – Use this description to create the Azure DevOps feature.

from semantic_kernel.skill_definition import (
    sk_function,
    sk_function_context_parameter,
)

from semantic_kernel.orchestration.sk_context import SKContext
from azure.devops.v7_1.py_pi_api import JsonPatchOperation

from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
import base64
from semantic_kernel import ContextVariables, Kernel
import re


class feature:
    def __init__(self, kernel: Kernel):
        self._kernel = kernel

    @sk_function(
        description="create an Azure DevOps feature with a description",
        name="create",
    )
    @sk_function_context_parameter(
        name="title",
        description="the title of the feature",
    )
    @sk_function_context_parameter(
        name="description",
        description="Description of the feature",
    )
    async def create_feature(self, context: SKContext) -> str:
        feature_title = context["title"]
        # Call the semantic function to generate the feature description
        get_feature = self._kernel.skills.get_function("AzureDevOps", "FeatureDescription")
        fdetails = get_feature(feature_title)
        # Regular expression pattern to match the feature title in the generated text
        pattern = r"Feature Title:\s+(.+)"
        # Search for the pattern in the generated output
        match = re.search(pattern, str(fdetails))
        # Check if a match was found
        if match:
            feature_title = match.group(1)
        # Drop the first line (the title) and keep the rest as the description
        lines = str(fdetails).split('\n')
        lines = [line for index, line in enumerate(lines) if index not in [0]]
        description = '\n'.join(lines)
        # Azure DevOps parameters -- hardcoded for the simplicity of this exercise
        targetOrganizationName = "XXX"
        targetProjectName = "test"
        targetOrganizationPAT = "XXXXXX"
        teamName = "test Team"
        areaName = teamName
        iterationName = "Sprint 1"
        targetOrganizationUri = 'https://dev.azure.com/' + targetOrganizationName
        credentials = BasicAuthentication('', targetOrganizationPAT)
        connection = Connection(base_url=targetOrganizationUri, creds=credentials)
        userToken = "" + ":" + targetOrganizationPAT
        base64UserToken = base64.b64encode(userToken.encode()).decode()
        headers = {'Authorization': 'Basic ' + base64UserToken}
        core_client = connection.clients.get_core_client()
        targetProjectId = core_client.get_project(targetProjectName).id
        # JSON patch document describing the new Feature work item
        workItemObjects = [
                {
                    'op': 'add',
                    'path': '/fields/System.WorkItemType',
                    'value': "Feature"
                },
                {
                    'op': 'add',
                    'path': '/fields/System.Title',
                    'value': feature_title
                },
                {
                    'op': 'add',
                    'path': '/fields/System.State',
                    'value': "New"
                },
                {
                    'op': 'add',
                    'path': '/fields/System.Description',
                    'value': description
                },
                {
                    'op': 'add',
                    'path': '/fields/Microsoft.VSTS.Common.AcceptanceCriteria',
                    'value': "acceptance criteria"
                },
                {
                    'op': 'add',
                    'path': '/fields/System.IterationPath',
                    'value': targetProjectName + "\\" + iterationName
                }
            ]
        # Convert the dictionaries into JsonPatchOperation objects expected by the client
        jsonPatchList = [
            JsonPatchOperation(op=item['op'], path=item['path'], value=item['value'])
            for item in workItemObjects
        ]
        work_client = connection.clients.get_work_item_tracking_client()
        try:
            WorkItemCreation = work_client.create_work_item(jsonPatchList, targetProjectName, "Feature")
        except Exception as e:
            return feature_title + " feature creation failed: " + str(e)
        return feature_title + " feature created successfully"








 

Let’s execute the native function


Let’s go back to the notebook.

        Step 5: Import the native function

    

from plugins.AzureDevops.native_function import feature
devops_plugin = kernel.import_skill(feature(kernel1), skill_name="AzureDevOps")
variables = ContextVariables()



 

 Step 6: Execute the native function by putting a natural-language query in the title field

variables["title"] = "creating a nice pipelines"
variables["description"] = "test"
result = await kernel.run_async(
                math_plugin["create"], input_vars=variables
            )
print(result)


 

Using the Sequential Planner to dynamically create N features

Step 7: Initiate the Sequential Planner with Semantic Kernel

from plugins.AzureDevops.native_function import feature
planner = SequentialPlanner(kernel)
# Import the native functions
AzDevplugin = kernel.import_skill(feature(kernel1), skill_name="AzureDevOps")
ask = "create two Azure DevOps features for one with title creating user and one with creating work items with standard feature title and description"
plan = await planner.create_plan_async(goal=ask)
for step in plan._steps:
        print(step.description, ":", step._state.__dict__)


This generates a plan to meet the goal, which in the above case is “create two Azure DevOps features for one with title creating user and one with creating work items with standard feature title and description”, using the functions available in the kernel.

Step 8: Once the plan is created, we can execute it to create multiple features.


print("Plan results:")
result = await plan.invoke_async(ask)
for step in plan._steps:
        print(step.description, ":", step._state.__dict__)


 

This will create two features, one for the user scenario and one for the work item scenario. Using these building blocks, you can create a semantic-function-based solution that interprets a natural-language requirement document, or the transcript of a requirements call, and uses it to create features in Azure DevOps, as sketched below. You can increase the accuracy of this solution by bringing in multi-shot prompts and historical data using collections.
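As a rough sketch of that idea (not part of the original sample), the snippet below reuses the kernel, the Sequential Planner, and the AzureDevOps plugin already set up in the previous steps; the requirements text is just an invented example.

# A minimal sketch: turn a free-text requirements summary into Azure DevOps features.
# Assumes `kernel`, `planner`, and the AzureDevOps plugin are already configured as above;
# the requirements text is a made-up example.
requirements_text = """
The portal must let administrators create users in bulk.
The portal must expose an API for creating work items programmatically.
"""

ask = (
    "Create one Azure DevOps feature, with a standard feature title and description, "
    "for each requirement in the following text: " + requirements_text
)

plan = await planner.create_plan_async(goal=ask)   # planner from Step 7
result = await plan.invoke_async(ask)              # executes the generated steps
print(result)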

 



 

 



Lesson Learned #432: Resolving DataSync Failures in Azure SQL Database Caused by Custom Triggers

This article is contributed. See the original author and article here.

Azure SQL Database provides a robust DataSync service to synchronize data across multiple Azure SQL databases or between on-premises SQL Server and Azure SQL databases. While the service is generally reliable, some exceptions can disrupt the smooth flow of data synchronization. One such error occurs when custom-defined triggers interfere with DataSync’s internal processes, resulting in a failure like the one described below: Sync failed with the exception ‘An unexpected error occurred when applying batch file sync_XXXXX-XXX-XYZ-afb1-XXXX.batch. See the inner exception for more details. Inner exception: Index was outside the bounds of the array. For more information, provide tracing ID ‘NNNN-3414-XYZ-ZZZ-NNNNNNNX’ to customer support.’


 


Analyzing the logs, we found that the error message points to a failure when applying a batch file for data synchronization, with an inner exception indicating that an “Index was outside the bounds of the array.” In this situation, the error occurs when a custom trigger modifies the underlying data in a way that interferes with DataSync’s internal data sync trigger responsible for bulk-insert operations.


 


In this situation, once we have identified the trigger and table causing the issue, we temporarily disable the identified custom trigger and attempt to synchronize the data again. If the data syncs successfully, this confirms that the custom trigger is the cause; remember to re-enable it once the investigation is complete.


 

-- Temporarily disable the custom trigger on the affected table before re-running the sync
DISABLE TRIGGER [Trigger_Name] ON [Table1];
-- Re-enable the trigger once the sync has completed
ENABLE TRIGGER [Trigger_Name] ON [Table1];

 

What’s new in Microsoft Intune (2309) September edition

This article is contributed. See the original author and article here.

We’ve got several new capabilities to announce with our September service release (2309), including Microsoft Intune Suite Remote Help expanding to macOS and enhancements to Remote Help for Windows. We’re releasing the Zebra LifeGuard Over-the-Air integration with Intune, which we offered for public preview in May, and we’ve added more than 30 settings for Apple devices, part of our ongoing effort to ensure Intune has Day zero support for the latest Apple releases. Finally, we’ve released Microsoft Intune Endpoint Privilege Management for Windows 365 devices so customers can facilitate elevations for users on Cloud PC devices.


Your feedback is important! Please let us know your thoughts on these new developments by commenting on this post or connecting with me on LinkedIn.


Advancing Remote Help


This month, we’re expanding the capabilities of Remote Help to make it easier for helpdesk agents to assist users and solve issues remotely.


Firstly, Remote Help is now available on macOS! We’ve heard from customers that this is an essential feature of the Microsoft Intune Suite, and we’re excited to expand this capability to macOS. Helpdesk staff on macOS can now connect in view-only sessions to assist macOS users remotely.


Additionally, we’re now offering the ability to launch Remote Help for Windows from the Intune admin center. With this capability, helpdesk agents can seamlessly launch Remote Help on both their device and the user’s. Previously, both the helpdesk and the user had to launch Remote Help on their devices manually. With the new capability, the user receives a notification on their device that the helpdesk agent wants to begin a Remote Help session, making it a more streamlined experience.


Intune integration with Zebra LifeGuard OTA


This month, as part of our efforts to improve the experience for frontline workers, the Zebra LifeGuard Over-the-Air (LG OTA) integration with Intune moves from public preview to generally available. With this firmware over-the-air (FOTA) solution, IT admins can update ruggedized Zebra Android devices securely and efficiently without physical access to the devices.


Zebra device updates are managed from the Intune admin center and distributed wirelessly. This makes it easier to keep devices up to date, prevents compatibility issues for users, and reduces security risks. Customers have been asking for the ability to use Intune to manage Zebra devices, and we’re happy to deliver!


New Apple features and iOS/iPadOS 17 and macOS 14 release


We’re always working to improve the Intune experience for Apple users—including for the latest operating systems. With the Apple release of iOS 17.0 and macOS 14.0, our goal is to ensure that Microsoft Intune can provide Day zero support so that features work seamlessly. As part of this effort, we’ve improved the settings catalog and simplified and expedited settings updates for IT admins and users.


To prepare for the releases, we’ve provided many additional settings for Apple devices. We’re aiming to speed up response time and bring these settings in as quickly as possible. Now, we can provide them in a matter of hours instead of months, which is critical as features and capabilities are added to address new Apple releases. The latest batch includes more than 30 additional settings. The settings catalog for macOS, iOS, and iPadOS lists all the settings admins can configure in a device policy.


EPM for Windows 365 devices


Microsoft Intune Endpoint Privilege Management (EPM), part of the Microsoft Intune Suite, enables IT admins to selectively allow applications to run with administrative privileges. Organizations can now facilitate elevations for users on Cloud PC devices via EPM enabling users to easily elevate approved applications without the need for full administrative rights on their Windows device. This means greater efficiency and security for your organization.


Let us know how we’re doing!


Your comments help us improve. Let us know how our new features are working for you by commenting on this post or connecting with me on LinkedIn. Stay tuned for more announcements next month!

Surface Hub 3: Bridging workforce collaboration with Microsoft Teams Rooms


This article is contributed. See the original author and article here.

In the constantly evolving landscape of modern work, success involves effective meetings and a collaborative workforce. Microsoft understands this well and has introduced Surface Hub 3, an all-in-one hybrid meeting and collaboration device set to transform the way we work.


 


With this device – the only collaboration board designed end-to-end by Microsoft – we are offering consistency and simplicity to organizations that have Surface Hubs and other Microsoft Teams Rooms in their spaces, while delivering the most options for active collaboration so that teams can get more done.


 


 



Learn more about Surface Hub 3 from Sonia and me in our YouTube video!


 


Unified Microsoft Teams Rooms Experience


Surface Hub 3 is joining the Microsoft Teams Rooms family as an all-in-one Teams Rooms board running Teams Rooms on Windows. With a consistent experience across all meeting spaces, your team can now effortlessly transition from one room to another, whether the space features the streamlined, touch-first interface on Surface Hub 3 or the traditional console-based Teams Rooms setup. This also means Surface Hub 3 now supports features long requested by Hub customers—including persistent chat, the Front Row layout (which looks particularly beautiful on the 85” screen), and more. And going forward, customers can expect newly released Teams Rooms features to also come to Surface Hub on Day 1.


SUR24-COMR-Hub3-85-50-50-Portrait-001-RGB.png


Immersive Meeting Experience


Surface Hub 3 brings a wave of new capabilities.


 



  • Smart Rotation and Portrait: Physically rotate the Surface Hub 3 50” between Portrait and Landscape at any time to adapt the screen layout to suit your needs, whether for a natural whiteboarding session or a more personable one-on-one call.

  • Mobility and Versatility: The Surface Hub 3 50” is fully mobile on a Steelcase Roam Stand*, offering flexibility in deployment. Choose from a variety of stands and wall-mounting solutions from Steelcase and our Designed for Surface partners. With the APC™ Charge Mobile Battery*, the Surface Hub 3 50” can be taken virtually anywhere in the building.

  • Premium Design: Surface Hub 3 prioritizes inclusive meetings with clear audio and visuals. The high-resolution, 4K PixelSense display with an anti-glare coating makes content visible in any lighting condition.

  • Intelligent Audio: The Surface Hub 3 50” features two microphone arrays and speaker pairings. Smart AV optimizes audio based on device orientation, delivering the best stereo experience whether in Portrait or Landscape.

  • Seamless Integration: Surface Hub 3 pairs with Microsoft Teams Rooms certified peripherals in larger conference rooms, thanks to the Microsoft Teams Rooms on Windows platform. This creates a world of possibilities for different meeting spaces, from traditional setups to large classrooms, with external microphones, speakers, cameras, and more.

  • Enhanced Collaboration: Surface Hub 3 supports active inking with up to two Surface Hub Pens or Surface Slim pens, providing 20 points of multitouch for immersive on-device collaboration. Built-in palm rejection ensures a natural interaction experience.

  • Faster Performance: With a 60% CPU performance increase and a 160% GPU graphics performance increase gen-on-gen, Surface Hub 3 customers will enjoy a more powerful system that is also primed to capitalize on future software innovation.

With these capabilities and more, Surface Hub 3 revolutionizes meetings, offering a versatile and inclusive solution for modern workspaces.


Hub_50_1-1_VideoChat_09212023_Blog.png


 1:1 video chat in Portrait on Surface Hub 3 50”


 


AI-Powered Meetings and Brainstorming


Surface Hub 3 enables customers to leverage AI more than ever to enhance hybrid meetings and collaboration sessions. For example, Cloud IntelliFrame** allows remote attendees to see in-person Surface Hub users more clearly through a smart video feed that separates participants into individual boxes and helps remove distractions. Video segmentation with a unified background in Front Row uses AI to foster inclusion by removing backgrounds and adjusting video sizes, so remote attendees are literally on the same level with each other.** And in the future, Surface Hub 3 will take brainstorming to a new level with AI-powered features from Microsoft Copilot. Copilot in Whiteboard on Surface Hub will help generate and organize ideas efficiently, freeing up time for your team to focus on creative ideation. Stay tuned for more details.


Hub_85_FrontRow_Copilot-in-Whiteboard_09212023_Blog_v2.png


Copilot in Whiteboard, and video segmentation with a unified background, both in Front Row on Surface Hub 3 85”


 


Streamlined IT Management


As an IT professional, managing devices in your organization can be a complex task. Surface Hub 3 reduces IT complexity with a streamlined management experience through Microsoft Teams admin center and the new Microsoft Teams Rooms Pro Management Portal**. This allows you to manage all devices seamlessly, making your job easier and ensuring a hassle-free experience for your users.


 


Microsoft Teams Admin Center IT management.png


Microsoft Teams admin center, managing Teams Rooms on Windows


 


Easy Transition and Support


In-market Surface Hub 2S devices can upgrade to the full Surface Hub 3 experience with the Surface Hub 3 Pack. Starting next year, software migration will also be available for Surface Hub 2S devices to move to Microsoft Teams Rooms on Windows. For those customers continuing to run Windows 10 Team edition on their Surface Hub 2S devices, support for that OS will continue until October 14, 2025.


 


SUR19_Hub2S_Feature_Compute_Module_013_RGB.png


 


The Surface Hub 3 Pack is easy to swap into both 50” & 85” Surface Hub 2S devices


 


Innovation is at the heart of our journey, from our origins over a decade ago with Perceptive Pixel and PixelSense to Surface Hub 3. As we continue to push the limits of what’s possible in meetings and teamwork, Surface Hub 3 stands ready to empower your organization for the modern workplace.


Preorder now to elevate your meeting room experience to new heights and embrace the future of collaboration!


 


*Steelcase Mobile Roam Stand and Schneider Electric APC Charge Mobile Battery sold separately.


**Software license required. Sold separately.


 

Announcing Microsoft 365 Copilot general availability and Microsoft 365 Chat


This article is contributed. See the original author and article here.

Today at an event in New York, we announced our vision for Microsoft Copilot—a digital companion for your whole life—that will create a single Copilot user experience across Bing, Edge, Microsoft 365, and Windows.

The post Announcing Microsoft 365 Copilot general availability and Microsoft 365 Chat appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Enforcing and Managing Azure DDoS Protection with Azure Policy


This article is contributed. See the original author and article here.

Introduction


In today’s interconnected digital landscape, Distributed Denial of Service (DDoS) attacks have become a persistent threat to organizations of all sizes. These attacks can disrupt services, compromise sensitive data, and lead to financial losses. To counter this threat, Microsoft Azure offers robust DDoS protection capabilities. In this blog post, we will explore how organizations can leverage Azure Policy to enforce and manage Azure DDoS Protection, enhancing their security posture and ensuring uninterrupted services.


 


The main objective of this post is to equip you with the knowledge to effectively utilize the built-in policies for Azure DDoS protection within your environment. This includes enabling automated scaling without the need for manual intervention and ensuring that DDoS protection is enabled across your public endpoints.


 


Understanding Azure DDoS Protection


Microsoft Azure DDoS Protection is a service designed to protect your applications from the impact of DDoS (Distributed Denial of Service) attacks. These attacks aim to overwhelm an application’s resources, rendering it inaccessible to legitimate users. Azure DDoS Protection provides enhanced mitigation capabilities that are automatically tuned to protect your specific Azure resources within a virtual network. It operates at both layer 3 (network layer) and layer 4 (transport layer) to defend against volumetric and protocol attacks.


 


Azure Policy Overview


Azure Policy is an integral part of Azure Governance, offering centralized automation for enforcing and monitoring organizational standards and compliance across your Azure environment. It streamlines the deployment and management of policies, ensuring consistency in resource configurations. Azure Policy is a powerful tool for aligning resources with industry and organizational security standards, reducing manual effort, and enhancing operational efficiency.


 


SaleemBseeu_1-1695217828434.png


 


Benefits of Using Azure Policy for DDoS Protection


1- Consistency Across Resources:


Azure Policy enables you to establish a uniform DDoS protection framework across your entire Azure environment. This consistency ensures that no resource is left vulnerable to potential DDoS attacks due to misconfigurations or oversight.


 


2- Streamlined Automation:


The automation capabilities provided by Azure Policy are great for managing DDoS protection. Instead of manually configuring DDoS settings for each individual resource, Azure Policy allows you to define policies once and apply them consistently across your entire Azure infrastructure. This streamlining of processes not only saves time but also minimizes the risk of human error in policy implementation.


 


3- Enhanced Compliance:


Adherence to industry and organizational security standards is a top priority for businesses of all sizes. Azure Policy facilitates compliance by allowing you to align your resources with specific security baselines. By enforcing DDoS protection policies that adhere to these standards, you can demonstrate commitment to security and regulatory compliance, thereby improving the trust of your customers and partners.


 


Built-In Azure DDoS Protection definitions


Note: Azure DDoS Protection Standard has been renamed Azure DDoS Network Protection. However, it’s important to be aware that the names of the built-in policies have not yet been updated to reflect this change.


 


Azure DDoS Protection Standard should be enabled


This Azure policy is designed to ensure that all virtual networks containing a subnet with an application gateway that has a public IP have Azure DDoS Network Protection enabled. The application gateway can be configured to have a public IP address, a private IP address, or both. A public IP address is required when you host a backend that clients must access over the Internet via an Internet-facing public IP. This policy ensures that these resources are adequately protected from DDoS attacks, enhancing the security and availability of applications hosted on Azure.


 


For detailed guidance deploying Application gateway with Azure DDoS protection, see here: Tutorial: Protect your application gateway with Azure DDoS Network Protection – Azure Application Gateway | Microsoft Learn


 


Public IP addresses should have resource logs enabled for Azure DDoS Protection Standard


This policy ensures that resource logs for all public IP addresses are enabled and configured to stream to a Log Analytics workspace. This is important as it provides detailed visibility into the traffic data and DDoS attack information.


The diagnostic logs provide insights into DDoS Protection notifications, mitigation reports, and mitigation flow logs during and after a DDoS attack. These logs can be viewed in your Log Analytics workspace. You will get notifications any time a public IP resource is under attack, and when attack mitigation is over. Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic, and other interesting data points during an active DDoS attack in near-real time. Mitigation reports offer regular updates on DDoS mitigation, provided every 5 minutes during an attack, and a post-mitigation report is generated afterwards for a comprehensive overview.


 


SaleemBseeu_2-1695217994307.png


 


This policy ensures that these logs are properly configured and streamed to a Log Analytics workspace for further analysis and monitoring. This enhances the security posture by providing detailed insights into traffic patterns and potential security threats while also providing a scalable way to enable telemetry without manual work.


 


Virtual networks should be protected by Azure DDoS Protection


This policy is designed to ensure that all your virtual networks are associated with a DDoS Protection Network plan. This policy scans your Azure environment and identifies any virtual networks that do not have the DDoS Protection Network plan enabled. If such a network is found, the policy can optionally create a remediation task. This task will associate the non-compliant virtual network with the specified DDoS Protection Plan. This policy helps maintain the security and integrity of your Azure environment by enforcing the best practices for DDoS protection.


We also have a more granular version of this policy, called “Virtual Networks should be protected by Azure DDoS Protection Standard – tag based”. This policy allows you to audit only those VNets that carry a specific tag. This means you can enable DDoS protection exclusively on VNets that contain your chosen tag. While this version is not available as a built-in policy, you can deploy it directly from our GitHub repository: Azure-Network-Security/Azure DDoS Protection/Policy – Azure Policy Definitions/Policy – Virtual Networks should be enabled with DDoS plan at master · Azure/Azure-Network-Security (github.com)


 


Implementing Azure Policy for DDoS Protection


Defining the Policy


The first step starts with the selection of policy definitions. Given that we already have a set of built-in policies at our disposal, we will choose one of them. In the ‘Definitions’ section, search for ‘DDoS’. For the purposes of this tutorial, I will use the definition titled ‘Virtual networks should be protected by Azure DDoS Protection Standard.’ Upon opening this definition, you can read its description and look at the definition logic.


If you wish to modify the built-in definition before assigning it, you can select the duplicate option to create a copy of it. Choose a name for your duplicated definition, specify its category, and provide a customized description. After saving your changes, a new definition will be created, complete with your changes and categorized as a custom definition.


 


SaleemBseeu_3-1695218062308.png


 


Policy Assignment and Scope


For the next step let’s start assigning our policy definition. To do this, select the ‘Assign’ option located in the top left corner under the definition. The first section you’ll see is ‘Scope’. Here, select the subscription where you want the policy to be active. For a more granular approach, you can also select a specific resource group. In the ‘Basics’ section, you have the option to change the assignment name and add a description.


 


SaleemBseeu_0-1695224200427.png


 


Note: Make sure to select ‘Enabled’ under policy enforcement if you want the policy to be actively enforced. If you only want to identify which resources are compliant without enforcing the policy, you can leave this setting as ‘Disabled’. For more information about policy enforcement, see Details of the policy assignment structure – Azure Policy | Microsoft Learn


 


Next, go to the ‘Parameters’ section and choose the DDoS protection plan that you intend to use for protecting your VNets. Non-compliant VNets will be associated with this selected plan.


 


The final section is ‘Remediation’. Here, you have the option to create a remediation task. This means that when the policy is created, the remediation will apply not only to newly created resources but also to existing ones. If this aligns with your desired outcome, check the box for ‘Create a remediation task’ and select the DDoS policy.


 


SaleemBseeu_1-1695224256835.png


 


Since our policy has a modify effect, it requires either an existing user-assigned managed identity or a system-assigned managed identity. The portal will automatically provide an option to create a managed identity with the necessary permissions, which in this case is ‘Network Contributor’. To learn more about managed identity, see here Remediate non-compliant resources – Azure Policy | Microsoft Learn


 


Policy Enforcement Best Practices


1- Granularity: Policies should be customized to match the specific needs of different resource types and their importance levels. For example, not all VNets may need DDoS protection, and applying a one-size-fits-all policy across all resources could lead to unnecessary expenses. That’s why it’s important to evaluate the needs of each resource. Resources that handle sensitive data or are vital for business operations may need stricter policies compared to those that are less important. This approach ensures that each resource is properly secured while also being cost-effective.


 


2- Testing: Before deploying policies to critical resources, it’s recommended to test them in a non-production environment. This allows you to assess the impact of the policies and make necessary adjustments without affecting your production environment. It also helps in identifying any potential issues or conflicts with existing configurations.


 


3- Monitoring: Regularly reviewing policy compliance is crucial for maintaining a secure and compliant Azure environment. This involves checking the compliance status of your resources and adjusting policies as necessary based on the review. Azure Policy provides compliance reports that can help in this process. For more information on how to get compliance data or manually start an evaluation scan, see here Get policy compliance data – Azure Policy | Microsoft Learn


 


Conclusion


Using Azure Policy to enforce and manage Azure DDoS Protection is an essential part of a proactive and comprehensive security strategy. It allows you to continuously monitor your Azure environment, identify non-compliant resources, and take corrective action promptly. This approach not only enhances the security of your applications but also contributes to maintaining their availability and reliability.


 


Resources


Azure DDoS Protection Overview | Microsoft Learn


Overview of Azure Policy – Azure Policy | Microsoft Learn


Details of the policy definition structure – Azure Policy | Microsoft Learn


Understand scope in Azure Policy – Azure Policy | Microsoft Learn


Deploying DDoS Protection Standard with Azure Policy – Microsoft Community Hub

An introduction to Microsoft Defender EASM’s Data Connections functionality


This article is contributed. See the original author and article here.

Microsoft Defender External Attack Surface Management (EASM) continuously discovers a large amount of up-to-the-minute attack surface data, helping organizations know where their internet-facing assets lie. Connecting and automating this data flow to all our customers’ mission-critical systems that keep their organizations secure is essential to understanding the data holistically and gaining new insights, so organizations can make informed, data-driven decisions.


 


In June, we released the new Data Connections feature within Defender EASM, which enables seamless integration into Azure Log Analytics and Azure Data Explorer, helping users supplement existing workflows to gain new insights as the data flows from Defender EASM into the other tools. The new capability is currently available in public preview for Defender EASM customers.


 


Why use data connections?


The data connectors for Log Analytics and Azure Data Explorer can easily augment existing workflows by automating recurring exports of all asset inventory data and the set of potential security issues flagged as insights to specified destinations to keep other tools continually updated with the latest findings from Defender EASM. Benefits of this feature include:


 



  • Users have the option to build custom dashboards and queries to enhance security intelligence. This allows for easy visualization of attack surface data, which can then be analyzed further.

  • Custom reporting enables users to leverage tools such as Power BI. Defender EASM data connections will allow the creation of custom reports that can be sent to CISOs and highlight security focus areas.

  • Data connections enable users to easily assess their environment for policy compliance.

  • Defender EASM’s data connectors significantly enrich existing data to be better utilized for threat hunting and incident handling.

  • Data connectors for Log Analytics and Azure Data Explorer enable organizations to integrate Defender EASM workflows into the local systems for improved monitoring, alerting, and remediation.


In what situations could the data connections be used?


While there are many reasons to enable data connections, below are a few common use cases and scenarios you may find useful.


 



  • The feature allows users to push asset data or insights to Log Analytics to create alerts based on custom asset or insight data queries. For example, a query that returns new High Severity vulnerability records detected on Approved inventory can be used to trigger an email alert, giving details and remediation steps to the appropriate stakeholders. The ingested logs and Alerts generated by Log Analytics can also be visualized within tools like Workbooks or Microsoft Sentinel.

  • Users can push asset data or insights to Azure Data Explorer/Kusto to generate custom reports or dashboards via Workbooks or Power BI. For example, a custom-developed dashboard that shows all of a customer’s approved Hosts with recent/current expired SSL Certificates that can be used for directing and assigning the appropriate stakeholders in your organization for remediation.

  • Users can include asset data or insights in a data lake or other automated workflows. For example, generating trends on new asset creation and attack surface composition or discovering unknown cloud assets that return 200 response codes.


How do I get started with Data Connections?


We invite all Microsoft Defender EASM users to participate in using the data connections to Log Analytics and/or Azure Data Explorer so you can experience the enhanced value it can bring to your data, and thus, your security insights.


 


Step 1) Ensure your organization meets the preview prerequisites


















Aspect: Environmental Requirements

Details:
– A Defender EASM resource must be created and contain an Attack Surface footprint.
– Must have Log Analytics and/or Azure Data Explorer/Kusto.


Aspect: Required Roles & Permissions

Details:
– Must have a tenant with Defender EASM created (or be willing to create one). This provisions the EASM API service principal.
– User and Ingestor roles assigned to the EASM API (Azure Data Explorer).



 


Step 2) Access the Data Connections


Users can access Data Connections from the Manage section of the left-hand navigation pane (shown below) within their Defender EASM resource blade. This page displays the data connectors for both Log Analytics and Azure Data Explorer, listing any current connections and providing the option to add, edit or remove connections.


Step 2 - Access data connections.png


Connection prerequisites: To successfully create a data connection, users must first ensure that they have completed the required steps to grant Defender EASM permission for the tool of their choice. This process enables the application to ingest our exported data and provides the authentication credentials needed to configure the connection.


 


Step 3: Configure Permissions for Log Analytics and/or Azure Data Explorer


Log Analytics:


  1. Open the Log Analytics workspace that will ingest your Defender EASM data or create a new workspace.

  2. On the leftmost pane, under Settings, select Agents.

  3. Expand the Log Analytics agent instructions section to view your workspace ID and primary key. These values are used to set up your data connection.


Step 3 - log analytics permissions.png


 

Azure Data Explorer:


  1. Open the Azure Data Explorer cluster that will ingest your Defender EASM data or create a new cluster.

  2. Select Databases in the Data section of the left-hand navigation menu.

  3. Select + Add Database to create a database to house your Defender EASM data.


Step 3 - azure data explorer.png


4. Name your database, configure retention and cache periods, then select Create.


step 3 - azure data explorer - name database.png


5. Once your Defender EASM database has been created, click on the database name to open the details page. Select Permissions from the Overview section of the left-hand navigation menu.


step 3 - permissions.png


To successfully export Defender EASM data to Data Explorer, users must create two new permissions for the EASM API: user and ingestor.


 


6. First, select + Add and create a user. Search for “EASM API,” select the value, then click Select.


7. Select + Add to create an ingestor. Follow the same steps outlined above to add the EASM API as an ingestor.


8. Your database is now ready to connect to Defender EASM.


 


Step 4: Add data connections for Log Analytics and/or Azure Data Explorer


Log Analytics:


Users can connect their Defender EASM data to either Log Analytics or Azure Data Explorer. To do so, select “Add connection” from the Data Connections page for the appropriate tool.  The Log Analytics connection addition is covered below.


 


A configuration pane will open on the right-hand side of the Data Connections screen as shown below. The following fields are required:


 


step 4 - add data connection.png



  • Name: enter a name for this data connection. 

  • Workspace ID: for Log Analytics, users enter the Workspace ID associated with their workspace.

  • API key: users enter the API key (primary key) associated with their workspace.

  • Content: users can select to integrate asset data, attack surface insights, or both datasets.

  • Frequency: select the frequency that the Defender EASM connection sends updated data to the tool of your choice. Available options are daily, weekly, and monthly.


Azure Data Explorer:


The Azure Data Explorer connection addition is covered below.


 


A configuration pane will open on the right-hand side of the Data Connections screen as shown below. The following fields are required:


 


step 4 - add data connection  - azure data explorer.png



  • Name: enter a name for this data connection. 

  • Cluster name: the name of your Azure Data Explorer cluster.

  • Region: the region associated with your Azure Data Explorer cluster.

  • Database: the database you created to house your Defender EASM data.

  • Content: users can select to integrate asset data, attack surface insights, or both datasets.

  • Frequency: select the frequency that the Defender EASM connection sends updated data to the tool of your choice. Available options are daily, weekly, and monthly.


 


Step 5: View data and gain security insights


To view the ingested Defender EASM asset and attack surface insight data, you can use the query editor, available by selecting the ”Logs” option from the left-hand menu of the Azure Log Analytics workspace you created earlier. The exported tables are updated at the frequency configured on the data connection.
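If you prefer to query the exported data programmatically rather than in the portal, a minimal sketch using the azure-monitor-query Python SDK could look like the following. The workspace ID and the table name are placeholders; check the actual custom table names that the Defender EASM connection creates in your workspace before running it.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

# Placeholder table name -- replace with one of the custom tables created by
# your Defender EASM data connection (visible under Logs in the workspace).
query = "EasmAsset_CL | take 10"

response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=7),
)

# Print each returned row so the exported asset data can be inspected
for table in response.tables:
    for row in table.rows:
        print(row)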


 


Extending Defender EASM asset and insights data into Azure ecosystem tools like Log Analytics and Data Explorer, via these two new data connectors, enables customers to build contextualized data views that can be operationalized into existing workflows, and gives analysts the toolset to investigate and develop new approaches to attack surface management.


 


Additional resources:


Init Containers in Azure Container Apps : File Processing


This article is contributed. See the original author and article here.

In some scenarios, you might need to preprocess files before they’re used by your application. For instance, you’re deploying a machine learning model that relies on precomputed data files. An Init Container can download, extract, or preprocess these files, ensuring they are ready for the main application container. This approach simplifies the deployment process and ensures that your application always has access to the required data.
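As a rough sketch of what such an init step could look like if you implement it in Python instead of a shell command (the URL, archive name, and paths below are hypothetical placeholders, and the shared path must match the volume mount you configure on the init container):

# init.py -- hypothetical init-container step: download an archive of
# precomputed data files and extract it into the shared volume so the
# main container finds the data ready at startup.
import tarfile
import urllib.request

DATA_URL = "https://example.com/precomputed-data.tar.gz"  # placeholder URL
ARCHIVE_PATH = "/shared/precomputed-data.tar.gz"          # path on the shared volume mount
EXTRACT_DIR = "/shared/data"

# Download the archive to the shared volume
urllib.request.urlretrieve(DATA_URL, ARCHIVE_PATH)

# Extract it so the main application container can read the files directly
with tarfile.open(ARCHIVE_PATH, "r:gz") as archive:
    archive.extractall(EXTRACT_DIR)

print("Init step finished: data extracted to", EXTRACT_DIR)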


 


The example below defines a simple Pod that has an init container which downloads a file from an external resource to a file share that is shared between the init container and the main app container. The main app container runs a php-apache image and serves the landing page using the index.php file downloaded into the shared file space.


 


Initcontainerimage-usecase2.jpg


 


The init container mounts the shared volume at /mydir, and the main application container mounts the shared volume at /var/www/html. The init container runs the following command to download the file and then terminates: wget -O /mydir/index.php http://info.cern.ch


 


Configuration and Dockerfile for the init container:


 



Astha_3-1695053540906.png


 


 



  • Dockerfile for the init container, which downloads an index.php file under /mydir:


 

FROM busybox:1.28 
WORKDIR / 
ENTRYPOINT ["wget", "-O", "/mydir/index.php", "http://info.cern.ch"]

 


 


 


Configuration for main app container:


 



  • Create the main app container, mounting the file share named init on path /var/www/html:


Astha_5-1695053835179.png


 


 



  • Main app container configuration, which uses the php-apache image and serves the index.php file from DocumentRoot /var/www/html:


Astha_4-1695053818985.png


 


Output:


 


Astha_2-1695053488081.png


 


 


 


Logs:


 


Astha_1-1695052932354.png


 

Lesson Learned #428: SqlError Number:229 provisioning DataSync in Azure SQL Database

This article is contributed. See the original author and article here.

We got a new issue where our customer found that provisioning a Sync Group named ‘XXX’ failed with the error: Database re-provisioning failed with the exception ‘SqlException ID: XX-900f-42a5-9852-XXX, Error Code: -2146232060 – SqlError Number:229, Message: SQL error with code 229.’. In the following, I would like to share some details about what the error is and the solution to fix it.


 


Let’s break this down:


 



  1. Sync Group Issue: The Sync Group ‘XXX’ is experiencing a problem.

  2. Database re-provisioning failed: The attempt to reset or reprovision the database in this group failed.

  3. SqlException ID: A unique identifier associated with this particular SQL exception.

  4. Error Code -2146232060: The error code associated with this exception.

  5. SqlError Number 229: This points to the error number from the SQL Server. In SQL Server, error 229 is related to a “Permission Denied” error.


 


Root Cause


 


The SqlError Number 229, “Permission Denied,” is the most telling part of the error message. It means that the process trying to perform the action doesn’t have adequate permissions to carry out its task.


In the context of Sync Groups, several operations occur behind the scenes to ensure data is kept consistent across all nodes. These operations include accessing metadata tables, system-created tracking tables, and executing certain stored procedures. If any part of this chain lacks the necessary permissions, the entire sync process could fail.


 


Solution


The error was ultimately resolved by granting SELECT, INSERT, UPDATE, and DELETE permissions on sync metadata and system-created tracking tables. Moreover, EXECUTE permission was granted on stored procedures created by the service.


 


Here’s a more detailed breakdown:


 




  1. SELECT, INSERT, UPDATE, and DELETE Permissions: These CRUD (Create, Read, Update, Delete) permissions ensure that all basic operations can be performed on the relevant tables. Without these, data synchronization is impossible, as the system can’t read from the source, update the destination, or handle discrepancies.




  2. EXECUTE Permission on Stored Procedures: Stored procedures are sets of precompiled queries that might be executed during the sync process. Without permission to execute these procedures, the sync process might be hindered.
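If you want to script these grants, a minimal sketch using pyodbc is shown below. The server, database, and user names, and the DataSync schema/object names, are placeholders: Data Sync normally creates its metadata, tracking tables, and stored procedures in a schema named DataSync, but verify the actual names in your own member database before running it (the same GRANT statements can also simply be run in SSMS or the Azure portal query editor).

import pyodbc

# Placeholder connection details -- replace with your own values.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<member-database>;"
    "Uid=<admin-user>;Pwd=<admin-password>;Encrypt=yes;"
)

# The grants described above, applied at the schema level for brevity.
# [DataSync] and [SyncServiceUser] are assumptions -- adjust them to the schema
# that holds the sync metadata/tracking objects and to the user the sync
# service connects with.
grants = [
    "GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::[DataSync] TO [SyncServiceUser];",
    "GRANT EXECUTE ON SCHEMA::[DataSync] TO [SyncServiceUser];",
]

conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()
for statement in grants:
    cursor.execute(statement)
conn.close()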




 


Conclusion


 


Errors like this SqlException are more than just roadblocks; they’re opportunities for us to delve deep into our systems, understand their intricacies, and make them more robust. By understanding permissions and ensuring that all processes have the access they need, we can create a more seamless and error-free synchronization experience. Always remember to regularly audit permissions, especially after updates or system changes, to prevent such issues in the future.


 


If you need more information about how DataSync works at the database level, enabling SQL Profiler (using, for example, the SQL Profiler extension in Azure Data Studio) will let you see a lot of the internal details.


 


Enjoy!