Microsoft 365 Copilot drove up to 353% ROI for small and medium businesses—new study

This article is contributed. See the original author and article here.

As a small or medium-sized business (SMB) leader, you’ve likely heard a lot about generative AI and how it’s transforming businesses of all sizes. To better understand how AI is helping businesses grow and compete, Microsoft commissioned Forrester Consulting to study the potential return on investment (ROI) of Microsoft 365 Copilot for SMBs.

The post Microsoft 365 Copilot drove up to 353% ROI for small and medium businesses—new study appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Microsoft named a Leader in the 2024 Gartner® Magic Quadrant™ for Unified Communications as a Service for the sixth year in a row

This article is contributed. See the original author and article here.

We’re honored to announce that Microsoft has once again been recognized as a Leader in the 2024 Gartner® Magic Quadrant™ for Unified Communications as a Service (UCaaS), Worldwide. This is the sixth year in a row we’ve received this recognition, and we’re thrilled to be positioned highest on the Ability to Execute axis and furthest on the Completeness of Vision axis.

The post Microsoft named a Leader in the 2024 Gartner® Magic Quadrant™ for Unified Communications as a Service for the sixth year in a row appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Improve operational efficiency with Cross-Location Open Shifts

This article is contributed. See the original author and article here.

We’re excited to introduce the cross-location shifts feature in Microsoft Shifts app, designed to enhance flexibility and efficiency for both frontline managers and workers. Currently in public preview, this feature empowers businesses to better manage staffing across multiple locations while giving employees more control over their schedules.


 


For Frontline Managers


With cross-location shifts, managers can offer open shifts across various teams and locations, helping to balance workforce needs, fill last-minute gaps, and improve customer satisfaction. By turning on the cross-location feature in Shifts settings, managers can post open shifts that employees from other locations can request, ensuring their store or site is always fully staffed.


 


Managers are notified when employees from other locations request shifts and can easily approve or decline those requests. Once a request is approved, workers from other locations appear in the schedule as external employees, making it easy to track staff across locations.




 


 


For Frontline Workers


The cross-location feature provides more flexibility for employees, allowing them to pick up open shifts at different locations that suit their schedules. Workers can view open shifts at other sites and submit a request to pick up the shift. Once approved by the manager at that location, the shift will appear in their schedule.




 


Getting Started


For IT Admins: To enable cross-location open shifts in your organization, follow the steps outlined here: Set up open shifts across locations in Shifts for your frontline – Microsoft 365 for frontline workers | Microsoft Learn


For Managers and Workers: Learn more about using this feature here: Use open shifts across locations in Shifts – Microsoft Support


What’s coming next:



  1. An employee’s home manager will be able to opt in to approve requests when their employees work at other locations, in addition to the approval from the target location’s manager.

  2. An employee’s home manager will be able to see the location name in the team schedule when their employees are working at other locations.


This powerful new feature helps businesses optimize staffing, enhance worker flexibility, and improve overall operational efficiency. Stay tuned as we refine this feature during its public preview phase, and we encourage you to share your feedback!


Please take a moment to share your feedback/questions on this feature via this brief survey (https://aka.ms/CrossLocationShifts) and include your email for any further queries. We’re eager to connect with you!



Performing ETL in Real-Time Intelligence with Microsoft Fabric

This article is contributed. See the original author and article here.

Introduction


In today’s data-driven world, the ability to act on data as soon as it’s generated is crucial for businesses to make informed decisions quickly. Organizations seek to harness the power of up-to-the-minute data to drive their operations, marketing strategies, and customer interactions.


 


This becomes challenging in the world of real-time data, where it is not always possible to perform every transformation while the data is being streamed. You therefore need a flow that transforms the data quickly without impacting the stream itself.


 


This is where Microsoft Fabric comes into play. Fabric offers a comprehensive suite of services including Data Engineering, Data Factory, Data Science, Real-Time Intelligence, Data Warehouse, and Databases. But today, we are going to focus on Real-Time Intelligence.


 


Use-Cases


This setup is useful in scenarios where transformed data is needed for downstream processing or analytical workloads. One example is enabling OneLake availability on a KQL table so that the data can be accessed by other Fabric engines, such as Notebooks and Lakehouse, for training ML models or running analytics.


 


As another example, suppose your streaming data has a timestamp column whose format you want to standardize. You can use an update policy to transform the timestamp format and store the result.
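
As a minimal sketch (using hypothetical names: a source table called Telemetry with a string column EventTimeRaw, neither of which is part of the sample stream used later), the transformation inside such an update policy query could look like this:

    // Parse the raw timestamp string and re-emit it in a standard format.
    // Telemetry and EventTimeRaw are illustrative names only.
    Telemetry
    | extend EventTime = todatetime(EventTimeRaw)
    | extend EventTimeFormatted = format_datetime(EventTime, 'yyyy-MM-dd HH:mm:ss')
    | project-away EventTimeRaw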


 


Overview


Fabric Real-Time Intelligence uses the KQL database as its datastore, which is designed to handle real-time data streams efficiently. After ingestion, you can use Kusto Query Language (KQL) to query the data in the database.
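
For example, assuming a table named RTITable with the Bicycles sample columns used later in this walkthrough, a simple KQL query over the ingested data might look like this:

    // Average number of empty docks per neighbourhood (illustrative query).
    RTITable
    | summarize AvgEmptyDocks = avg(No_Empty_Docks) by Neighbourhood
    | order by AvgEmptyDocks desc
    | take 10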


 


A KQL table is a Fabric item that is part of a KQL database. Both of these entities are housed within an Eventhouse. An Eventhouse is a workspace of databases that can be shared across a project. It allows you to manage multiple databases at once, sharing capacity and resources to optimize performance and cost. Eventhouses provide unified monitoring and management across all databases, as well as per database.
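
If you want to see what an Eventhouse contains from a KQL queryset, the following management commands list its databases and the tables in the currently selected database (a small sketch; the output depends on your environment):

    // List the databases in the Eventhouse.
    .show databases

    // List the tables in the current database.
    .show tables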




 


Figure 1: Hierarchy of Fabric items in an Eventhouse


 


Update policies are automated processes activated when new data is added to a table. They automatically transform the incoming data with a query and save the result in a destination table, removing the need for manual orchestration. A single table can have multiple update policies for various transformations, saving data to different tables simultaneously. These target tables can have distinct schemas, retention policies, and other configurations from the source table.


 


Scope


In this blog, we walk through a scenario where we perform data enrichment on the data that lands in a KQL table. In this case we simply drop the columns we don’t need, but you can apply any other transformation that KQL supports.


Here, a real-time stream pushes data to a KQL table. Once the data is loaded into the source table, an update policy drops the columns that are not needed and pushes the data of interest from the source table to the destination table.


  


 


Prerequisites


Before starting, you need a Microsoft Fabric workspace with an Eventhouse and a KQL database already created; this is the database you will select as the destination when configuring the event stream below.
 


Creating sample data stream



  1. In the Real-Time Intelligence experience, create a new event stream.

  2. Under source, add new source and select sample data.



  3. Continue configuring the stream. I am using the Bicycles sample data stream in this blog.

  4. Select Direct ingestion as the Data Ingestion Mode for destination.

  5. For the destination, select your workspace and the KQL database you created as a prerequisite to this exercise.

  6. A pop-up appears to configure the database details; continue by configuring the table where the data from the stream will land.


 


Configuring KQL Table with Update Policy



  1. Open the Eventhouse page in Fabric. There you should now be able to preview the data that is being ingested from the sample data stream.


  2. Create a new destination table. I used the following KQL to create the new table (destination):

    .create table RTITableNew (
        BikepointID: string, Street: string, Neighbourhood: string, No_Bikes: int, No_Empty_Docks: int )


  3. Under the Database tab, click New and select Table Update Policy.
     






  4. You can edit the existing policy format or paste the one below that I used:


    NOTE: RTITable is source and RTITableNew is the destination table.

    .alter table RTITableNew policy update ```[
      {
        "IsEnabled": true,
        "Source": "RTITable",
        "Query": "RTITable | project BikepointID=BikepointID, Street=Street, Neighbourhood=Neighbourhood, No_Bikes=No_Bikes, No_Empty_Docks=No_Empty_Docks",
        "IsTransactional": true,
        "PropagateIngestionProperties": false,
        "ManagedIdentity": null
      }
    ]```


    The above policy drops the Longitude and Latitude columns and stores the rest of the columns in the destination table. You can do more transformations as per your requirements, but the workflow remains the same.


  5. After running the above command, your destination table will start populating with the new data as soon as the source table gets data. To review the policy on the destination table, you can run the following command:

     .show table RTITableNew policy update
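
To confirm that data is actually flowing into the destination table, you can also run a couple of quick checks (illustrative queries, using the table names from this walkthrough):

    // Row count in the destination table; it should grow as new data lands in the source.
    RTITableNew
    | count

    // Peek at a few transformed rows to confirm Longitude and Latitude were dropped.
    RTITableNew
    | take 5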



 


Conclusion


To summarize, we took a real-time data stream, stored the data in a KQL database, performed data enrichment on the data, and stored the result in a destination table. This flow caters to scenarios where you want to perform processing on the data once it is ingested from the stream.


 


Further Reading and Resources


Common scenarios for using table update policies – Kusto | Microsoft Learn


Create a table update policy in Real-Time Intelligence – Microsoft Fabric | Microsoft Learn

The Future of AI: Oh! One! Applying o1 Models to Business Data

This article is contributed. See the original author and article here.


In mid-September 2024, OpenAI introduced a groundbreaking family of models known as o1, often referred to as “Ph.D. models” due to their advanced capabilities. Now accessible through Azure OpenAI Service, o1 represents a significant leap in artificial intelligence, particularly in reasoning and problem-solving tasks. We’ve seen the o1 models solve problems like counting the number of R’s in the word “strawberry” and logic problems – but what does this mean for businesses?

 

One of the most remarkable features of o1 is its ability to do math and perform complex data analysis. Unlike previous models, o1 can calculate aggregate statistics, detect correlations across multiple datasets, and provide deep insights that were previously unattainable. To test its mettle, I decided to run the largest of the o1 models, o1-preview, through its paces, using datasets similar to those it might see in business scenarios. Note that the data used here is entirely synthetic, but it is patterned after real datasets that businesses might use.

 


 



First, I tried a retail scenario. I took some example sales and staffing data, the kind a real store might have over a month, and fed it into o1. I wanted to see if it could help figure out what’s driving sales and how staffing levels impact performance. Well, o1 didn’t disappoint. It crunched the numbers and pointed out that our sales were lower on weekends compared to other days. The funny thing is, the data didn’t label which days were weekends, but o1 figured it out anyway. It found the correlations between the sales and staffing datasets, even though the only obvious commonality between them was the time period they covered. It suggested that maybe having fewer staff scheduled over the weekend would lead to lower sales, and even recommended upping the weekend staff or adding self-checkout kiosks. It felt like having a seasoned retail analyst giving me personalized advice.

 

Next up, I wanted to see how o1 handled financial data. I used a fictional company’s financial statements, structured just like real income statements, balance sheets, and cash flow statements, and asked o1 to create a sell-side research report. The results were impressive. It put together a detailed report, complete with an investment thesis and justification. It calculated growth rates, analyzed profit margins, and looked into financial ratios like price-to-earnings ratios. It justified its price target with solid analysis. Clearly, financial analysts can use this tool to make their jobs easier.

 

Then I decided to try something in the entertainment sector. I gave o1 some sample ticketing data from a fictional event, just like the kind of data a real concert might generate. I wanted to find out who spent the most on tickets, and who bought the highest quantity. o1 not only identified the top spenders but also analyzed their buying habits and provided suggestions on how to encourage more high-volume ticket sales in the future. It was pretty cool to see how it turned raw numbers into real marketing insights. Even though I was using fictional datasets, o1 showed me its potential to make a real impact on businesses. It can help make better decisions by uncovering deeper insights, save time by handling complex tasks, spark innovation with its creative thinking, and help understand customers better to improve engagement.

 

Lastly, just for fun, I tested o1’s coding abilities. I asked o1 to create a simple HTML page with a playable Space Invaders game written entirely in JavaScript. To my surprise, it generated all the code I needed. When I ran it in my browser, there was the game, fully functional and ready to play. It worked on the first try! It was like magic, and I didn’t have to write a single line of code myself.

 

o1 has proven to be remarkably good at these sorts of technical tasks, but it turns out that its reasoning ability also extends to the creative realm. In fact, I fed it the transcript of the YouTube video I had created and prompted it to write this blog post (at least all the paragraphs above this one) – and it did! It took me four more little prompts to adjust the output to the tone I was looking for, but in just a few minutes, I had what I needed. So, as writing this blog post is one of my own business activities, the o1 model has now made a genuine impact on my business.

 



Why not get started?