Discover the Future of Data Engineering with Microsoft Fabric for Technical Students & Entrepreneurs

This article is contributed. See the original author and article here.

Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. This makes it an ideal platform for technical students and entrepreneurial developers looking to streamline their data engineering and analytics workflows.



High-Level Overview of Microsoft Fabric


Microsoft Fabric brings together new and existing components from Power BI, Azure Synapse, and Azure Data Factory into a single integrated environment. These components are presented in various customized user experiences: Data Engineering, Data Factory, Data Science, Data Warehouse, Real-Time Analytics, and Power BI, all on a shared SaaS foundation.

[Image: Fabric.png]


This integration provides several advantages:



  • One of the most extensive ranges of deeply integrated analytics in the industry.

  • Shared experiences that are familiar and easy to learn across all workloads.

  • Developers can easily access and reuse all assets.

  • A unified data lake that allows you to retain the data where it is while using your preferred analytics tools.

  • Centralized administration and governance across all experiences.



Benefits of Learning and Using Microsoft Fabric


Learning and using Microsoft Fabric can provide numerous benefits. Here are a few key ones:
Simplicity: With Fabric, you don’t need to piece together different services from multiple vendors. Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs.
Efficiency: Fabric allows creators to concentrate on producing their best work, freeing them from the need to integrate, manage, or understand the underlying infrastructure that supports the experience.
Scalability: Microsoft Fabric is a powerful platform that offers scalability, resilience, simplified development, fault tolerance, and support for microservices, making it an ideal choice for businesses aiming to stay agile and competitive in today’s digital landscape.



Microsoft Learn Resources for Microsoft Fabric


Microsoft Learn offers a variety of resources to help you get started with Microsoft Fabric. Here are a few key ones:



  1. Get started with Microsoft Fabric – Training: This learning path includes 11 modules that cover everything from an introduction to end-to-end analytics using Microsoft Fabric to administering Microsoft Fabric.

  2. Microsoft Fabric documentation: This comprehensive documentation provides an overview of Microsoft Fabric, its capabilities, and how to use it.

  3. Learn Live: Get started with Microsoft Fabric: Online webinars and an on-demand series showcasing Microsoft Fabric.

  4. Get started documentation: This page provides a variety of documentation to help you get started with Microsoft Fabric.

  5. Exam DP-600: Implementing Analytics Solutions Using Microsoft Fabric (beta): This exam will be available in January 2024. It measures your ability to plan, implement, and manage a solution for data analytics; prepare and serve data; implement and manage semantic models; and explore and analyze data.


In conclusion, Microsoft Fabric is a game-changing platform that brings together a variety of Azure tools and services under one unified umbrella. Its core features empower businesses and data professionals to make smarter, data-driven decisions.

So, whether you’re a technical student looking to expand your skillset or an entrepreneurial developer aiming to streamline your data workflows, Microsoft Fabric is definitely worth considering.

Lesson Learned #456: Invalid Key Column Type Error in Azure SQL Database DataSync

This article is contributed. See the original author and article here.

One such error for Azure SQL Database users employing DataSync is: “Database provisioning failed with the exception ‘Column is of a type that is invalid for use as a key column in an index.’” This article aims to dissect this error, providing insights and practical solutions for database administrators and developers.


 


Understanding the Error:


 


This error signifies a mismatch between the column data type used in an index and what is permissible within Azure SQL DataSync’s framework. Such mismatches can disrupt database provisioning, a critical step in synchronization processes.


 


Data Types and Index Restrictions in DataSync:


 


Azure SQL Data Sync imposes specific limitations on data types and index properties. Notably, it does not support indexes on nvarchar(max) columns, which is exactly what our customer's schema used. Additionally, primary keys cannot be of types such as sql_variant, binary, varbinary, image, or xml. For the full list, see What is SQL Data Sync for Azure? in the Azure SQL Database documentation on Microsoft Learn.


 


Practical Solutions:


 



  1. Modify Data Types: If feasible, alter the data type from nvarchar(max) to a bounded variant such as nvarchar(450), which fits within index key size limits.

  2. Index Adjustments: Review your database schema and modify or remove indexes that include unsupported column types.

  3. Exclude Problematic Columns: Consider omitting columns with unsupported data types from your DataSync synchronization groups.
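Before applying any of these fixes, it can help to locate the offending columns. The following is a diagnostic sketch against the system catalog, built from the type restrictions listed above; the commented ALTER statement at the end uses a hypothetical table and column name:

```sql
-- Find index key columns whose types Data Sync rejects.
-- key_ordinal > 0 restricts to key columns (included columns have 0);
-- max_length = -1 identifies nvarchar(max).
SELECT t.name  AS table_name,
       i.name  AS index_name,
       c.name  AS column_name,
       ty.name AS type_name
FROM sys.index_columns AS ic
JOIN sys.indexes AS i  ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns AS c  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
JOIN sys.tables  AS t  ON t.object_id = ic.object_id
WHERE ic.key_ordinal > 0
  AND (ty.name IN ('sql_variant', 'binary', 'varbinary', 'image', 'xml')
       OR (ty.name = 'nvarchar' AND c.max_length = -1));

-- If feasible, shrink the column to a bounded type (hypothetical names):
-- ALTER TABLE dbo.Orders ALTER COLUMN Notes nvarchar(450) NULL;
```

Review each hit against your sync requirements before altering types, since shrinking a column can truncate existing data.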


 


 

Windows Containers on AKS Customer Stories

This article is contributed. See the original author and article here.

We have published a new page on Azure to highlight Windows Containers on AKS customer stories, featuring M365 (supporting products like Office and Teams), Forza (Xbox Game Studios), Relativity, and Duck Creek.


 


If you are looking for a way to modernize your Windows applications, streamline your development process, and scale your business with Azure, you might be interested in learning how other customers have achieved these goals by using Windows Containers on Azure Kubernetes Service (AKS). 


 




 


 


Azure Kubernetes Service (AKS) is a fully managed Kubernetes service that allows you to run your Windows applications alongside Linux applications in the same cluster, with seamless integration and minimal code modifications. Windows Containers on AKS offers a number of benefits, such as:



  • Reduced infrastructure and operational costs 

  • Improved performance and reliability 

  • Faster and more frequent deployments 

  • Enhanced security and compliance 

  • Simplified management and orchestration 


 


Stay tuned for new stories that will be published soon, featuring customers from new industries and with new scenarios using Windows Containers. 


 


In the meantime, we invite you to check out the Windows Container GitHub repository, where you can find useful resources, documentation, samples, and tools to help you get started. You can also share your feedback, questions, and suggestions with the Windows Container product team and the community of users and experts. 

Transform the way work gets done with Microsoft Copilot in Dynamics 365 Business Central

This article is contributed. See the original author and article here.

In the rapidly evolving AI landscape, Microsoft Dynamics 365 Business Central is taking the lead with innovations that have equipped more than 30,000 small and medium-sized businesses to achieve success. Powered by next-generation AI, Microsoft Copilot offers new ways to enhance workplace efficiency, automate mundane tasks, and unlock creativity. At a time when nearly two in three people say they struggle with having the time and energy to do their job, Copilot helps to free up capacity and enables employees to focus on their most meaningful work.1 

Dynamics 365 Business Central brings the power of AI to small and medium-sized businesses to help companies work smarter, adapt faster, and perform better. AI in Dynamics 365 Business Central improves the way work gets done, enabling you to:

  • Get answers quickly and easily using natural language.
  • Save time by automating tedious, repetitive tasks.
  • Spark creativity with creative content ideas.
  • Anticipate and overcome business challenges.

Reclaim time for important work

In a small or medium-sized business, there is often a lot to do and not many people to help get it all done, so it’s important to make the most of your limited resources to accomplish your goals. Everyday activities like tracking down documents and bringing new employees up to speed can drain your valuable time. What if you had an AI-powered assistant ready to help you find exactly what you need without the hassle?

Available in early 2024, conversational chat using Copilot in Dynamics 365 Business Central helps you answer questions quickly and easily, locate records faster, and even learn new skills—all using natural language. Save time and effort by navigating to documents without having to use traditional menus, and rapidly onboard new users with answers to questions on how, when, or why to do things. Copilot is your everyday AI companion, helping you to speed through tasks, build momentum, and free time for your most impactful work. 

Streamline month-end tasks with enhanced bank reconciliation

Reconciling bank statement transactions with your financial system has often been a tedious monthly chore. Meticulously matching every line item to new or existing accounting entries takes time (and isn’t the most exciting way to spend an afternoon). In the past, Business Central helped by auto-matching many of the simple one-to-one transactions, but the logic wasn’t able to decipher more complex scenarios such as when multiple charges were paid in a single transaction.

Now, Copilot in Business Central makes bank reconciliation even easier by analyzing bank statements that you import into Business Central, matching more transactions, and proposing entries for transactions that weren’t auto-matched. By comparing and interpreting transaction descriptions, amounts, dates, and patterns across fields, Copilot can help you improve the accuracy of your bank reconciliation while reducing manual effort.

Unlock creativity with marketing text suggestions

Copilot in Business Central helps product managers save time and drive sales with compelling AI-generated marketing text suggestions. Using key attributes like color and material, Copilot can create product descriptions in seconds tailored to your preferred tone, format, and length. Once you’ve made any adjustments, you can easily publish to Shopify or other ecommerce platforms with just a few clicks. Discover how Copilot can help you banish writer’s block and launch new products with ease.

Boost customer service with inventory forecasting

Effective inventory management is crucial in a competitive business environment as it can significantly influence a company’s success and customer retention. This process involves balancing customer service with cost control. Maintaining low inventory reduces working capital, but risks missing sales due to stock shortages. Using AI, the Sales and Inventory Forecast extension uses past sales data to forecast future demand, helping to prevent stockouts. Once a shortfall is identified, Business Central streamlines the replenishment process by generating vendor requests, helping you keep your customers happy by fulfilling their orders on time, every time.  

Reduce risk with late payment prediction

Managing receivables effectively is vital for a business’s financial wellbeing. With the Late Payment Prediction extension, you can reduce outstanding receivables and refine your collections approach by forecasting if outstanding sales invoices are likely to be paid on time. For instance, if a payment is anticipated to be delayed, you could modify the payment terms or method for that customer. By proactively addressing potential late payments and adapting accordingly, you can minimize overdue receivables, reduce risk of non-payment, and ultimately improve your financial performance.

Improve financial stability with Cash Flow Analysis

Powered by AI, Business Central can create a comprehensive Cash Flow Analysis to help you monitor your company’s cash position. Cash flow is a critical indicator of a company’s solvency, and cash flow analysis is an important future-focused planning tool that helps you maintain control over your financial health and make proactive adjustments to meet all your financial commitments. With insights from Business Central, you can pivot quickly to safeguard your company’s fiscal wellbeing, such as obtaining loans to cover cash shortfalls or cutting back on credit when you have surplus cash.

Work smarter with Copilot in Business Central

Copilot in Business Central gives your company an edge with AI-powered innovations that are a catalyst for unleashing human potential, fostering creativity, and driving efficiency in ways previously unimaginable. The integration of AI into everyday business processes is not just about staying ahead in a competitive market, it’s about redefining what’s possible in the workplace. With Business Central, your company is empowered to navigate today’s complex business environment with agility, precision, and a renewed focus on what truly matters.



Sources

1 Microsoft Work Trend Index Annual Report, May 2023

The post Transform the way work gets done with Microsoft Copilot in Dynamics 365 Business Central appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Azure Managed Lustre with Automatic Synchronisation to Azure BLOB Storage

This article is contributed. See the original author and article here.

Introduction


This blog post walks through how to set up an Azure Managed Lustre Filesystem (AMLFS) that will automatically synchronise to an Azure BLOB Storage container. The synchronisation is achieved using the Lustre HSM (Hierarchical Storage Management) interface combined with the Robinhood policy engine and a tool that reads the Lustre changelog and synchronises metadata with the archived storage. The lfsazsync repository on GitHub contains a Bicep template to deploy and set up a virtual machine for this purpose.


 



Disclaimer: The lfsazsync deployment is not a supported Microsoft product; you are responsible for the deployment and operation of the solution. There are updates that need applying to AMLFS, which require a Support Request to be raised through the Azure Portal. These updates could affect the stability of AMLFS, and customers requiring the same level of SLA should speak to their Microsoft representative.



Initial Deployment


The following is required before running the lfsazsync Bicep template:



  • Virtual Network

  • Azure BLOB Storage Account and container (HNS is not supported)

  • AMLFS deployed without HSM enabled


The lfsazsync repository contains a test/infra.bicep example to create the required resources:


 


[Image: lfsazsync-prerequisite.jpg]


 


To deploy, first create a resource group, e.g.



# TODO: set the variables below
resource_group=
location=
az group create --name $resource_group --location $location

 


Then deploy into this resource group:



az deployment group create --resource-group $resource_group --template-file test/infra.bicep

 



Note: The Bicep file has parameters for names, IP ranges, etc. that should be set if you do not want the default values.



 


Updating the AMLFS settings


Once deployment is complete, navigate to the Azure Portal, locate the AMLFS resource and click on “New Support Request”. The following shows the suggested request to get AMLFS updated:


 


[Image: amlfs-support-request.jpg]


 


The lctl commands needed are listed here.


 


Deploying Azure BLOB Storage Synchronisation


The lfsazsync deployment sets up a single virtual machine for all tasks. The HSM copytools could be run on multiple virtual machines to increase transfer performance. The bandwidth for archiving and retrieval is constrained to approximately half the network bandwidth available to the virtual machine, because the same network is used both for accessing the Lustre filesystem and for accessing Azure Storage. This should be considered when choosing the virtual machine size. The virtual machine sizes and their expected network performance are listed here.


 


The Bicep template has the following parameters:


 


| Parameter | Description |
| --- | --- |
| subnet_id | The ID of the subnet to deploy the virtual machine to |
| vm_sku | The SKU of the virtual machine to deploy |
| admin_user | The username of the administrator account |
| ssh_key | The public key for the administrator account |
| lustre_mgs | The IP address/hostname of the Lustre MGS |
| storage_account_name | The name of the Azure storage account |
| storage_container_name | The container to use for synchronising the data |
| storage_account_key | A SAS key for the storage account |
| ssh_port | The port used by sshd on the virtual machine |
| github_release | Release tag where the robinhood and lemur builds will be downloaded from |
| os | The OS to use for the VM (options: ubuntu2004 or almalinux87) |

 


The SAS key can be generated using the following Azure CLI command:



# TODO: set the account name and container name below
account_name=
container_name=

start_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
expiry_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ" --date "next month")

az storage container generate-sas \
    --account-name $account_name \
    --name $container_name \
    --permissions rwld \
    --start $start_date \
    --expiry $expiry_date \
    -o tsv


 


The following Azure CLI command can be used to get the subnet ID:



# TODO: set the variables below
resource_group=
vnet_name=
subnet_name=

az network vnet subnet show --resource-group $resource_group --vnet-name $vnet_name --name $subnet_name --query id --output tsv


 


The following Azure CLI command can be used to deploy the Bicep template (as an alternative to setting environment variables, the parameters could be set in a parameters.json file):



# TODO: set the variables below
resource_group=
subnet_id=
vmsku="Standard_D32ds_v4"
admin_user=
ssh_key=
lustre_mgs=
storage_account_name=
storage_container_name=
storage_sas_key=
ssh_port=
github_release="v1.0.1"
os="almalinux87"

az deployment group create \
    --resource-group $resource_group \
    --template-file lfsazsync.bicep \
    --parameters \
        subnet_id="$subnet_id" \
        vmsku=$vmsku \
        admin_user="$admin_user" \
        ssh_key="$ssh_key" \
        lustre_mgs=$lustre_mgs \
        storage_account_name=$storage_account_name \
        storage_container_name=$storage_container_name \
        storage_sas_key="$storage_sas_key" \
        ssh_port=$ssh_port \
        github_release=$github_release \
        os=$os
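If a parameters file is preferred, the same values can be captured in JSON. This is a sketch with placeholder values; the parameter names mirror the deployment command above and should be checked against lfsazsync.bicep:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "subnet_id": { "value": "<subnet resource ID>" },
    "vmsku": { "value": "Standard_D32ds_v4" },
    "admin_user": { "value": "<admin username>" },
    "ssh_key": { "value": "<public key>" },
    "lustre_mgs": { "value": "<MGS IP address or hostname>" },
    "storage_account_name": { "value": "<account name>" },
    "storage_container_name": { "value": "<container name>" },
    "storage_sas_key": { "value": "<SAS token>" },
    "ssh_port": { "value": 22 },
    "github_release": { "value": "v1.0.1" },
    "os": { "value": "almalinux87" }
  }
}
```

The file is then passed with az deployment group create --resource-group $resource_group --template-file lfsazsync.bicep --parameters @parameters.json.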


 


After this call completes the virtual machine will be deployed, although it will take more time to install the software and import the metadata from Azure BLOB storage into the Lustre filesystem. Progress can be monitored in the /var/log/cloud-init-output.log file on the virtual machine.


 


Monitoring


The install sets up three systemd services: lhsmd, robinhood, and lustremetasync. The log files are located here:



  • lhsmd: /var/log/lhsmd.log

  • robinhood: /var/log/robinhood*.log

  • lustremetasync: /var/log/lustremetasync.log
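A quick way to check on the synchronisation stack is to query the three services and tail their logs. This is a sketch intended for the lfsazsync VM; it prints "unknown" for a service where systemctl is unavailable or the unit is not active:

```shell
# Report the state of each sync service.
status_report=""
for svc in lhsmd robinhood lustremetasync; do
    state=$(systemctl is-active "$svc" 2>/dev/null) || state=unknown
    status_report="${status_report}${svc}: ${state}
"
done
printf '%s' "$status_report"

# Show recent activity from the logs listed above (ignore missing files):
tail -n 20 /var/log/lhsmd.log /var/log/lustremetasync.log 2>/dev/null || true
```

Anything other than "active" in the report is worth investigating in the corresponding log file before assuming archives are flowing.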


 


Default archive settings


The synchronisation parameters can be controlled through the Robinhood config file, /opt/robinhood/etc/robinhood.d/lustre.conf. Below are some of the default settings and their locations in the config file:


 



| Name | Default | Location |
| --- | --- | --- |
| Archive interval | 5 minutes | lhsm_archive_parameters.lhsm_archive_trigger |
| Rate limit | 1000 files | lhsm_archive_parameters.rate_limit.max_count |
| Rate limit interval | 10 seconds | lhsm_archive_parameters.rate_limit.period_ms |
| Archive threshold | last modified time > 30 minutes | lhsm_archive_parameters.lhsm_archive_rules |
| Release trigger | 85% of OST usage | lhsm_archive_parameters.lhsm_release_trigger |
| Small file release | last access > 1 year | lhsm_archive_parameters.lhsm_release_rules |
| Default file release | last access > 1 day | lhsm_archive_parameters.lhsm_release_rules |
| File remove | removal time > 5 minutes | lhsmd.lhsmd_remove_rules |

 


To update the config file, edit it and then restart the Robinhood service: systemctl restart robinhood.
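The edit-and-restart pattern can be sketched as follows. The key names inside lustre.conf are assumptions here, so check your deployed file before editing; the sketch works on a scratch copy so it can be tried anywhere:

```shell
# Create a scratch stand-in for /opt/robinhood/etc/robinhood.d/lustre.conf
# (the block and key names below are illustrative, not the verified syntax).
conf=$(mktemp)
cat > "$conf" <<'EOF'
lhsm_archive_parameters {
    lhsm_archive_trigger = periodic(5min);
}
EOF

# Raise the periodic archive trigger from 5 to 15 minutes:
sed -i 's/periodic(5min)/periodic(15min)/' "$conf"
grep periodic "$conf"

# On the real VM, edit /opt/robinhood/etc/robinhood.d/lustre.conf instead,
# then apply the change with: sudo systemctl restart robinhood
```

Restarting the service is what makes Robinhood re-read the file, so a config edit alone has no effect until that step runs.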


The lustremetasync service processes the Lustre ChangeLog continuously, so actions happen almost immediately; after a large burst of IO it may take a few minutes to catch up. The following operations are handled:


 




  • Create/delete directories


    Directories are created in BLOB storage as empty objects named after the directory, with metadata marking the object as a directory. The same object is deleted when the directory is removed on the filesystem.




  • Create/delete symbolic links


    Symbolic links are created in BLOB storage as empty objects named after the link, with metadata marking the object as a symbolic link and recording the path it links to. The same object is deleted when the link is removed on the filesystem.




  • Moving files or directories


    Moving files or directories requires everything under the moved path to be restored to the Lustre filesystem first. The files are then marked as dirty in their new location and the old objects are deleted from BLOB storage. Robinhood will handle archiving the files again in their new location.




  • Updating metadata (e.g. ownership and permissions)


    The metadata is only updated directly for archived files that have not been modified; for modified files, the metadata is set when Robinhood updates the archived copy.




 

