by Contributed | Jun 4, 2024 | Technology
This article is contributed. See the original author and article here.
More often than we'd like to admit, customers come to us with cases where a Consumption logic app was unintentionally deleted. Although you can somewhat easily recover a deleted Standard logic app, you can’t get the run history back, nor do the triggers use the same URL. For more information, see GitHub – Logic-App-STD-Advanced Tools.
However, for a Consumption logic app, this process is much more difficult and might not always work correctly. The definition for a Consumption logic app isn’t stored in any accessible Azure storage account, nor can you run PowerShell cmdlets for recovery. So, we highly recommend that you have a repository or backup to store your current work before you continue. By using Visual Studio, DevOps repos, and CI/CD, you have the best tools to keep your code updated and your development work secure for a disaster recovery scenario. For more information, see Create Consumption workflows in multitenant Azure Logic Apps with Visual Studio Code.
Despite these challenges, one possibility exists for you to retrieve the definition, although you can’t recover the workflow run history or the trigger URL. A few years ago, one of our partners documented the following technique and described it as a “recovery” method:
Recovering a deleted Logic App with Azure Portal – SANDRO PEREIRA BIZTALK BLOG (sandro-pereira.com)
We’re now publishing the approach as a blog post, with the disclaimer that this method doesn’t completely recover your Consumption logic app; it only retrieves the workflow definition. The associated records aren’t restored because they are permanently destroyed, as the warnings describe when you delete a Consumption logic app in the Azure portal.
Recommendations
We recommend applying locks to your Azure resources and having some form of Continuous Integration/Continuous Deployment (CI/CD) solution in place. Locking your resources is extremely important and easy, not only to limit user access, but also to protect resources from accidental deletion.
To lock a logic app, on the resource menu, under Settings, select Locks. Create a new lock, and select either Read-only or Delete to prevent edit or delete operations. If anyone tries to delete the logic app, either accidentally or on purpose, they get the following error:

For more information, see Protect your Azure resources with a lock – Azure Resources Manager.
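If you prefer automation over the portal, the same lock can be created through the Azure Resource Manager REST API. The following Python sketch only builds the request URL and body for a CanNotDelete lock on a Consumption logic app; the subscription, resource group, and workflow names are placeholders, and nothing is sent:

```python
# Sketch only: builds (but does not send) the ARM REST call that creates a
# CanNotDelete lock on a Consumption logic app. All names are placeholders.
def build_lock_request(subscription_id, resource_group, logic_app_name,
                       lock_name="do-not-delete", level="CanNotDelete"):
    scope = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Logic/workflows/{logic_app_name}"
    )
    url = (
        f"https://management.azure.com{scope}"
        f"/providers/Microsoft.Authorization/locks/{lock_name}"
        "?api-version=2016-09-01"
    )
    body = {
        "properties": {
            "level": level,
            "notes": "Protect the logic app from accidental deletion",
        }
    }
    return url, body
```

You would send this as a PUT request with an Azure access token; the same lock can also be created with the portal steps above or in an ARM template.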
Limitations
- If the Azure resource group is deleted, the activity log is also deleted, which means that no recovery is possible for the logic app definition.
- Run history won’t be available.
- The trigger URL will change.
- Not all API connections are restored, so you might have to recreate them in the workflow designer.
- If API connections are deleted, you must create new ones.
- If too much time has passed, the change history might no longer be available.
Procedure
- In the Azure portal, browse to the resource group that contained your deleted logic app.
- On the resource group menu, select Activity log.
- In the operations table, in the Operation name column, find the operation named Delete Workflow, for example:

- Select the Delete Workflow operation. On the pane that opens, select the Change history tab. This tab shows what was modified, for example, versioning in your logic app.

As previously mentioned, if the Changed Property column doesn’t contain any values, retrieving the workflow definition is no longer possible.
- In the Changed Property column, select .
You can now view your logic app workflow’s JSON definition.

- Copy this JSON definition into a new logic app resource.
- Although there’s no button that restores this definition for you, the workflow should load without problems.
- You can also use this JSON workflow definition to create a new ARM template and deploy the logic app to an Azure resource group, either with new connections or by referencing the previous API connections.
- If you’re restoring this definition in the Azure portal, you must go to the logic app’s code view and paste your definition there.

The complete JSON definition contains all the workflow’s properties, so if you directly copy and paste everything into code view, the portal shows an error because you’re copying the entire resource definition. However, in code view, you only need the workflow definition, which is the same JSON that you’d find on the Export template page.

So, you must copy the definition JSON object’s contents and the parameters object’s contents, paste them into the corresponding objects in your new logic app, and save your changes.
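To avoid hand-editing, you can pull just those two objects out of the copied resource JSON with a small helper. This is our own illustrative sketch, not an official tool; it assumes the copied JSON has the usual properties object wrapping the definition:

```python
import json

# Illustrative helper (not an official tool): given the full resource JSON
# copied from the change history, keep only the "definition" and "parameters"
# objects that the logic app's code view expects.
def extract_workflow(resource_json: str) -> str:
    resource = json.loads(resource_json)
    props = resource.get("properties", resource)
    workflow = {
        "definition": props["definition"],
        "parameters": props.get("parameters", {}),
    }
    return json.dumps(workflow, indent=2)
```

Paste the helper's output into the new logic app's code view, then save.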

In this scenario, the API connection for the Azure Resource Manager connector was lost, so we have to recreate the connection by adding a new action. If the connection ID is the same, the action should re-reference the connection.

After we save and refresh the designer, the previous operation loads successfully, and nothing is lost. Now you can delete the actions that you created to reprovision the connections, and you’re all set.

We hope that this guidance helps you mitigate such occurrences and speeds up your work.
by Contributed | Jun 3, 2024 | Technology
Introduction
In this article, we demonstrate how to leverage GPT-4o’s capabilities, using images with function calling to unlock multimodal use cases.
We will simulate a package routing service that routes packages based on the shipping label using OCR with GPT-4o.
The model will identify the appropriate function to call based on the image analysis and the predefined actions for routing to the appropriate continent.
Background
The new GPT-4o (“o” for “omni”) can reason across audio, vision, and text in real time.
- It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.
- It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API.
- GPT-4o is especially better at vision and audio understanding compared to existing models.
- GPT-4o now enables function calling.
The application
We will run a Jupyter notebook that connects to GPT-4o to sort packages based on the printed labels with the shipping address.
Here are some sample labels. We’ll use GPT-4o for OCR to read the destination country on each label, and GPT-4o function calling to route the packages.



The environment
The code can be found here – Azure OpenAI code examples
Make sure you create your Python virtual environment and fill in the environment variables as stated in the README.md file.
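For reference, the .env file might look like the following (placeholder values; the variable names match the code below):

```
GPT4o_API_KEY=<your-api-key>
GPT4o_DEPLOYMENT_ENDPOINT=https://<your-resource>.openai.azure.com/
GPT4o_DEPLOYMENT_NAME=<your-gpt-4o-deployment-name>
```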
The code
Connecting to the Azure OpenAI GPT-4o deployment.
from dotenv import load_dotenv
from IPython.display import display, HTML, Image
import os
from openai import AzureOpenAI
import json
load_dotenv()
GPT4o_API_KEY = os.getenv("GPT4o_API_KEY")
GPT4o_DEPLOYMENT_ENDPOINT = os.getenv("GPT4o_DEPLOYMENT_ENDPOINT")
GPT4o_DEPLOYMENT_NAME = os.getenv("GPT4o_DEPLOYMENT_NAME")
client = AzureOpenAI(
    azure_endpoint=GPT4o_DEPLOYMENT_ENDPOINT,
    api_key=GPT4o_API_KEY,
    api_version="2024-02-01",
)
Defining the functions to be called after GPT-4o answers.
# Defining the functions - in this case a toy example of a shipping function
def ship_to_Oceania(location):
    return f"Shipping to Oceania based on location {location}"

def ship_to_Europe(location):
    return f"Shipping to Europe based on location {location}"

def ship_to_US(location):
    return f"Shipping to Americas based on location {location}"
Next, we define the available functions to send to GPT-4o. It is very important to send the functions’ and parameters’ descriptions so that GPT-4o knows which method to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "ship_to_Oceania",
            "description": "Shipping the parcel to any country in Oceania",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The country to ship the parcel to.",
                    }
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "ship_to_Europe",
            "description": "Shipping the parcel to any country in Europe",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The country to ship the parcel to.",
                    }
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "ship_to_US",
            "description": "Shipping the parcel to any country in the Americas",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The country to ship the parcel to.",
                    }
                },
                "required": ["location"],
            },
        },
    },
]

available_functions = {
    "ship_to_Oceania": ship_to_Oceania,
    "ship_to_Europe": ship_to_Europe,
    "ship_to_US": ship_to_US,
}
The following function base64-encodes our images, which is the format accepted by GPT-4o.
# Encoding the images to send to GPT-4o
import base64

def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")
The method to call GPT-4o.
Notice below that we send the parameter “tools” with the JSON describing the functions to be called.
def call_OpenAI(messages, tools, available_functions):
    # Step 1: send the prompt and available functions to GPT
    response = client.chat.completions.create(
        model=GPT4o_DEPLOYMENT_NAME,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message

    # Step 2: check if GPT wanted to call a function
    if response_message.tool_calls:
        print("Recommended Function call:")
        print(response_message.tool_calls[0])
        print()

        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        function_name = response_message.tool_calls[0].function.name

        # verify the function exists
        if function_name not in available_functions:
            return "Function " + function_name + " does not exist"
        function_to_call = available_functions[function_name]

        # verify the function has the correct number of arguments
        function_args = json.loads(response_message.tool_calls[0].function.arguments)
        if check_args(function_to_call, function_args) is False:
            return "Invalid number of arguments for function: " + function_name

        # call the function
        function_response = function_to_call(**function_args)
        print("Output of function call:")
        print(function_response)
        print()
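Note that call_OpenAI relies on a check_args helper that isn’t defined in the article. A minimal sketch using Python’s inspect module (our own implementation, not taken from the original notebook) could look like this:

```python
import inspect

# Sketch of the check_args helper referenced above (our own implementation):
# returns True only when the supplied arguments match the function's signature.
def check_args(function, args):
    params = inspect.signature(function).parameters
    # reject any argument the function doesn't accept
    if any(name not in params for name in args):
        return False
    # reject the call if a required parameter is missing
    for name, param in params.items():
        if param.default is inspect.Parameter.empty and name not in args:
            return False
    return True
```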
Please note that WE, and not GPT-4o, call the methods in our code, based on the answer from GPT-4o.
# call the function
function_response = function_to_call(**function_args)
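To make that dispatch step concrete, here’s a toy, offline illustration (no model call involved) of how the returned function name and JSON arguments map onto a local call:

```python
import json

# Toy illustration of the dispatch step: GPT-4o only names the function and
# supplies JSON arguments; our code performs the actual call.
def dispatch(function_name, arguments_json, available_functions):
    if function_name not in available_functions:
        return f"Function {function_name} does not exist"
    function_to_call = available_functions[function_name]
    function_args = json.loads(arguments_json)
    return function_to_call(**function_args)
```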
Finally, iterate through all the images in the folder.
Notice the system prompt, where we tell GPT-4o what we need it to do: read the shipping labels and route the packages by calling functions.
# iterate through all the images in the data folder
import os

data_folder = "./data"
for image in os.listdir(data_folder):
    if image.endswith(".png"):
        IMAGE_PATH = os.path.join(data_folder, image)
        base64_image = encode_image(IMAGE_PATH)
        display(Image(IMAGE_PATH))

        messages = [
            {"role": "system", "content": "You are a customer service assistant for a delivery service, equipped to analyze images of package labels. Based on the country to ship the package to, you must always ship to the corresponding continent. You must always use tools!"},
            {"role": "user", "content": [
                {"type": "image_url", "image_url": {
                    "url": f"data:image/png;base64,{base64_image}"}
                }
            ]}
        ]

        call_OpenAI(messages, tools, available_functions)
Let’s run our notebook!!!

Running our code for the label above produces the following output:
Recommended Function call:
ChatCompletionMessageToolCall(id='call_lH2G1bh2j1IfBRzZcw84wg0x', function=Function(arguments='{"location":"United States"}', name='ship_to_US'), type='function')
Output of function call:
Shipping to Americas based on location United States
That’s all folks!
Thanks
Denise
by Contributed | Jun 2, 2024 | Technology
Spotlight on AI in your DevOps Lifecycle
Explore the transformative power of artificial intelligence in DevOps with our comprehensive series, “Spotlight on AI in Your DevOps Lifecycle.” This series delves into the integration of AI into every stage of the DevOps process, providing invaluable insights and practical guidance. Whether you’re a seasoned professional or new to the field, these episodes will equip you with the knowledge to leverage AI effectively in your development and operations lifecycle.
Speakers

Sessions: Register Now. https://aka.ms/DevOpsAISeries
DevOps in the era of Generative AI: Foundations of LLMOps
With the advent of generative AI, the development life cycle of intelligent applications has undergone a significant change. This shift from classical ML to LLM-based solutions has implications not only for how we build applications but also for how we test, evaluate, deploy, and monitor them. The introduction of LLMOps is an important development that requires understanding the foundations of this new approach to DevOps.
The session “DevOps in the era of Generative AI: Foundations of LLMOps” will explore the basics of LLMOps, providing examples of tools and practices available in the Azure ecosystem. This talk will be held on June 12th, 2024, from 4:00 PM to 5:00 PM (UTC).
Register Now. https://aka.ms/DevOpsAISeries
Continuous Integration and Continuous Delivery (CI/CD) for AI
The session “Continuous Integration and Continuous Delivery (CI/CD) for AI” will focus on MLOps for machine learning and AI projects. This talk will cover how to set up CI/CD and collaborate with others using GitHub. It will also discuss version control, automated testing, and deployment strategies.
The session will take place on June 20th, 2024, from 6:00 PM to 7:00 PM (UTC).
Register Now. https://aka.ms/DevOpsAISeries
Monitoring, Logging, and AI Model Performance
Building an AI application does not stop at deployment. The core of any AI application is the AI model that performs certain tasks and provides predictions to users. However, AI models and their responses change over time, and our applications need to adapt to these changes in a scalable and automated way.
The session “Monitoring, Logging, and AI Model Performance” will explore how to use tools to monitor the performance of AI models and adapt to changes in a scalable way. This talk will be held on June 26th, 2024, from 4:00 PM to 5:00 PM (UTC).
Register Now. https://aka.ms/DevOpsAISeries
Scaling and Maintaining Your Applications on Azure
Azure is a popular cloud platform that provides many benefits for running AI applications. This session will focus on the practical aspects of running your applications on Azure, with a special emphasis on leveraging Azure OpenAI and Python FastAPI. The talk will cover best practices for scaling your applications to meet demand and maintaining their health and performance.
The session will be held on July 3rd, 2024, from 4:00 PM to 5:00 PM (UTC).
Register Now. https://aka.ms/DevOpsAISeries
Security, Ethics, and Governance in AI
AI brings many exciting new features into the tech landscape, but it also introduces new security risks and challenges. In this session, we will learn about the best practices and tools for securing AI-enabled applications and addressing ethical and governance issues related to AI.
The session will take place on July 10th, 2024, from 4:00 PM to 5:00 PM (UTC).
Register Now. https://aka.ms/DevOpsAISeries
by Contributed | Jun 1, 2024 | Technology
Navigating the Future with Microsoft Copilot: A Guide for Technical Students
Introduction
Copilot learning hub
Copilot is an AI assistant powered by language models, which offers innovative solutions across the Microsoft Cloud. Find what you, a technical professional, need to enhance your productivity, creativity, and data accessibility, and make the most of the enterprise-grade data security and privacy features for your organization.

As a technical student, you’re always on the lookout for tools that can enhance your productivity and creativity.
Enter Microsoft Copilot, your AI-powered assistant that’s revolutionizing the way we interact with technology. In this blog post, we’ll explore how Copilot can be a game-changer for your learning and development.
Understanding Copilot
Microsoft Copilot is more than just an AI assistant; it’s a suite of solutions integrated across the Microsoft Cloud. It’s designed to boost your productivity by providing enterprise-grade data security and privacy features. Whether you’re coding, creating content, or analyzing data, Copilot is there to streamline your workflow.
Getting Started with Copilot
To get started, dive into the wealth of resources available on the official Copilot page. From curated training and documentation to informative videos and playlists, there’s a treasure trove of knowledge waiting for you.
Customizing Your Experience
One of the most exciting aspects of Copilot is its flexibility. You can expand and enrich your Copilot experience with plugins, connectors, or message extensions. Even better, you can build a custom AI copilot using Microsoft Cloud technologies to create a personalized conversational AI experience.
Empowering Your Education
Copilot isn’t just a tool; it’s a partner in your educational journey. It can assist you in implementing cloud infrastructure, solving technical business problems, and maximizing the value of data assets through visualization and reporting tools.
The Copilot Challenge
Ready to put your skills to the test? Immerse yourself in cutting-edge AI technology and earn a badge by completing one of the unique, AI-focused challenges available until June 21, 2024. These challenges offer interactive events, expert-led sessions, and training assets to help you succeed.

Conclusion
Microsoft Copilot is more than just an assistant; it’s a catalyst for innovation and productivity. As a technical student, embracing Copilot can help you stay ahead of the curve and unlock a new era of growth. So, what are you waiting for?
Let Copilot guide you through the exciting world of AI and cloud technologies. Learn how to use Microsoft Copilot | Microsoft Learn