Simplifying the cloud data migration journey for enterprises


In this guest blog post, Kajol Patel, Senior Content Marketing Specialist at Data Dynamics, discusses digital transformation strategies for enterprises and how to utilize StorageX and the Azure File Migration Program to overcome common data migration challenges.


 


Data is foundational to any digital transformation strategy, yet enterprises worldwide struggle to find reliable, cost-efficient solutions to manage, govern, and extract valuable insights from it. According to a recent report published by Statista, the total volume of enterprise data increased from 1 petabyte (PB) to 2.02 PB between 2020 and 2022, an average annual growth rate of 42.2 percent. The report also highlights that a majority of that data is stored in internal datacenters, where storage and processing are costly and energy-intensive for enterprises.
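
As a quick sanity check on that figure (a back-of-the-envelope calculation, not taken from the report itself):

    # Implied compound annual growth rate (CAGR) for 1 PB -> 2.02 PB over 2 years
    start_pb, end_pb, years = 1.0, 2.02, 2
    cagr = (end_pb / start_pb) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # ~42.1%, in line with the reported 42.2 percent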


 


Additionally, the cost of software to collect, analyze, and manage terabytes and petabytes of data residing in multiple storage centers adds to the expenditure. Breaking down silos to extract real-time insights often ends up costing the enterprise an exorbitant amount of IT resources and revenue.


 


As unstructured data sprawl continues to grow, enterprises are turning to the cloud and embracing data as a strategic and valuable asset. By extracting useful insights from data, businesses can accelerate their digital journey, making data-driven decisions in real time to meet peak demand, grow revenue, and minimize storage cost. Cloud providers such as Microsoft give clients access to subscription-based remote computing services, enabling them to adjust cloud consumption to meet changing needs. As a possible recession looms, organizations that rely on the cloud are more likely to reduce costs while effectively managing risk and compliance.


 


However, most enterprises face numerous challenges when migrating to the cloud: proprietary vendor lock-in, a lack of migration skills, a labor-intensive process, and inadequate knowledge of their data estate.


 


Top 3 data migration challenges for enterprises:



  • Lift-and-shift blind spots: Limited knowledge of the enterprise’s unstructured data estate can result in post-migration complexities such as security failures and noncompliance.

  • Lack of visibility: Without clarity about what data exists, where it resides, and when it is used, storage optimization suffers and migration timelines slip.

  • Complexity of scope and scale: A lack of an integrated approach, governance, and skills, combined with inefficiency, poor time-to-effort ratios, and other redundancies, can cause chaos.


 


In a webinar hosted by Data Dynamics, Karl Rautenstrauch, Principal Program Manager, Storage Partners at Microsoft, spoke about the top challenges faced by enterprise customers while migrating to the cloud: “Over nine years of working closely with partners and customers in the field of migrating datasets and applications to Azure, we see a consistent theme of every enterprise in every industry being a little overburdened today – too much to do, too little time, and too few people, hence most of these enterprises are seeking automation. They want to ensure that they can engage in complex activities like moving an application comprised of virtual machines, databases, and file repositories in the simplest way possible with the least risk possible.”


 


He further emphasized that the most consistent requirement for all the customers he has worked with, regardless of size, is to migrate large data sets securely, quickly, and with minimal risk and disruption to user productivity.


 


Migrating file data between disparate storage platforms is always a daunting process. Microsoft recently announced the Azure File Migration Program to make customer data migration easier and more secure. It addresses the customer’s need to reduce the time, effort, and risk involved in complex file data migration.


 


[Screenshot: Data Dynamics central console]


 


Speaking at the webinar, Rautenstrauch emphasized the value of on-demand compute and modern cloud services: “We have built a platform of services called Azure Migrate, which is freely available, and it has cloud-driven capabilities. These services help customers move virtual machines easily, databases, and now even containerized applications in an automated, risk-free fashion. One area that is neglected is unstructured data, so what we are going to do is address it in the Azure File Migration Program.”


 


The Azure Migrate hub offers many effective tools and services to simplify database and server migration, but it doesn’t address the need for unstructured data migration. Hence, the Azure File Migration Program is becoming a new favorite among enterprises grappling with unstructured data sprawl.


Jurgen Willis, VP of Azure Optimized Workloads and Storage, states in his blog, “Azure Migrate offers a very powerful set of no-cost (or low-cost) tools to help you migrate virtual machines, websites, databases, and virtual desktops for critical applications. You can modernize legacy applications by migrating them from servers to containers and build a cloud native environment.”


 


Data Dynamics transforms data assets into competitive advantage with Azure File Migration


With over a decade of domain experience and a robust clientele of 300+ organizations, including 28 of the Fortune 100, Data Dynamics is a partner of choice for unstructured file data migrations. StorageX is Data Dynamics’ award-winning solution for unstructured data management. The mobility feature of StorageX provides intelligence-driven, automated data migrations to meet the needs and scale of global enterprises. 


 


Having migrated over 400 PB of data encompassing hundreds of trillions of files, this feature is trusted, proven, and delivers without losing a single byte of data. It provides policy-based, automated data migration with reduced human intervention and no vendor lock-in. StorageX can multi-thread migrations, moving millions or even billions of files in hours, making it one of the most scalable and lowest-risk data migration solutions available.


 


It can easily identify workloads and migrate data based on characteristics such as least-touched files, files owned by specific users or groups, or hundreds of other actionable insights. StorageX Migration is a powerful migration engine that moves large volumes of data across shares and exports with speed and accuracy.
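
StorageX’s internals aren’t public, but the general idea of selecting least-touched files and moving them with many parallel workers can be illustrated with a generic sketch (hypothetical paths and thresholds; this is not StorageX code):

    import shutil
    import time
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def least_touched(root: str, days: int = 365):
        """Yield files not accessed in `days` days -- candidates for migration."""
        cutoff = time.time() - days * 86400
        for path in Path(root).rglob("*"):
            if path.is_file() and path.stat().st_atime < cutoff:
                yield path

    def migrate(files, src_root: str, dest_root: str, workers: int = 16):
        """Copy candidate files in parallel, preserving the directory layout."""
        def copy_one(src: Path) -> Path:
            dest = Path(dest_root) / src.relative_to(src_root)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy contents and metadata
            return src

        with ThreadPoolExecutor(max_workers=workers) as pool:
            for done in pool.map(copy_one, list(files)):
                print(f"migrated {done}")

    # Hypothetical example: move cold files from a NAS share to a cloud mount.
    migrate(least_touched("/mnt/nas/share"), "/mnt/nas/share", "/mnt/azure/share")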


 


Here’s a detailed comparative study of StorageX versus traditional migration tools.


 


Microsoft is sponsoring the use of Data Dynamics’ StorageX as part of the Azure File Migration Program. Enterprises can leverage this product to migrate their unstructured file, Hadoop, and object storage data into Azure at no additional cost and with no separate migration licensing.


 


Learn more about the Azure File Migration Program or reach us at solutions@datdyn.com | (713)-491-4298 | +44-(20)-45520800

Discover how Microsoft 365 helps organizations do more with less


Now more than ever, IT leaders need to reduce costs while securing and empowering their workforce. Microsoft 365 combines the capabilities organizations need in one secure, integrated experience—powered by data and AI—to help people work better and smarter.


Part 2 – Observability for your azd-compatible app


In Part 1, I walked you through how to azdev-ify a simple Python app. In this post, we will:



  • add the Azure resources to enable the observability features in azd

  • add manual instrumentation code in the app 

  • create a launch.json file to run the app locally and make sure we can send data to Application Insights

  • deploy the app to Azure


 


Previously…


We azdev-ified a simple Python app, TheCatSaidNo, and deployed it to Azure. Don’t worry if you have already deleted everything; I have updated the code for part 1 because of the Bicep module improvements we shipped in the azure-dev-cli_0.4.0-beta.1 release. You don’t need to update your code. Just start from my GitHub repository (branch: part1):



  1. Make sure you have the prerequisites installed:


  2. In a new empty directory, run 

    azd up -t https://github.com/puicchan/theCatSaidNo -b part1

    If you run `azd monitor --overview` at this point, you will get an error – “Error: application does not contain an Application Insights dashboard.” That’s because we didn’t create any Azure Monitor resources in part 1.




 


Step 1 – add Application Insights


The Azure Developer CLI (azd) provides a monitor command to help you get insight into how your applications are performing so that you can proactively identify issues. We need to first add the Azure resources to the resource group created in part 1.



  1. Refer to a sample, e.g., ToDo Python Mongo. Copy the directory /infra/core/monitor to your /infra folder.

  2. In main.bicep: add the following parameters. If you want to override the default azd naming convention, provide your own values here. This is new since version 0.4.0-beta.1. 

    param applicationInsightsDashboardName string = ''
    param applicationInsightsName string = ''
    param logAnalyticsName string = ''


  3. Add the call to monitoring.bicep in /core/monitor

    // Monitor application with Azure Monitor
    module monitoring './core/monitor/monitoring.bicep' = {
      name: 'monitoring'
      scope: rg
      params: {
        location: location
        tags: tags
        logAnalyticsName: !empty(logAnalyticsName) ? logAnalyticsName : '${abbrs.operationalInsightsWorkspaces}${resourceToken}'
        applicationInsightsName: !empty(applicationInsightsName) ? applicationInsightsName : '${abbrs.insightsComponents}${resourceToken}'
        applicationInsightsDashboardName: !empty(applicationInsightsDashboardName) ? applicationInsightsDashboardName : '${abbrs.portalDashboards}${resourceToken}'
      }
    }


  4. Pass the Application Insights name as a param to appservice.bicep in the web module:

    applicationInsightsName: monitoring.outputs.applicationInsightsName


  5. Add an output for the Application Insights connection string to make sure it’s stored in the .env file:

    output APPLICATIONINSIGHTS_CONNECTION_STRING string = monitoring.outputs.applicationInsightsConnectionString


  6. Here’s the complete main.bicep

    targetScope = 'subscription'
    
    @minLength(1)
    @maxLength(64)
    @description('Name of the environment which is used to generate a short unique hash used in all resources.')
    param environmentName string
    
    @minLength(1)
    @description('Primary location for all resources')
    param location string
    
    // Optional parameters to override the default azd resource naming conventions. Update the main.parameters.json file to provide values. e.g.,:
    // "resourceGroupName": {
    //      "value": "myGroupName"
    // }
    param appServicePlanName string = ''
    param resourceGroupName string = ''
    param webServiceName string = ''
    param applicationInsightsDashboardName string = ''
    param applicationInsightsName string = ''
    param logAnalyticsName string = ''
    // serviceName is used as value for the tag (azd-service-name) azd uses to identify
    param serviceName string = 'web'
    
    @description('Id of the user or app to assign application roles')
    param principalId string = ''
    
    var abbrs = loadJsonContent('./abbreviations.json')
    var resourceToken = toLower(uniqueString(subscription().id, environmentName, location))
    var tags = { 'azd-env-name': environmentName }
    
    // Organize resources in a resource group
    resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
      name: !empty(resourceGroupName) ? resourceGroupName : '${abbrs.resourcesResourceGroups}${environmentName}'
      location: location
      tags: tags
    }
    
    // The application frontend
    module web './core/host/appservice.bicep' = {
      name: serviceName
      scope: rg
      params: {
        name: !empty(webServiceName) ? webServiceName : '${abbrs.webSitesAppService}web-${resourceToken}'
        location: location
        tags: union(tags, { 'azd-service-name': serviceName })
        applicationInsightsName: monitoring.outputs.applicationInsightsName
        appServicePlanId: appServicePlan.outputs.id
        runtimeName: 'python'
        runtimeVersion: '3.8'
        scmDoBuildDuringDeployment: true
      }
    }
    
    // Create an App Service Plan to group applications under the same payment plan and SKU
    module appServicePlan './core/host/appserviceplan.bicep' = {
      name: 'appserviceplan'
      scope: rg
      params: {
        name: !empty(appServicePlanName) ? appServicePlanName : '${abbrs.webServerFarms}${resourceToken}'
        location: location
        tags: tags
        sku: {
          name: 'B1'
        }
      }
    }
    
    // Monitor application with Azure Monitor
    module monitoring './core/monitor/monitoring.bicep' = {
      name: 'monitoring'
      scope: rg
      params: {
        location: location
        tags: tags
        logAnalyticsName: !empty(logAnalyticsName) ? logAnalyticsName : '${abbrs.operationalInsightsWorkspaces}${resourceToken}'
        applicationInsightsName: !empty(applicationInsightsName) ? applicationInsightsName : '${abbrs.insightsComponents}${resourceToken}'
        applicationInsightsDashboardName: !empty(applicationInsightsDashboardName) ? applicationInsightsDashboardName : '${abbrs.portalDashboards}${resourceToken}'
      }
    }
    
    // App outputs
    output AZURE_LOCATION string = location
    output AZURE_TENANT_ID string = tenant().tenantId
    output REACT_APP_WEB_BASE_URL string = web.outputs.uri
    output APPLICATIONINSIGHTS_CONNECTION_STRING string = monitoring.outputs.applicationInsightsConnectionString


  7. Run `azd provision` to provision the additional Azure resources

  8. Once provisioning is complete, run `azd monitor --overview` to open the Application Insights dashboard in the browser.

    The dashboard is not that exciting yet; auto-instrumentation application monitoring is not yet available for Python apps. However, if you examine your code, you will see that:



    • APPLICATIONINSIGHTS_CONNECTION_STRING is added to the .env file for your current azd environment.

    • The same connection string is added to the application settings in the configuration of your web app in the Azure portal.
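
    To confirm the value landed in your azd environment, you can read it back with a small script (a sketch assuming the default azd layout of .azure/<environment-name>/.env and the python-dotenv package; “myenv” is a placeholder):

      # Verify APPLICATIONINSIGHTS_CONNECTION_STRING is in the azd .env file.
      # Assumes the default azd layout: .azure/<environment-name>/.env
      from pathlib import Path

      from dotenv import dotenv_values

      env_name = "myenv"  # replace with your azd environment name
      values = dotenv_values(Path(".azure") / env_name / ".env")
      conn = values.get("APPLICATIONINSIGHTS_CONNECTION_STRING")
      print("connection string found" if conn else "connection string missing")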




 


Step 2 – manually instrumenting your app


Let’s track incoming requests with OpenCensus for Python by instrumenting the application with the Flask middleware, so that incoming requests sent to your app are tracked. (To learn more about what Azure Monitor supports, refer to setting up Azure Monitor for your Python application.)


 


For this step, I recommend using Visual Studio Code and the following extensions:



Get Started Tutorial for Python in Visual Studio Code is a good reference if you are not familiar with Visual Studio Code.


 



  1. Add the following to requirements.txt:

    python-dotenv
    opencensus-ext-azure >= 1.0.2
    opencensus-ext-flask >= 0.7.3
    opencensus-ext-requests >= 0.7.3


  2. Modify app.py to: 

    import os

    from dotenv import load_dotenv
    from flask import Flask, render_template, send_from_directory
    from opencensus.ext.azure.trace_exporter import AzureExporter
    from opencensus.ext.flask.flask_middleware import FlaskMiddleware
    from opencensus.trace.samplers import ProbabilitySampler

    # Load variables from a local .env file if one is present (when launching
    # from VS Code, the envFile setting in launch.json supplies them instead).
    load_dotenv()

    # Despite the exporter's parameter name, this is the full Application
    # Insights connection string, not just the instrumentation key.
    CONNECTION_STRING = os.environ.get("APPLICATIONINSIGHTS_CONNECTION_STRING")

    app = Flask(__name__)
    # Track every incoming request and export traces to Application Insights.
    middleware = FlaskMiddleware(
        app,
        exporter=AzureExporter(connection_string=CONNECTION_STRING),
        sampler=ProbabilitySampler(rate=1.0),  # sample 100% of requests
    )
    
    
    @app.route("/favicon.ico")
    def favicon():
        return send_from_directory(
            os.path.join(app.root_path, "static"),
            "favicon.ico",
            mimetype="image/vnd.microsoft.icon",
        )
    
    
    @app.route("/")
    def home():
        return render_template("home.html")
    
    
    if __name__ == "__main__":
        app.run(debug=True)


  3. To run locally, we need to read from the .env file to get the current azd environment context. The easiest way is to customize Run and Debug in Visual Studio Code by creating a launch.json file:

    • Press Ctrl+Shift+D or click “Run and Debug” in the sidebar

    • Click “create a launch.json file” to customize a launch.json file

    • Select “Flask: Launch and debug a Flask web application”

    • Modify the generated file to: 

      {
          // Use IntelliSense to learn about possible attributes.
          // Hover to view descriptions of existing attributes.
          // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
          "version": "0.2.0",
          "configurations": [
              {
                  "name": "Python: Flask",
                  "type": "python",
                  "request": "launch",
                  "module": "flask",
                  "env": {
                      "FLASK_APP": "app.py",
                      "FLASK_DEBUG": "1"
                  },
                  "args": [
                      "run",
                      "--no-debugger",
                      "--no-reload"
                  ],
                  "jinja": true,
                  "justMyCode": true,
                  "envFile": "${input:dotEnvFilePath}"
              }
          ],
          "inputs": [
              {
                  "id": "dotEnvFilePath",
                  "type": "command",
                  "command": "azure-dev.commands.getDotEnvFilePath"
              }
          ]
      }




  4. Create and activate a new virtual environment. I am using Windows, so:

    py -m venv .venv
    .venv\Scripts\activate
    pip3 install -r ./requirements.txt


  5. Click the Run view in the sidebar and hit the play button for Python: Flask

    • Browse to http://localhost:5000 to launch the app.

    • Click the button a few times and/or reload the page to generate some traffic (or use the optional script after this step).


    Take a break; perhaps play with your cat or dog for real. The data will take a short while to show up in Application Insights.
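
    If you prefer to script the traffic instead of clicking, a tiny loop works too (assumes the requests package is installed and the app is listening on the default port 5000):

      # Generate some test traffic against the locally running app.
      import requests

      for i in range(20):
          r = requests.get("http://localhost:5000/")
          print(i, r.status_code)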



  6. Run `azd monitor --overview` to open the dashboard and notice the change.

  7. Run `azd deploy` to deploy your app to Azure and start monitoring your app!


 


Get the code for this blog post here. Next, we will explore how you can use `azd pipeline config` to set up a GitHub Action that deploys updates on every code check-in.


 


Feel free to run `azd down` to clean up all the Azure resources. As you saw, it’s easy to get things up and running again. Just `azd up`!


 


We love your feedback! If you have any comments or ideas, feel free to add a comment or submit an issue to the Azure Developer CLI Repo.

!! Announcement !! Public Preview of SWIFT message processing using Azure Logic Apps


SWIFT message processing using Azure Logic Apps


 


We are very excited to announce the public preview of the SWIFT MT encoder and decoder for Azure Logic Apps. This release enables customers to process SWIFT-based payment transactions with Logic Apps Standard and to build cloud-native applications with full security, isolation, and VNET integration.


 


What is SWIFT


SWIFT, the Society for Worldwide Interbank Financial Telecommunication, is a global member-owned cooperative that provides a secure network enabling financial institutions worldwide to send and receive financial transactions in a safe, standardized, and reliable environment. The SWIFT group develops several message standards to support business transactions in the financial market. One of the longest-established and most widely used formats supported by the financial community is SWIFT MT, which is used by SWIFT’s proprietary FIN messaging service.


 


The SWIFT network is used globally by more than 11,000 financial institutions across 200 countries and regions. These institutions pay SWIFT annual fees as well as fees based on the volume of financial transactions processed. Failures in processing on the SWIFT network create delays and result in penalties. This is where Logic Apps enables customers to send and receive these transactions per the standard and to proactively address such issues.


 


Azure Logic Apps enables you to easily create SWIFT workloads and automate their processing, thereby reducing errors and costs. With Logic Apps Standard, these workloads can run in the cloud or in isolated environments within a VNET. With built-in and Azure connectors, we offer 600+ connectors to a variety of applications, whether on-premises or in the cloud. Logic Apps is also a gateway to Azure: with its rich AI and ML capabilities, customers can further create business insights to help their business.


 


SWIFT capabilities in Azure Logic Apps


The SWIFT connector has two actions for MT messages: Encode and Decode. The connector offers two key capabilities: first, it transforms messages from flat file to XML and vice versa; second, it validates messages against the SWIFT guidelines described in the SRG (SWIFT Release Guide). The SWIFT MT actions support the processing of all categories of MT messages.
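
The connector’s implementation isn’t public, but to make “flat file to XML” concrete, here is a minimal, hypothetical sketch that splits the standard MT blocks ({1:} through {5:}) of a tiny MT 103 fragment and turns the :tag: fields of the text block into XML elements (illustration only; this is neither the Logic Apps connector nor SRG-grade validation):

    import re
    from xml.etree.ElementTree import Element, SubElement, tostring

    # A tiny, hypothetical MT 103 fragment for illustration only.
    mt = ("{1:F01BANKBEBBAXXX0000000000}"
          "{2:I103BANKDEFFXXXXN}"
          "{4:\n:20:CUSTREF-001\n:32A:221104EUR1234,56\n:50K:ORDERING CUSTOMER\n-}")

    root = Element("MTMessage")
    for block_id, body in re.findall(r"\{(\d):(.*?)\}", mt, re.S):
        block = SubElement(root, f"Block{block_id}")
        if block_id == "4":  # text block: one :tag:value field per line
            for tag, value in re.findall(r"^:(\w+):(.*)$", body, re.M):
                SubElement(block, f"Field{tag}").text = value
        else:
            block.text = body

    print(tostring(root, encoding="unicode"))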


 


How to use SWIFT in Logic Apps


In this example, we list the steps to process an MT message: encoding an MT XML message to the flat file format and, conversely, decoding a flat file message received from the SWIFT network into MT XML before sending it to a downstream application.


 



  1. SWIFT support is only available in the ‘Standard’ SKU of Azure Logic Apps. Create a Standard logic app.

  2. Add a new workflow. You can choose a stateful or stateless workflow.

  3. Create the first step of your workflow, which is also the trigger, depending on the source of your MT message. We are using a Request-based trigger.

  4. Choose the SWIFT connector under the Built-in tab. Add the action ‘SWIFT Encode’ as the next step. This step will transform an MT XML message (sample is attached) to the MT flat file format.




 


By default, the action validates messages against the SWIFT Release Guide specification. Validation can be disabled via the Message validation drop-down.



  5. For scenarios where you receive a SWIFT MT message as a flat file (sample is attached) from the SWIFT network, you can use the SWIFT Decode action to validate and transform the message to MT XML format.


 




 


Advanced Scenarios


For now, you will need to contact us if you have any of the scenarios described below. We plan to document them soon, so this is short-term friction.



  • SWIFT processing within VNET

    • To perform message validation, the Logic Apps runtime leverages artifacts hosted on a public endpoint. If you want to limit calls to the internet and do all of the processing within your VNET, you need to override the location of those artifacts with an endpoint inside your VNET. Please reach out to us and we can share instructions.




 



  • BIC (Bank Identifier Code) validation

    • By default, BIC validation is disabled. If you would like to enable BIC validation, please reach out to us and we can share instructions.



Azure Sphere – Image signing certificate update coming soon


Summary


Azure Sphere is updating the keys used in image signing, following best practices for security. The only impact on production devices is that they will experience two reboots instead of one during the 22.11 release cycle (or when they next connect to the Internet if they are offline). For certain manufacturing, development, or field-servicing scenarios where the Azure Sphere OS is not up to date, you may need to take extra steps to ensure that newly signed images are trusted by the device; read on to learn more.


 


What is an image signing key used for, and why update it?


Azure Sphere devices only trust signed images, and the signature is verified every time software is loaded. Every production software image on the device – including the bootloader, the Linux kernel, the OS, and customer applications, as well as any capability file used to unlock development on or field servicing of devices – is signed by the Azure Sphere Security Service (AS3), based on image signing keys held by Microsoft.


 


As with any modern public/private key system, the keys are rotated periodically; the image signing keys have a two-year validity. Note that once an image is signed, it generally remains trusted by the device. A separate mechanism, based on one-time programmable fuses, revokes older OS software with known vulnerabilities such as DirtyPipe and prevents rollback attacks – we used this most recently in the 22.09 OS release.
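
Purely as a conceptual sketch of why a trusted key store can hold both old and new keys during a rotation (Azure Sphere’s actual formats and algorithms are not shown here; this uses the generic cryptography package):

    # Conceptual sketch: a trust store holding several public keys, any of which
    # may validate an image signature. Not Azure Sphere's real format/algorithm.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def is_trusted(image: bytes, signature: bytes, trust_store) -> bool:
        """Accept the image if any key in the store verifies the signature."""
        for key in trust_store:
            try:
                key.verify(signature, image)
                return True
            except InvalidSignature:
                continue
        return False

    old_key, new_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    trust_store = [old_key.public_key(), new_key.public_key()]  # both trusted during rotation

    image = b"application image bytes"
    assert is_trusted(image, new_key.sign(image), trust_store)  # new-key-signed: trusted
    assert is_trusted(image, old_key.sign(image), trust_store)  # old-key-signed: still trusted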


 


When is this happening?


The next update to the image signing certificate will occur at the same time as the 22.11 OS is broadly released in early December. When that happens, all uses of AS3 to generate new production-signed application images or capabilities will produce images signed with the new key.


 


Ahead of that, we will update the trusted key store (TKS) of Azure Sphere devices so that it incorporates all existing keys as well as the new keys. This update will be applied automatically to every connected device over the air. Note that device TKS updates happen ahead of any pending updates to OS or application images; in other words, if a device comes online that is due to receive a new-key-signed application or OS, it will first update the TKS so that it trusts that application or OS.


 


We will update the TKS at the same time as our 22.11 retail-evaluation release, which is targeted for 10 November. The next time each Azure Sphere device checks for updates (or up to 24 hours later if using the update deferral feature), it will apply the TKS update and reboot. The TKS update is independent of an OS update, and it applies to devices using both the retail and retail-eval feeds.


 


Do I need to take any action?


No action is required for production-deployed devices. There are three non-production scenarios where you may need to take extra steps to ensure that newly signed images are trusted by the device.


 


The first is for manufacturing. If you update and re-sign the application image you use in manufacturing, but you are using an old OS image with an old TKS, then that OS will not trust the application. Follow these instructions to sideload the new TKS as part of manufacturing.


 


The second is during development. If you have a dev board to which you are sideloading either a production-signed image or a capability, and it has an old TKS, then it will not trust that capability or image. This may make the “enable-development” command fail with an error such as “The device did not accept the device capability configuration.” This can be remedied by connecting the device to a network and checking that the device is up to date. Another method is to recover the device – the recovery images always include the latest TKS.


 


The third is for field servicing. During field servicing you need to apply a capability to the device as it has been locked down after manufacturing using the DeviceComplete state. However, if that capability is signed using the new image signing key and the device has been offline – so it has not updated its TKS – then the OS will not trust the capability. Follow these instructions to sideload the new TKS before applying the field servicing capability.


 


Thanks for reading this blog post. I hope it has been informative about how Azure Sphere uses signed images and best practices such as key rotation to keep devices secure throughout their lifetime.