Skytap on Azure Simplifies Migration for Apps Running IBM Power

This article is contributed. See the original author and article here.

 


Cloud migration remains a crucial component for any organization in the transformation of their business, and Microsoft continues to focus on how best to support customers wherever they are in that journey. Microsoft works with partners like Skytap to unlock the power of Azure for customers relying on traditional on-premises application platforms.


 


Skytap on Azure is a cloud service purpose-built to natively run traditional IBM Power workloads in Azure. And we are excited to share that Skytap on Azure is available for purchase and provisioning directly through Azure Marketplace, further streamlining the experience for customers.


 


Migrating applications running on IBM Power to the cloud is often seen as a difficult and challenging move involving re-platforming. With Skytap on Azure, Microsoft brings the unique capabilities of IBM Power9 servers to Azure. By integrating directly with Azure networking, Skytap can deliver its platform with minimal connectivity latency to Azure native services.


 


[Figure: Skytap on Azure architecture diagram]


Skytap has more than a decade of experience working with customers such as Delphix, Schneider Electric, and Okta, offering extensible application environments that are compatible with on-premises data centers. Skytap’s environments simplify migration and provide self-service access to develop, deploy, and accelerate innovation for complex applications.


 


“Until we started working with Skytap, we did not have a public cloud option for our IBM Power Systems customers that provided real value over their on-premise systems. Now, with Skytap on Azure we’re excited to offer true cloud capabilities like instant cloning, capacity on demand and pay-as-you-go options in the highly secure Azure environment,” said Daniel Magid, CEO of Eradani.


 




Skytap on Azure offers consumption-based pricing, on-demand access to compute and storage resources, and extensibility through RESTful APIs. With Skytap available on Azure Marketplace, customers can get started quickly and at low cost. Learn more about Skytap on Azure here, and take a look at the latest video from our Microsoft product team here.
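For teams that want to automate environment management through those RESTful APIs, the calls can be scripted from any language. Below is a minimal Python sketch using the requests library; the endpoint path, authentication scheme, and response field names are assumptions for illustration only, so check the Skytap API documentation for the exact contract.

```python
import requests

# Hypothetical example: list Skytap environments over the REST API.
# The base URL, path, auth scheme, and field names are illustrative assumptions --
# consult the Skytap API documentation for the real contract.
SKYTAP_API = "https://cloud.skytap.com/v2/configurations"  # assumed endpoint
USERNAME = "<skytap-user>"
API_TOKEN = "<skytap-api-token>"

response = requests.get(
    SKYTAP_API,
    auth=(USERNAME, API_TOKEN),              # assumed: basic auth with an API token
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

for environment in response.json():
    # Field names are placeholders for illustration.
    print(environment.get("id"), environment.get("name"))
```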


 


Skytap on Azure is available in the East US Azure region. Given the high level of interest we have seen already, we intend to expand availability to additional regions across Europe, the United States, and Asia Pacific. Stay tuned for more details on specific regional rollout availability.


 


Try Skytap on Azure today, available through the Azure Marketplace. Skytap on Azure is a Skytap first-party service delivered on Microsoft Azure’s global cloud infrastructure.


 


Published on behalf of the Microsoft Azure Dedicated and Skytap on Azure Team 


 

What's New: Azure Sentinel Logic Apps Connector improvements and new capabilities

This article is contributed. See the original author and article here.

The Azure Sentinel Logic Apps connector is the bridge between Sentinel and playbooks, serving as the basis of incident automation scenarios. As we prepare for new incident trigger capabilities (coming soon), we have made some improvements to bring the most up-to-date experience to playbook users.


Note: existing playbooks should not be affected. For new playbooks, we recommend using the new actions.


 


Highlights:



  • Operate on most up-to-date Incidents API

  • New dynamic fields are now available 

  • Less work to accomplish incident changes

  • Assign Owner ability in playbooks

  • Format rich comments in a playbook


 


 


What’s new?


 


Update Incident: one action for configuring all incident properties


Now it is simpler to update multiple properties at once for an incident. Identify the Incident you want to operate on and set new values for any field you want. 


 


Update Incident replaces the following actions: Change Incident Severity, Change Incident Status, Change Incident Title, Change Incident Description, and Add/Remove Labels. They will still work in existing playbooks, but they will eventually be removed from the actions gallery.


 




 


Assign Owner in playbooks


As part of the new Update Incident action, it is now possible to assign an owner to an incident in a playbook. For example, based on incident creation time and SecOps staffing information, you can assign the incident to the right shift owners:



  1. Set Assign/Unassign owner to Assign

  2. Set Owner with the Object ID or User Principal Name.


 


Post Rich Comments with HTML editor


Recently, Azure Sentinel added support for HTML and Markdown in incident comments. Now, we have added an HTML editor to the Add comment to incident action so you can format your comments.


 




 


One identifier for working on Azure Sentinel Incidents


Previously, you had to supply 4 fields in order to identify the incident to update. The new Update Incident and Add Comment to Incident actions require only one field (instead of 4) to identify the incident that is meant to be changed: the incident ARM ID.


 


If your playbooks start with Azure Sentinel Alert trigger (When an Azure Sentinel Alert is Triggered), use Alert – Get Incident to retrieve this value.
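For orientation, the incident ARM ID is simply the full Azure Resource Manager path of the incident resource. The Python sketch below shows the general shape of that identifier; the provider path is an assumption based on the Microsoft.SecurityInsights resource type, and in practice you should use the value returned by Alert – Get Incident rather than building it by hand.

```python
# Illustrative only: the general shape of an Azure Sentinel incident ARM ID.
# Prefer the value returned by the "Alert - Get Incident" action; the provider
# path below is an assumption based on the Microsoft.SecurityInsights resource type.
def build_incident_arm_id(subscription_id: str, resource_group: str,
                          workspace: str, incident_id: str) -> str:
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
        f"/providers/Microsoft.SecurityInsights/Incidents/{incident_id}"
    )

print(build_incident_arm_id("<subscription-id>", "<resource-group>",
                            "<workspace-name>", "<incident-guid>"))
```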


 


Get Incident: now aligned with the latest Sentinel API


Alert – Get Incident allows playbooks that start with Azure Sentinel Alert Trigger (When an alert is triggered) to reach the incident that holds the alert.


 


Now, new fields are available and are aligned with the Incidents API.


For example, the Incident URL can be included in an email to the SOC shared mailbox or as part of a new ticket in ServiceNow, for easy access to the incident in the Azure Sentinel portal.


 




 


This action’s inputs have not changed, but the response has become richer.


 




 


 


In addition, we supplied another action called Get Incident which allows you to identify incidents using the Incident ARM ID, so you can use any other Logic Apps trigger and still get Incident details. It returns the same Incident Schema. For example, if you work with another ticketing system which supports triggering Logic Apps, you can send the ARM ID as part of the request.
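As a rough illustration of that last scenario, an external system could call a playbook that starts with a generic HTTP request trigger and pass the incident ARM ID in the request body. The Python sketch below is illustrative only; the callback URL and payload shape are placeholders, not part of the connector itself.

```python
import requests

# Hypothetical example: an external ticketing system invoking a playbook that
# starts with a Logic Apps HTTP request trigger. The URL and payload shape are
# placeholders; use your own playbook's callback URL and request schema.
LOGIC_APP_TRIGGER_URL = "<your-playbook-HTTP-trigger-callback-URL>"

payload = {
    "incidentArmId": (
        "/subscriptions/<sub>/resourceGroups/<rg>"
        "/providers/Microsoft.OperationalInsights/workspaces/<ws>"
        "/providers/Microsoft.SecurityInsights/Incidents/<incident-guid>"
    )
}

response = requests.post(LOGIC_APP_TRIGGER_URL, json=payload, timeout=30)
response.raise_for_status()
print("Playbook accepted the request:", response.status_code)
```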


 




 


Get Entities: name changes


As we prepare for our new incident trigger experience, in which entities will be received from both incidents and alerts, we have changed the name of the actions Alert – Get IPs/Accounts/URLs/Hosts/Filehashes to Entities – Get IPs/Accounts/URLs/Hosts/Filehashes.


 




 


Learn more


Azure Sentinel Connector documentation


 

Ingestion, ETL, and Stream Processing with Azure Databricks

This article is contributed. See the original author and article here.

This post is part of a multi-part series titled “Patterns with Azure Databricks”.  Each highlighted pattern holds true to 3 principles for modern data analytics:


 




 



  1. A Data Lake to store all data, with a curated layer in an open-source format.  The format should support ACID transactions for reliability and should also be optimized for efficient queries.

  2. A foundational compute layer built on open standards.  The foundational compute layer should support most core use cases for the data lake, including ETL, stream processing, data science and ML, and SQL analytics on the data lake.  Standardizing on a foundational compute service provides consistency across the majority of use cases.  Being built on open standards ensures rapid innovation and a future-proof architecture that avoids lock-in.

  3. Easy integration for additional and/or new use cases.  No single service can do everything.  There are always going to be new or additional use cases that aren’t best handled by the foundational compute layer.  Both the open, curated data lake and the foundational compute layer should provide easy integration with other services to tackle these specialized use cases.



Pattern for Ingestion, ETL, and Stream Processing


Companies need to ingest data in any format, of any size, and at any speed into the cloud in a consistent and repeatable way. Once that data is ingested into the cloud, it needs to be moved into the open, curated data lake, where it can be processed further to be used by high value use cases such as SQL analytics, BI, reporting, and data science and machine learning.


 


[Diagram: a common pattern for ingesting and processing data of all types, sizes, and speeds into a curated data lake]


 


The diagram above demonstrates a common pattern used by many companies to ingest and process data of all types, sizes, and speed into a curated data lake.  Let’s look at the 3 major components of the pattern:


 



  1. There are several great tools in Azure for ingesting raw data from external sources into the cloud.  Azure Data Factory provides the standard for importing data on a schedule or trigger from almost any data source and landing it in its raw format into Azure Data Lake Storage/Blob Storage.  Other services such as Azure IoT Hub and Azure Event Hubs provide fully managed services for real time ingestion.  Using a mix of Azure Data Factory and Azure IoT/Event Hubs should allow a company to get data of just about any type, size, and speed into Azure. 





  2. After landing the raw data into Azure, companies typically move it into the raw, or Bronze, layer of the curated data lake.  This usually means just taking the data in its raw, source format, and converting it to the open, transactional Delta Lake format where it can be more efficiently and reliably queried and processed.  Ingesting the data into the Bronze curated layer can be done in a number of ways including: 
     

     




     


    1. Basic, open Apache Spark APIs in Azure Databricks for reading streaming events from Event/IoT Hubs and then writing those events or raw files to the Delta Lake format.

    2. The COPY INTO command to easily copy data from a source file/directory directly into Delta Lake.

    3. The Azure Databricks Auto Loader to efficiently grab files as they arrive in the data lake and write them to the Delta Lake format.

    4. The Azure Data Factory Copy Activity which supports copying data from any of its supported formats into the Delta Lake format.
       

       





  3. After the raw data has been ingested to the Bronze layer, companies perform additional ETL and stream processing tasks to filter, clean, transform, join, and aggregate the data into more curated Silver and Gold datasets. Using Azure Databricks as the foundational service for these processing tasks provides companies with a single, consistent compute engine (the Delta Engine) built on open standards, with support for programming languages they are already familiar with (SQL, Python, R, Scala).  It also provides them with repeatable DevOps processes and ephemeral compute clusters sized to their individual workloads (a minimal PySpark sketch of the Bronze-to-Silver flow follows this list). 

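To make the Bronze and Silver steps more concrete, here is a minimal PySpark sketch combining option 3 above (Auto Loader) with a simple Bronze-to-Silver transformation. It assumes a Databricks notebook, where the spark session is predefined and Delta Lake is available; the storage paths, checkpoint locations, and column names are placeholders.

```python
# Minimal sketch for a Databricks notebook: Auto Loader into a Bronze Delta
# table, then a simple Bronze -> Silver transformation.
# Paths, checkpoint/schema locations, and column names are placeholders.
from pyspark.sql import functions as F

raw_path = "abfss://raw@<storageaccount>.dfs.core.windows.net/events/"
bronze_path = "abfss://lake@<storageaccount>.dfs.core.windows.net/bronze/events/"
silver_path = "abfss://lake@<storageaccount>.dfs.core.windows.net/silver/events/"

# 1) Incrementally ingest raw JSON files with Auto Loader (cloudFiles) into Bronze.
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", bronze_path + "_schema")
    .load(raw_path)
    .withColumn("ingest_time", F.current_timestamp())
)

(bronze_stream.writeStream
    .format("delta")
    .option("checkpointLocation", bronze_path + "_checkpoint")
    .outputMode("append")
    .start(bronze_path))

# 2) Clean and filter Bronze into a curated Silver Delta table.
#    In practice this step usually runs as its own scheduled job or stream.
silver_df = (
    spark.read.format("delta").load(bronze_path)
    .dropDuplicates(["event_id"])             # placeholder key column
    .filter(F.col("event_type").isNotNull())  # placeholder data-quality rule
)

silver_df.write.format("delta").mode("overwrite").save(silver_path)
```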

     




The ingestion, ETL, and stream processing pattern discussed above has been used successfully with many different companies across many different industries and verticals.  It also holds true to the 3 principles discussed for modern data analytics: 1) using an open, curated data lake for all data (Delta Lake), 2) using a foundational compute layer built on open standards for the core ETL and stream processing (Azure Databricks), and 3) using easy integrations with other services like Azure Data Factory and IoT/Event Hubs which specialize in ingesting data into the cloud.


 


If you are interested in learning more about Azure Databricks, attend an event and check back soon for additional blogs in the “Patterns with Azure Databricks” series.

Securing a Windows Server VM in Azure

This article is contributed. See the original author and article here.

If you’ve built and managed Windows Servers in an on-premises environment, you may have a set of configuration steps as well as regular process and monitoring alerts, to ensure that server is as secure as possible. But if you run a Windows Server VM in Azure, apart from not having to manage the physical security of the underlying compute hardware, what on-premises concepts still apply, what may you need to alter and what capabilities of Azure should you include?


 


Windows Security Baselines – Most server administrators would start by configuring the default Group Policy settings to meet their organization’s security requirements, and would search for guidance on other settings that could be tweaked to make the environment more restrictive. Traditional Windows Server hardening guidance can now get out of date easily, as we ship more frequent updates and changes to the operating system, though some practices are universally good to apply. In addition, security guidance can change, especially as we learn from the latest threats.


 


To keep up with the current advice, relevant to your server’s current patch levels, we recommend the use of the Windows Security Baselines. Provided inside the Security Compliance Toolkit, the baselines bring together feedback from Microsoft security engineering teams, product groups, partners and customers into a set of Microsoft-recommended configuration settings and their security impact. On the Microsoft Security Baselines blog, you can keep track of changes to the baselines through the Draft and Final stages, for example as they relate to the Windows Server version 20H2 release.
This guidance applies to Windows Server whether it’s on-premises or in the cloud.


 


Hardening your Windows Server – In addition, my colleague Orin Thomas does a great presentation on Hardening your Windows Server environment. It includes things like Credential Guard, Privileged Administration Workstations, Shielded VMs and more. Download the presentation deck and the demo videos here: Orin-Thomas/HardenWinSvr: Hardening Windows Server presentation (github.com)


 


Server Roles and applications
You also need to pay attention to the role that your server is performing, which will install additional files and settings to the base operating system, for example if it’s running IIS or SQL Server. These components come with their own security guidance, and Orin has written up advice on hardening IIS here: Windows Server 101: Hardening IIS via Security Control Configuration


 


And then there’s the configuration of any applications you are hosting on the server. Have your custom applications been developed to protect against attacks or exploits? Are any third-party applications secure, or do they require you to “relax” your security configurations for them to function properly (for example, turning off UAC)? Do you restrict who can install applications onto your server and which applications can be installed or run?


 


Microsoft Azure considerations
With some of the Windows Server considerations covered, let’s explore the Azure considerations and capabilities.


 


Networking
One of the biggest differences to running an on-premises server is how you manage the network configuration. IaaS VMs should always be managed through Azure, not via their network settings inside the operating system.


 


RDP – It’s still not a good idea to leave the default RDP port open, due to the high number of malicious attempts at taking servers down by flooding this port with invalid authentication attempts. Instead, for a secure connection to a remote server session for administration, check out Azure Bastion, which is initiated through the Azure portal. 


 


Network security groups – Network security groups allow granular control of traffic to and from Azure resources, including traffic between different resources in Azure. Plan your routing requirements and configure these virtual firewalls to only allow necessary traffic. 
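If you prefer to manage these rules as code, the Azure SDK for Python can create or update a network security group. The sketch below defines a single rule that denies inbound RDP from the internet; it assumes the azure-identity and azure-mgmt-network packages, and the resource names shown are placeholders.

```python
# Sketch: define a network security group that denies inbound RDP from the internet.
# Assumes the azure-identity and azure-mgmt-network packages; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

nsg_definition = {
    "location": "eastus",
    "security_rules": [
        {
            "name": "deny-rdp-from-internet",
            "protocol": "Tcp",
            "direction": "Inbound",
            "access": "Deny",
            "priority": 100,
            "source_address_prefix": "Internet",
            "source_port_range": "*",
            "destination_address_prefix": "*",
            "destination_port_range": "3389",
        }
    ],
}

# Long-running operation: create or update the NSG and wait for the result.
poller = network_client.network_security_groups.begin_create_or_update(
    "<resource-group>", "<vm-name>-nsg", nsg_definition
)
print("NSG provisioned:", poller.result().name)
```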


 


Just-in-time VM access – If you do have a requirement to open ports sometimes, consider implementing just-in-time (JIT) VM access. This allows Azure Security Center to change networking settings for a specified period of time only, for approved user requests.


 


VPN Gateways – Implement a virtual network gateway for encrypted traffic between your on-premises location and your Azure resources. This can be from physical sites (such as branch offices), individual devices (via Point to Site gateways) or through private Express Route connections which don’t traverse the public internet. Learn more at What is a VPN Gateway? 


 


Identity
Role Based Access Control – Specific to Azure, Role Based Access Control (RBAC) lets you control who has access to the properties and configuration settings of your Azure resources via the Azure Resource Manager (including the Azure Portal, PowerShell, the Azure CLI and Cloud Shell). These permissions are packaged by common roles, so you could assign someone as a Backup Operator and they’d get the necessary rights to manage Azure Backup for the VM, for example. This identity capability helps you implement a “least privilege” model, with the right people having only the access that they need to perform their roles. 


 


Privileged Identity Management – Similar to JIT VM access, Privileged Identity Management enables an approved user to elevate to a higher level of permissions for a limited time, usually to perform administration tasks.


 


Other advanced Identity features – With the Cloud, you can take advantage of additional advanced security features for securing authentication requests, including Conditional Access and Multi-Factor Authentication. Check out Phase 1: Build a foundation of security in the Azure Active Directory feature deployment guide. 


 


Security Compliance & Monitoring
Azure Security Benchmarks – Similar to the Windows Security Baselines, the Azure Security Benchmarks help you baseline your configuration against Microsoft-recommended security practices. These cover how security recommendations map to security controls from industry sources like NIST and CIS, and include Azure configuration settings for your VM (such as privileged access, logging, and governance). 


 


Azure Defender for Servers – Azure Security Center allows for advanced security capabilities and monitoring of server VMs with Azure Defender for Servers. This is my “if you only do one thing in this article, do this” recommendation. It’s needed for JIT access and also includes things like file integrity monitoring, adaptive network hardening and fileless attack detection. 


 


Azure Policy – Other things can fall under the security umbrella, like staying compliant with the Payment Card Industry’s Data Security Standard (PCI DSS), or ensuring that cloud resources can only be created in an approved list of countries (with corresponding Azure regions) for your organization. Investigate how Azure Policy can help enforce these requirements when a new VM is created, or alert you if an existing VM has its configuration changed so that it’s now non-compliant. 


 


 


Conclusion
While it’s easy to imagine a security scenario of an open application database or a hacking attempt to exploit application code, there are a significant number of security aspects to running a Windows Server VM in the cloud too. Start with this list and you’re going in the right direction to make your cloud servers as secure as possible, aligned with the specific requirements for your organization.


 


 


 

Using outputs() function and JSON Parse to read data from missing dynamic value in a Flow | Power Automate

I faced this issue lately and I’m not sure if it’s a bug or something I might be missing. Either way, the fields I needed didn’t show up in Dynamic Content in the Flow, so I was not able to pick them for use further in the Flow.

I’m not sure how many of you have faced this, since most fields you need are usually available in a Flow’s Dynamic Content.

Scenario – Adaptive Cards for Teams issue

I had one scenario in particular where the Adaptive Card I created for a Microsoft Teams user sends back a response, but the Dynamic Content doesn’t appear in the steps after the Card step.

  1. See below that I’ve declared a variable just to show the Dynamic Content that should appear after the Adaptive Card.
  2. If I press the Dynamic Content as shown above in Step #1 and minimize all the content, I don’t see the Teams Dynamic Content variables at all.
  3. And yet the Outputs that the Adaptive Card step did produce (visible in the Flow’s run history) were not surfaced as Dynamic Content.

    That’s when we should use the outputs() function to read this data.

outputs() function

Here’s how you can use the Parse JSON action and the outputs() function to read the Outputs of the step you want, so that the values can then be picked as variables/dynamic values in the steps that follow –

  1. Take the Parse JSON action from Data Operations in a Flow.
  2. In its inputs, you can use an expression on the Content field.
  3. And write the outputs() function as shown below –

    And then complete the function as below:

    Explanation:
    MyCard is the name of the step of the Adaptive Card I used. If the name of your step has spaces, like “My User Adaptive Card”, then the function will look like outputs('My_User_Adaptive_Card')['body']

    body is used because, as you can see in the Outputs in the Scenario section above, all results are sent in the body field of the Outputs.

  4. Now, since you don’t know the schema yet, just put “{}” as the schema so that you can save the step. (This is required.)
  5. Run the Flow once and collect the Outputs from this ‘Parse JSON 2’ step in the run history.
    Copy the Outputs.
  6. Now, open the same Parse JSON 2 step which you created and click on Generate from sample.
  7. Paste the copied Outputs in the box as the sample payload.
  8. Once done, the schema will be generated for you.
  9. Now, these parsed outputs, which hold the data from the step that didn’t yield Dynamic Content, can be used further in the Flow.
    For example, I’ll create a variable to show the Dynamic Content that now pops up.
  10. It’ll show all the fields from the Card in the Parse JSON 2 outputs.

    And that solves the problem!!

    The original Microsoft documentation on this is: https://docs.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference#outputs?WT.mc_id=DX-MVP-5003911

Hope this was helpful.

Here is some more Power Automate / Adaptive Card content you might want to look at –

  1. Adaptive Cards for Outlook Actionable Messages using Power Automate | Power Platform
  2. Make On-Demand Flow to show up in Dynamics 365 | Power Automate
  3. Save Adaptive Cards work using VS Code Extension – Adaptive Cards Studio | Quick Tip
  4. Adaptive Cards for Teams to collect data from users using Power Automate | SharePoint Lists
  5. Task Completion reminder using Flow Bot in Microsoft Teams | Power Automate
  6. Run As context in CDS (Current Environment) Flow Trigger | Power Automate
  7. Using triggerBody() / triggerOutput() to read CDS trigger metadata attributes in a Flow | Power Automate
  8. Terminate a Flow with Failed/Cancelled status | Power Automate

  9. Pause a Flow using Delay and Delay Until | Power Automate

Thank you!!

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types – 11/28 – Investigating

This article is contributed. See the original author and article here.

Initial Update: Saturday, 28 November 2020 05:02 UTC

We are aware of issues within Application Insights and are actively investigating. Due to a power outage in a data center, some customers may experience delayed or missed Log Search Alerts, latency, and data loss in the South Africa North region.
  • Workaround: none
  • Next Update: Before 11/28 17:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Vyom