Data Protection for SAP Solutions



Introduction


Data protection is a key criterion for all customers. You need to find an optimal way to protect against data loss or data inconsistencies caused by hardware or software defects, accidental deletion of data, and external or internal data fraud.


Other important criteria are the high availability and disaster recovery architecture and whether it fulfills the RPO requirements in a typical HA case (usually RPO=0) or in a disaster recovery case (usually RPO!=0).


RalfKlahr_1-1696238408273.png


 


How soon does the system need to be back in “normal” operation after an HA or DR situation?


Recovery times can vary widely depending on how the data is recovered. For example, they can be short if you can use snapshots or a clone created from a snapshot, or it can take hours to bring the data back to the file system (streaming backup/recovery) before the database recovery process can even start.


The main question is “what is your requirement?”


What is nice to have and what is really required in cases of high availability and disaster recovery?


 


Backup Runtime with different HANA Database Sizes


Database size on file system


Backup throughput: 250MB/s


RalfKlahr_2-1696238463935.png


For very large databases the backup process will take many hours if you are using streaming-based backup. With snapshot-based backups it can take only a minute, regardless of the size of the database. Remember that a snapshot, at least with Azure NetApp Files, remains in the same volume as your data. Therefore, consider offloading (at least) one snapshot a day, e.g. using ANF backup to an ANF backup vault.
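
If you want a rough feeling for the numbers behind the chart above, the streaming backup runtime is essentially the database size divided by the backup throughput. Below is a minimal Python sketch of that calculation, assuming the 250MB/s throughput stated above; the database sizes are illustrative.

# Rough streaming-backup runtime estimate (illustrative sizes, 250 MB/s throughput).
def backup_hours(db_size_tb, throughput_mb_s=250.0):
    size_mb = db_size_tb * 1024 * 1024       # TB -> MB
    return size_mb / throughput_mb_s / 3600  # seconds -> hours

for size_tb in (1, 4, 12, 24):
    print(f"{size_tb:>2} TB -> ~{backup_hours(size_tb):.1f} h")
# 4 TB at 250 MB/s is roughly 4.7 hours of streaming backup; a snapshot of the
# same volume completes in about a minute, independent of size.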


SAP HANA on Azure NetApp Files – Data protection with BlueXP backup and recovery (microsoft.com)


 


Restore and recovery times of a 4TB HANA database


RalfKlahr_3-1696238554237.png



  • Database size: 4TB on file system

  • Restore throughput: 250MB/s

  • Log backups: 50% of db size per day

  • Read throughput during db start: 1000MB/s

  • Throughput during recovery: 250MB/s
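
Using the assumptions listed above, the individual RTO components can be estimated as in the following Python sketch. The figures are illustrative estimates derived from the stated throughput values, not measurements.

# Rough RTO breakdown for a 4 TB HANA database (illustrative, based on the list above).
DB_SIZE_TB      = 4
RESTORE_MB_S    = 250    # streaming restore throughput
LOG_PER_DAY     = 0.5    # log backups: 50% of DB size per day
START_READ_MB_S = 1000   # read throughput during DB start
RECOVER_MB_S    = 250    # throughput during log recovery

size_mb    = DB_SIZE_TB * 1024 * 1024
restore_h  = size_mb / RESTORE_MB_S / 3600                # copy data back to the file system
start_h    = size_mb / START_READ_MB_S / 3600             # load data during DB start
recovery_h = size_mb * LOG_PER_DAY / RECOVER_MB_S / 3600  # replay roughly one day of log backups

print(f"restore  ~{restore_h:.1f} h")   # ~4.7 h
print(f"start    ~{start_h:.1f} h")     # ~1.2 h
print(f"recovery ~{recovery_h:.1f} h")  # ~2.3 h
print(f"total    ~{restore_h + start_h + recovery_h:.1f} h")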


Conclusion:


For smaller databases it can be absolutely sufficient to use streaming backups to fulfil your requirements. For larger or very large databases, reaching low RTO times with streaming backups can be difficult, since it can take hours to restore the data to the original location, which can enlarge the RTO significantly. Specifically for the high availability case, we recommend using HSR (HANA System Replication) to reach an acceptable RTO. But even then the failing system may need to be rebuilt or recovered, which might take many hours. To reduce the time for a complete system rebuild, customers use snapshot-based backup/restore scenarios to lower the RTO significantly.


 


Azure Backup (Streaming Backup)


Azure Backup delivers these key benefits:



  • Offload on-premises backup – Azure Backup offers a simple solution for backing up your on-premises resources to the cloud. Get short and long-term backup without the need to deploy complex on-premises backup solutions.

  • Back up Azure IaaS VMs – Azure Backup provides independent and isolated backups to guard against accidental destruction of original data. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scalability are simple, backups are optimized, and you can easily restore as needed.

  • Scale easily – Azure Backup uses the underlying power and unlimited scale of the Azure cloud to deliver high-availability with no maintenance or monitoring overhead.

  • Get unlimited data transfer – Azure Backup doesn’t limit the amount of inbound or outbound data you transfer, or charge for the data that’s transferred. Outbound data refers to data transferred from a Recovery Services vault during a restore operation. If you perform an offline initial backup using the Azure Import/Export service to import large amounts of data, there’s a cost associated with inbound data. Learn more.

  • Keep data secure – Azure Backup provides solutions for securing data in transit and at rest.

  • Centralized monitoring and management – Azure Backup provides built-in monitoring and alerting capabilities in a Recovery Services vault. These capabilities are available without any additional management infrastructure. You can also increase the scale of your monitoring and reporting by using Azure Monitor.

  • Get app-consistent backups – An application-consistent backup means a recovery point has all required data to restore the backup copy. Azure Backup provides application-consistent backups, which ensure additional fixes aren’t required to restore the data. Restoring application-consistent data reduces the restoration time, allowing you to quickly return to a running state.

  • Retain short and long-term data – You can use Recovery Services vaults for short-term and long-term data retention.

  • Automatic storage management – Hybrid environments often require heterogeneous storage – some on-premises and some in the cloud. With Azure Backup, there’s no cost for using on-premises storage devices. Azure Backup automatically allocates and manages backup storage, and it uses a pay-as-you-use model. So, you only pay for the storage you consume. Learn more about pricing.

  • Multiple storage options – Azure Backup offers three types of replication to keep your storage/data highly available.

    • Locally redundant storage (LRS) replicates your data three times (it creates three copies of your data) in a storage scale unit in a datacenter. All copies of the data exist within the same region. LRS is a low-cost option for protecting your data from local hardware failures.

    • Geo-redundant storage (GRS) is the default and recommended replication option. GRS replicates your data to a secondary region (hundreds of miles away from the primary location of the source data). GRS costs more than LRS, but GRS provides a higher level of durability for your data, even if there’s a regional outage.

    • Zone-redundant storage (ZRS) replicates your data in availability zones, guaranteeing data residency and resiliency in the same region. ZRS has no downtime. So your critical workloads that require data residency, and must have no downtime, can be backed up in ZRS.




What is Azure Backup? – Azure Backup | Microsoft Learn


SAP HANA Backup support matrix – Azure Backup | Microsoft Learn


 


ANF – how does a Snapshot work


How Azure NetApp Files snapshots work | Microsoft Learn


What volume snapshots are


An Azure NetApp Files snapshot is a point-in-time file system (volume) image. It is ideal to serve as an online backup. You can use a snapshot to create a new volume (clone), restore a file, or revert a volume. For specific application data stored on Azure NetApp Files volumes, extra steps might be required to ensure application consistency.


Low-overhead snapshots are made possible by the unique features of the underlying volume virtualization technology that is part of Azure NetApp Files. Like a database, this layer uses pointers to the actual data blocks on disk. But, unlike a database, it doesn’t rewrite existing blocks; it writes updated data to new blocks and changes the pointers, thus maintaining the new and the old data. An Azure NetApp Files snapshot simply manipulates block pointers, creating a “frozen”, read-only view of a volume that lets applications access older versions of files and directory hierarchies without special programming. Actual data blocks aren’t copied. As such, snapshots are efficient in the time needed to create them; they are near-instantaneous, regardless of volume size. Snapshots are also efficient in storage space; only delta blocks between snapshots and the active volume are kept.
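
The following small Python sketch illustrates the pointer idea conceptually. It is only a model to make the mechanism tangible, not how Azure NetApp Files is implemented internally: data lives in blocks, the active file system and each snapshot are just maps of pointers, and taking a snapshot copies pointers rather than data.

# Conceptual model of pointer-based snapshots (illustration only, not ANF internals).
blocks = {}      # block_id -> data (the only place data is stored)
active = {}      # file name -> list of block ids (the "active" file system)
snapshots = {}   # snapshot name -> frozen copy of the pointer map

def write(name, data, block_id):
    blocks[block_id] = data        # changed data always goes to a new block
    active[name] = [block_id]

def take_snapshot(snap_name):
    # Near-instantaneous: only pointers are copied, never the data blocks.
    snapshots[snap_name] = {f: ids[:] for f, ids in active.items()}

write("file1", "v1", "b1")
take_snapshot("Snapshot1")
write("file1", "v2", "b2")         # the old block b1 is kept for the snapshot

print(active["file1"])                  # ['b2'] -> current version
print(snapshots["Snapshot1"]["file1"])  # ['b1'] -> older version still readable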


 


Files consist of metadata and data blocks written to a volume. In this illustration, there are three files, each consisting of three blocks: file 1, file 2, and file 3.


 


RalfKlahr_4-1696238658628.png


A snapshot Snapshot1 is taken, which copies the metadata and only the pointers to the blocks that represent the files:


RalfKlahr_5-1696238699901.png


Files on the volume continue to change, and new files are added. Modified data blocks are written as new data blocks on the volume. The blocks that were previously captured in Snapshot1 remain unchanged:


RalfKlahr_6-1696238750609.png


A new snapshot Snapshot2 is taken to capture the changes and additions:


RalfKlahr_7-1696238793205.png


 


ANF Backup (SnapShot – SnapVault based)


Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing a fully managed backup solution for long-term recovery, archive, and compliance. Backups created by the service are stored in Azure storage, independent of volume snapshots that are available for near-term recovery or cloning. Backups taken by the service can be restored to new Azure NetApp Files volumes within the same Azure region. Azure NetApp Files backup supports both policy-based (scheduled) backups and manual (on-demand) backups. For additional information, see https://learn.microsoft.com/en-us/azure/azure-netapp-files/snapshots-introduction


To start with please read: Understand Azure NetApp Files backup | Microsoft Learn


ANF Resource limits: Resource limits for Azure NetApp Files | Microsoft Learn


 


Design


The four big benefits of ANF backup are:



  • Inline compression when taking a backup.

  • De-Duplication – this will reduce the amount of storage needed in the blob space. Be aware that the Transparent Data Encryption functionality offered by the different DBMSs prevents efficiency gains from de-duplication.

  • Block-level delta copy of the blocks – this reduces the time and the space needed for each backup.

  • The database server is not impacted when taking the backup. All traffic will go directly from the storage to the blob space using the Microsoft backbone and NOT the client network. The backup will also NOT impact the storage volume quota. The database server will have the full bandwidth available for normal operation.


RalfKlahr_8-1696238840726.png


 


How this all works


We are going to split the backup features into two parts. The data volume is snapshotted with azacsnap. When creating this snapshot, it is important that the data volume is in a consistent state before the snapshot is triggered. Creating the application consistency is managed with azacsnap in the case of e.g. SAP HANA, Oracle (with Oracle Linux), and Db2 (Linux only).


The SAP HANA log backup area is an “offline” volume and can be backed up anytime without talking to the database. We also need a much higher backup frequency here than for the data volume to reduce the RPO. The database can be “rolled forward” from any data snapshot if you have all the logs created after that data volume snapshot. Therefore, how often we back up the log backup folder is very important for reducing the RPO. For the log backup volume we do not need a snapshot at all because, as mentioned, all the files there are offline files.
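
Because the achievable RPO is driven by how often log backups are written and how often the log backup volume itself is protected, a simple worst-case estimate can be written down as in the following sketch. The intervals are assumptions for illustration, not recommendations.

# Back-of-the-envelope worst-case RPO estimate (illustrative intervals).
hana_log_backup_interval_min       = 15  # how often SAP HANA writes log backups
log_volume_protection_interval_min = 10  # how often the log backup volume is backed up/replicated

# Worst case: a failure right before the next protection run loses everything
# written since the last log backup that was already offloaded.
worst_case_rpo_min = hana_log_backup_interval_min + log_volume_protection_interval_min
print(f"worst-case RPO ~ {worst_case_rpo_min} minutes")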


RalfKlahr_9-1696238879049.png


This displays the “one AV zone scenario”. It is also possible to use ANF backup in a peered region (DR), but then the restore process is different (described later in this document).


 


ANF Backup using a DR Region


It is also an option to leverage ANF backup from a DR Azure region. In this scenario the backups are created from the ANF DR volumes. In our example, we are using both: CRR (Cross Region Replication) to a region that ANF can replicate to, and ANF backup to store the backups for many days, weeks, or even months.


For a recovery you will primarily use the snapshots in the production ANF volume. If you have lost the primary zone or ANF, you might fail over to an HA system before you even need to recover the DB. If you don’t have an HA system, you still have a copy of the data in your DR region. In the DR region, you can simply activate the volumes or create a clone out of the volumes. Both are very fast methods to get your data back. You would then need to recover the database using the clone or the DR volume. In most cases you will lose some data, because in the DR region there is usually a gap in the available log backups.


 

RalfKlahr_1-1696240337417.png


 


ANF Volume Lock


One other data protection method is to lock the ANF volume from deletion.


When you create a lock, you protect the ANF volume from accidental deletion.


RalfKlahr_11-1696238982965.png


If you or someone else tries to delete the ANF volume, or the resource group the ANF volume belongs to, Azure will return an error.


RalfKlahr_12-1696239016786.png


This results in:


RalfKlahr_13-1696239048353.png


However, there is a limitation to consider. If you set a lock on an ANF volume that blocks deletion of the volume, you also can’t delete any snapshots created of this volume. This presents a limitation when you work with consistent backups using AzAcSnap, as it will not be able to delete any snapshots of a volume where the lock is configured. The consequence is that the retention management of azacsnap or BlueXP can no longer delete the snapshots that are out of the retention period.


But for the period when you start with your SAP deployment in Azure, this can be a workable way to protect your volumes from accidental deletion.


 


Repair system 


There are many reasons why you might find yourself in a situation where you need to repair a HANA database to a specific point in time. The most common are:



  • Accidental deletion of data within a table or deletion of a complete table during administration or operations causing a logical inconsistency in the database.

  • Issues in the hardware or software stack causing corruption of page/block content in the database.


In both of these cases it might take hours, days, or even weeks until the impacted data is accessed the next time. The more time passes between the introduction of such an inconsistency and the repair, the more difficult the root cause analysis and correction becomes. Especially in cases of logical inconsistencies, an HA system will not help, since the logical inconsistency caused by a ‘delete’ command is “transferred” to the database of the HA system through HANA System Replication as well.


The most common method of solving these logical inconsistency problems is to “quickly” build a so-called repair system to extract the deleted and now “missing” data.


 


To detect physical inconsistencies, executing regular consistency checks is highly recommended so that problems are detected as early as possible.


 


For SAP HANA, the following main consistency checks exist:


  • CHECK_CATALOG (Metadata): Procedure to check SAP HANA metadata for consistency.

  • CHECK_TABLE_CONSISTENCY (Column store, Row store): Procedure to check tables in the column store and row store for consistency.

  • Backup (Persistence): During (non-snapshot) backups the physical page structure (e.g. checksum) is checked.

  • hdbpersdiag (Persistence): Starting with SAP HANA 2.0 SPS 05, the hdbpersdiag tool is officially supported to check the consistency of the persistence level. See Persistence Consistency Check for more information.


2116157 – FAQ: SAP HANA Consistency Checks and Corruptions – SAP for Me


Persistence Check Tool


SAP Note 1977584 provides details about these consistency check tools. See also the SAP HANA Administration Guide for related information.
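
As an illustration, a table-level check such as CHECK_TABLE_CONSISTENCY can also be run from a small script. The sketch below assumes SAP’s hdbcli Python driver; host, port, credentials, schema and table name are placeholders, and the call parameters follow SAP Note 2116157.

# Sketch: run a consistency check for one table via hdbcli (placeholders throughout).
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015, user="SYSTEM", password="***")
cur = conn.cursor()

# Checking a single table keeps the runtime short; passing NULL for schema and
# table checks everything, which can run for a long time on large systems.
cur.execute("CALL CHECK_TABLE_CONSISTENCY('CHECK', ?, ?)", ("SAPABAP1", "MARA"))
for row in cur.fetchall():
    print(row)   # any returned rows describe detected inconsistencies

cur.close()
conn.close()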


To create a “repair system” we can select an older snapshot, which was created with e.g. azacsnap, and recover the database to a point where we assume the deleted table was still available. Then export the table and import it into the original PRD database. Of course,
we recommend that SAP support personnel guide you through this recovery process and potential additional repairs in the database.


The process of creating a ‘repair system’ can look like the following graphic:


RalfKlahr_14-1696239099129.png


 


 

Unlocking industrial data for AI: Microsoft partners with leading connectivity providers


Microsoft is partnering with leading industrial connectivity partners Advantech, PTC, and Softing to unlock industrial data for AI and accelerate digital transformation for industrial customers leveraging Azure IoT Operations through Azure’s adaptive cloud approach.


 


We are committed to empowering customers to achieve more with their data and unlocking new insights and opportunities across the industrial ecosystem. This includes overcoming the challenges of proprietary interfaces and data models from an array of industrial assets on the factory floor. We believe that the key to addressing those challenges is enabling data to flow consistently and securely to the people and places where it’s needed to drive collaboration and better decision-making, leveraging open standards like OPC UA and MQTT. This is why we are working closely with our connectivity partners, who play a vital role in bridging the gap between legacy or proprietary assets and our standardized edge interfaces. They provide data translation and normalization to open, standardized data models across heterogeneous environments.


 


The adaptive cloud approach brings just enough Azure and its capabilities to any environment, from the factory floor to both 1st party and 3rd party cloud infrastructure, using Kubernetes and other open technologies. We enable interoperability and integration across diverse edge devices and applications, providing a single control and management plane using Azure IoT Operations (Preview), enabled by Azure Arc. We aim to unify siloed teams, distributed sites, and sprawling systems and to provide our customers with an open, interoperable, and secure Industrial IoT platform that can scale to meet their current and future needs quickly.


 


We are leveraging leading solutions from our connectivity partners Advantech, PTC, and Softing to achieve the necessary frictionless data flow that our customers need. Each connectivity partner is integrated with Azure IoT Operations to enable data interoperability and management across the industrial edge and cloud.


 


 


Skip_C_0-1713471963962.png


Advantech is a leader in Industrial IoT, providing comprehensive system integration, hardware, software, and customer-centric design services. Their iFactory solution offers device-to-cloud visibility and a strong hardware-led connectivity story. Advantech has integrated their iFactory solution with Azure IoT Operations, enabling data flow from their edge devices and applications to Azure. They are also exploring building an Akri connector for LoRaWAN, enabling integration with Azure Resource Manager and benefiting from Azure’s security, monitoring, and management features.


 


“Azure IoT Operations offers a highly flexible approach to swiftly onboard IoT assets within the same network hierarchy. Data from devices can be easily captured using Akri discovery plugins and visualized in Grafana for user consumption. With the Azure AIO solution stack, our customers can seamlessly transition to a digital operational environment with success.”


– Ihen Tsai, Product Manager of WISE-iFactory, Advantech



Skip_C_1-1713471964034.png


 


 


 


 


Skip_C_2-1713471964035.png


PTC Kepware is a premier provider of industrial connectivity software, and their solutions access data from virtually any device – legacy or modern – and seamlessly and securely move the data to other OT and IT software applications. Their flagship product, Kepware+, enables secure and reliable data transfer between industrial assets and Azure IoT Operations, leveraging MQTT and OPC UA. Customers can ingest, process, and publish OPC UA data to services such as Azure Data Explorer or Microsoft Fabric, and can leverage Microsoft’s AI capabilities.


 


“PTC is proud of our long collaboration with Microsoft to accelerate digital transformation for industrial companies with our portfolio of manufacturing solutions. The announcement of Azure IoT Operations marks a significant milestone in empowering companies to leverage data for innovation and heightened efficiency. Together, PTC Kepware+ and Azure IoT Operations seamlessly and securely integrate to access, normalize, and process asset data at the edge and derive insights in the cloud.”


– Ted Kerkam, Senior Director of Product Strategy, PTC Kepware


 


Skip_C_3-1713471964050.png


 


 


 


 


Skip_C_4-1713471964052.png


Softing is a leading provider of industrial connectivity. Their edgeConnector, edgeAggregator and dataFEED OPC Suite family of products offer access to process and machine data in PLCs from various vendors such as Siemens, Rockwell, Schneider, Beckhoff, Mitsubishi, Fanuc, Omron, and more. Softing has integrated their connectivity product portfolio with Azure IoT Operations, enabling data flow from OT assets via open standards OPC UA and MQTT. This allows customers to send their asset data to services such as Azure Data Explorer or Microsoft Fabric, and they can leverage Microsoft’s AI capabilities.


 


“Our customers require standards-based and scalable machine connectivity for their Industrial IoT solutions. Microsoft’s adaptive cloud approach supports Kubernetes, MQTT and OPC UA at the edge level, so we can offer a seamless integration of our dataFEED products into the Azure platform, meeting our customers’ critical requirements regarding connectivity and efficient operation.”


– Thomas Hilz, Managing Director at Softing Industrial Automation GmbH


 


 


Skip_C_5-1713471964062.png


 


 


Engage with Microsoft on our adaptive cloud approach and data connectivity


We believe when working together across a robust partner ecosystem, Microsoft can deliver the best possible solutions to our customers and help them realize the full potential of their data across the industrial edge and cloud. We are also committed to supporting open standards and protocols and providing a single management and control plane, to enable a seamless and secure data flow from assets to the cloud.


 


To learn more, visit the Microsoft booth at Hannover Messe (Hall 17, Stand G06) from April 22-26. We invite you to come see us and our partners; we will be showcasing our connectivity partner solutions. You can learn more about our adaptive cloud approach and discuss your Industrial IoT opportunities with our experts.


 


We hope to see you there!

Announcing the Top Three Teams of the 2024 Imagine Cup!


Today marks a pivotal moment in the 2024 Imagine Cup as we reveal the top three teams selected to progress from the semifinals to the highly anticipated Imagine Cup World Championship, live at Microsoft Build!  


MaddyEpstein_0-1713455779299.png


 


The Imagine Cup, the premier student technology startup competition, has attracted thousands of visionary student entrepreneurs worldwide. Each team has developed an AI-driven solution to tackle pressing challenges including accessibility, sustainability, productivity, and healthcare.


This year’s semifinalists have demonstrated exceptional innovation with Azure AI services and OpenAI, showcasing their innovation, grit, and ability to make a positive impact through entrepreneurship. Congratulations to all the semifinalists for their remarkable achievements!


However, only three teams have been selected to progress to the World Championship, where they will be live on the global stage as they vie for the Imagine Cup Trophy, USD100,000, and a mentorship session with Microsoft Chairman and CEO, Satya Nadella! You can watch these startups live at Microsoft Build on May 21 to see who wins.


Drumroll, please, as we unveil FROM YOUR EYES, JRE, and PlanRoadmap! These startups represent the pinnacle of creativity and resilience, embodying the spirit of innovation that defines Imagine Cup.


Meet the Teams! Listed in alphabetical order.















FROM YOUR EYES


Turkey


About: Using Azure Computer Vision and Text Translator, FROM YOUR EYES has built a mobile application that offers both fast and qualified visual explanations to visually impaired users.


 


MaddyEpstein_1-1713455816774.png

 


In their own words…


 


Who/what inspires you? “After being selected as one of Microsoft’s leading women in technology in 2020, I was invited to join the experience team of Microsoft’s Seeing AI program. It was there that I took on responsibilities and crossed paths with visually impaired developers worldwide who held significant roles. They encouraged me to delve into coding. In addition, Onur Koç, Microsoft Turkey’s CTO, also greatly inspired us, he addressed all student ambassadors saying, ‘Software is magic. You can change the life of someone you’ve never met on the other side of the world.’ We were deeply moved by this, and with this motivation, we worked to reach people…with our developed technology, and we succeeded.”

How do you want to make an impact with AI? “The issue of blindness directly affects 330 million people worldwide and indirectly impacts over a billion individuals. For a visually impaired person, using image processing solutions means freedom. With the technology we have developed, our goal is to enable visually impaired individuals to live freely, remove barriers to their dreams, and solve the problem of blindness through technology. This competition will provide us with the opportunity to promote our technology to millions of visually impaired individuals worldwide. They do not have time to waste. We also want to quickly deliver our technology to those in need.



JRE


United Kingdom


About: Using Azure Machine Learning, Microsoft Fabric, and Copilot, JRE has built a slag detection system used in the continuous casting process of steel. Accurately detecting slag optimizes yield while improving quality.


MaddyEpstein_2-1713455816787.png

 


 


In their own words…


Who/what inspires you? Jorge: “I learned how to code out of necessity. Even though I took courses as an undergrad, I never really liked the type of projects we did because they were primarily simulations about atomic interactions and molecular optimizations. I found these problems beautiful but very abstract.  After college, many people wanted to create businesses around apps, and I learned how to code front and back-end applications to sell these apps. Later on, when I started working in the steel industry, I was frustrated by the lack of automation and unsafe and repetitive processes, so I started creating more complex integrated systems in this space.” 

How do you want to make an impact with AI? “Our aim is to redefine manufacturing for the 21st century—making it smarter, more efficient, and sustainable. The Imagine Cup represents a unique opportunity to showcase our solution to a global audience, garnering support and resources necessary to scale our impact. We’re driven by the challenge of solving real-world problems and believe that through this competition, we can take a significant step towards achieving our vision.”


 



PlanRoadmap


United States


About: Using Azure OpenAI Service, PlanRoadmap has built an AI-powered productivity coach to help people with ADHD who are struggling with task paralysis get their tasks done. Their coach asks questions to identify the user’s obstacles, suggests strategies, and teaches the user about their work style.


MaddyEpstein_3-1713455816803.png

 


In their own words…


Who/what inspires you? Aaliya: “One of my biggest inspirations has been my father… as he helped guide my direction within computer science. He has always been an advocate for women in STEM, and at a young age, that was incredibly powerful to be supported on. It enabled me to overcome feelings of imposter syndrome and have confidence in myself. He has always painted a vision of who I could be before I really believed in myself, and he inspires me to be dedicated, passionate, and ambitious.”


 


Clay: “At a young age, I was diagnosed with dysgraphia, a condition that impairs writing ability and fine motor skills. Even if not explicitly stated, when everything in school is handwriting, you are at a pretty severe disadvantage when you struggle to even write a few sentences.”


 


Ever: “Some of my biggest inspiration in pursuing computer science and engineering has been from cinema. I didn’t really have many people in my life who were in the tech field growing up, so I got a lot of inspiration from seeing tech in movies. In cinema you can see tech exactly as the artist imagined it, without the restrictions of the real world.”  

How do you want to make an impact with AI?  Clay: “As I became increasingly proficient in programming, I realized that not only did I want to do something big, but that I had the potential to make it happen. We are unified under the mission to help people with ADHD achieve their dreams. Our customer discovery efforts have revealed that despite significant increases in technological tools, there are still millions of people facing barriers caused by their ADHD symptoms and related executive function deficits. We want to change that. The mentorship from Microsoft will help us with the technical innovation and provide that frictionless experience to provide a novel approach towards supporting neurodivergent people.”



 


Up Next…


 


These top three teams will be live on the global stage at Microsoft Build on May 21 for the Imagine Cup World Championship, showcasing the depth and promise of their startups. Follow the journey on Instagram and X to stay up to date with all the competition action – and join us live to find out who is crowned champion!  

MaddyEpstein_4-1713455901044.png

Azure AI Translator announces new features as a container offering


Seattle—April 17, 2024—Today, we are pleased to announce the release of document translation (preview) and transliteration features for Azure AI Translator containers. All Translator container customers will get these new features automatically as part of the update.


 


Translator containers provide users with the capability to host the Azure AI Translator API on their own infrastructure and include all libraries, tools, and dependencies needed to run the service in any private, public, or personal computing environment. They are isolated, lightweight, portable, and are great for implementing specific security or data governance requirements.


 


As of today’s release, the following operations are now supported when using Azure AI Translator containers:



  • Text translation: Translate text phrases between supported source and target language(s) in real-time (a sample request appears after this list).

  • Text transliteration: Converts text in a language from one script to another script in real-time. E.g. converting Russian language text written in Cyrillic script to Latin script.

  • Document translation (Preview): Translate a document between supported source and target language while preserving the original document’s content structure and format.
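
As an illustration of the text translation operation, the request below targets a locally hosted Translator container. The host and port (localhost:5000) and the chosen languages are assumptions for this example; use the endpoint your container actually exposes.

# Sample text translation request against a local Translator container (illustrative endpoint).
import requests

endpoint = "http://localhost:5000"   # host/port where your container is listening
params = {"api-version": "3.0", "from": "en", "to": ["de", "fr"]}
body = [{"Text": "Containers keep the data inside your own security boundary."}]

resp = requests.post(f"{endpoint}/translate", params=params, json=body)
resp.raise_for_status()
for item in resp.json():
    for translation in item["translations"]:
        print(translation["to"], "->", translation["text"])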


 


When to consider using Azure AI Translator containers?


You may want to consider Azure AI Translator containers in cases where:



  • there are strict data residency requirements to ensure that sensitive information remains within the company’s security boundary.

  • you reside in industries such as government, military, banking, and security enforcement where the ability to translate data without exposing it to external networks is a must.

  • you require the ability to maintain continuous translation capabilities while operating in disconnected environments or with limited internet access.

  • optimization, cost management, and flexibility to run on-premises with existing infrastructure is a priority.


 


Getting started with Translator container.


Translator containers are a gated offering. You need to request container access and get approved. Refer to the prerequisites for a more detailed breakdown.


 


How do I get charged?


The document translation and transliteration features are charged at their own rates, similar to the cloud offering.


 


Connected container: You’re billed monthly at the pricing tier of the Azure AI Translator resource, based on the usage and consumption. Below is an example of document translation billing metadata transmitted by Translator connected container to Azure for billing.


 

{
    "apiType": "texttranslation",
    "id": "f78748d7-b3a4-4aef-8f29-ddb394832219",
    "containerType": "texttranslation",
    "containerVersion": "1.0.0+2d844d094c930dc12326331b3e49515afa3635cb",
    "containerId": "4e2948413cff",
    "meter": {
        "name": "CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters",
        "quantity": 27.0
    },
    "requestTime": 638470710053653614,
    "customerId": "c2ab4101985142b284217b86848ff5db"
}

 


 


Disconnected container: As shown in the below usage records example, the aggregated value of ‘Billed Unit’ corresponding to the meters ‘One Document Translated Characters’ and ‘Translated Characters’ is counted towards the characters you licensed for your disconnected container usage.


 

{
    "type": "CommerceUsageResponse",
    "meters": [
        {
            "name": "CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters",
            "quantity": 1250000,
            "billedUnit": 1875000
        },
        {
            "name": "CognitiveServices.TextTranslation.Container.TranslatedCharacters",
            "quantity": 1250000,
            "billedUnit": 1250000
        }
    ],
    "apiType": "texttranslation",
    "serviceName": "texttranslation"
}
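
To make the counting concrete, the small sketch below aggregates the usage record above: the characters counted against a disconnected-container license are the sum of the billedUnit values across the two meters.

# Aggregate the billedUnit values from the example usage record above.
usage_meters = [
    {"name": "CognitiveServices.TextTranslation.Container.OneDocumentTranslatedCharacters",
     "quantity": 1250000, "billedUnit": 1875000},
    {"name": "CognitiveServices.TextTranslation.Container.TranslatedCharacters",
     "quantity": 1250000, "billedUnit": 1250000},
]
licensed_characters_used = sum(m["billedUnit"] for m in usage_meters)
print(licensed_characters_used)   # 3125000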

 


References


Linux and Open Source on Azure Quarterly Update – April 2024


The beginning of 2024 was filled with exciting new announcements and events all over the world. Coming up, we’ll be at Open Source Summit North America (April 16-18), LinuxFest Northwest (April 26-28), and Red Hat Summit (May 6-9). If you’ll be at any of these events, we’d love to meet you!


 


What’s new with Linux on Azure


henryyan_0-1713282006539.png


 


Red Hat Enterprise Linux pricing update


Red Hat announced that it is updating its Red Hat Enterprise Linux (RHEL) royalty model to a scalable model, which is expected to affect all Red Hat resellers across the market. In response to Red Hat’s price changes, Azure will also be rolling out price changes for all Red Hat Enterprise Linux instances. These changes started occurring on April 1, 2024, with price decreases for Red Hat Enterprise Linux and RHEL for SAP Business Applications licenses for vCPU sizes less than 12. All other price updates will be effective July 1, 2024. Read the blog for more details.


 


Azure Red Hat OpenShift updates


Azure Red Hat OpenShift now offers Resource Health for clusters with integration with Azure Monitor via Signals and Alerts. Check out the following documentation to learn more: Azure Resources Health overview, Azure Red Hat OpenShift documentation, and Create Azure Monitor alert rules.


 


Azure Monitor VM Insights Dependency Agent support for RHEL 8.6 Linux VMs


Azure Monitor VM Insights now supports Dependency Agent for RHEL 8.6 Linux VMs, enabling you to monitor their network connections and processes in the Azure portal.


 


 


henryyan_1-1713282006541.jpeg


 


Azure IoT Edge supports Ubuntu Core Snaps


We announced that, in collaboration with Canonical, we addressed a longstanding request from our shared customers to support Ubuntu Core Snaps in Azure IoT Edge. The Tier 1 supported operating systems for Azure IoT Edge were expanded to include Ubuntu Core Snaps on AMD 64 and ARM 64. This expansion not only broadened the horizons for Azure IoT Edge applications, but also ensured seamless integration and development across a wider range of devices and systems. For more details, see the Azure IoT Edge documentation.


 


End of support for Ubuntu 20.04 LTS for Batch pools


Batch pools with Ubuntu 20.04 LTS VM images and the Batch node agent SKU batch.node.ubuntu 20.04 will no longer be supported in Batch after 23 April 2025. If you are impacted, learn what required action you need to take and how you can get help and support if needed.


 


New features for Azure Linux


We’ve recently released new features for Azure Linux, including OSsku in-place migration, which enables you to trigger a node image upgrade from one Linux distro to another on an existing nodepool. Read the blog to learn about the latest Azure Linux features, upcoming features, and ways to stay connected with the Azure Linux team.


 


What’s new with Azure and open source


New documentation for capturing real-time insights (in just one click!) from your AKS cluster using Inspektor Gadget


We recently published new documentation outlining common use cases of how you can troubleshoot and debug your AKS cluster using the CNCF sandbox project, Inspektor Gadget. The documentation features a one-click experience where you can deploy Inspektor Gadget on an AKS cluster and easily experiment with the gadgets detailed in the documentation.


 


Microsoft open sources Retina


We released Retina as an open-source repository that helps with DevOps and SecOps related networking cases for your Kubernetes clusters. Retina is a cloud-agnostic, open-source Kubernetes Network Observability platform which helps with DevOps, SecOps and compliance use cases. It provides a centralized hub for monitoring application and network health and security, catering to Cluster Network Administrators, Cluster Security Administrators and DevOps Engineers. To learn more, visit our Retina page and read the announcement blog.


henryyan_3-1713282006544.jpeg


 


Linux and open source events


henryyan_1-1713282859218.png


The last few months have been a busy time for events! Microsoft spoke at and attended events across the world, including FOSDEM, SCaLE 21x (Microsoft was a Gold sponsor), WASM I/O, Cloud Native Rejekts, and KubeCon Europe. Check out Brendan Burns’ blog to learn more about important enhancements and innovations in Azure, Azure Kubernetes Service (AKS), and our open-source projects. You’ll also find us at more events soon; we hope to see you there!


 


What’s coming up next


Open Source Summit North America (April 16-18)


Join Microsoft at Open Source Summit North America in Seattle! Come meet us at booth P6 to connect with Microsoft experts and see the latest open-source technologies in action. Also, be sure to check out all the exciting Microsoft sessions to learn more about Microsoft’s contributions to the open source community, best practices for using open source technologies, and insights into emerging trends in open source. Read this blog to learn more.


 


LinuxFest Northwest (April 26-28)


Attending LinuxFest Northwest? Be sure to check out the session from Sudhanva Huruli, Senior Product Manager at Microsoft, who will share learnings from releasing Azure Linux, Microsoft’s open source Linux distribution.


 


Red Hat Summit (May 6-9)


Microsoft will be a Platinum sponsor at Red Hat Summit, which is taking place in Denver, Colorado on May 6-9. Visit us at booth #202 to connect with experts and attend the Microsoft sessions to discover best practices for running Red Hat workloads on Azure.


 


Upcoming End of Life (EOL), End of Support (ES), and/or End of Maintenance (EOM)



  • CentOS Linux 7: CentOS 7 will reach EOL on June 30, 2024. Customers will need to migrate to a new operating system to continue receiving updates, security patches, and new features.  Read the documentation for CentOS migration options and paths in Azure.

  • RHEL 7: RHEL 7 will reach EOM on June 30, 2024. Customers will need to upgrade to a newer version of RHEL or purchase Extended Lifecycle Support (ELS) from Red Hat to continue to receive security updates and bug fixes. We recommend upgrading to the latest version of RHEL if possible to take full advantage of new features, ongoing support and more.

  • RHEL 6: RHEL 6 Extended Life Cycle Support (ELS) will end on June 30, 2024. Customers will need to migrate to a newer version of RHEL to take full advantage of new features, security enhancements, bug fixes, ongoing support and more.


 


Bonus content



  • Watch the Red Hat on Azure Microsoft Mechanics video to learn why Azure is the right place to run your Red Hat workloads.

  • Watch the Linux on Azure Mechanics video to discover why you should run your Linux workloads on Azure.

  • Are you currently using CentOS and looking to migrate to a new operating system on Azure? Watch the on-demand webinar on navigating the end of CentOS with Ubuntu on Azure.

  • Did you know you can use Inspektor Gadget on AKS clusters via VS Code? Check out the demo here.


If you have any feedback or questions, please drop them in the comments.