Extracting SAP data using OData – Part 2 – All About Parameters

This article is contributed. See the original author and article here.

Before implementing data extraction from SAP systems, please always verify your licensing agreement.

OData services have become one of the most powerful interfaces in SAP systems. In the last episode, we’ve built a simple pipeline that extracts business information from an OData service to a data lake and makes it available for further processing and analytics. We’ve created all required resources, including linked services and datasets, and we’ve used them to define the Copy Data activity. The extraction process ran without any issues, and we were able to display data from the lake.

But imagine you’d like to change the data source. Instead of Sales Orders, you’d like to get information about Business Partners. To make such a change, you’d have to go through all resources and modify them. You’d have to alter the URL of the OData service, the target location and the entity. Quite a few changes! Alternatively, you could create a new set of objects, including the Copy Data activity. Neither solution is ideal. As your project grows, maintaining a large set of resources can become a tremendous job. Not to mention the likelihood of making a mistake!

Fortunately, there is a solution! Synapse Pipelines are highly customizable, and we can use dynamic parameters. Instead of hardcoding the URL of the OData service, we can use a parameter and provide the value before the pipeline starts. You can also use the same approach to customize the target directory or entity name. Pretty much everything can be parametrized, and it’s only up to you how flexible the pipeline will be.

Today I’ll show you how to use parameters to customize some of the resources. It is the first step towards making the pipeline metadata-driven. In the next episode, we’ll expand the solution even further and describe how to read parameters from an external service. This way, you’ll be able to add or modify OData services without making any changes to the pipeline.

DEFINING PARAMETERS

There is a GitHub repository with source code for each episode. Learn more:

https://github.com/BJarkowski/synapse-pipelines-sap-odata-public

Parameters are external values that you use to replace hardcoded text. You can define them for every resource – a pipeline, a dataset or a linked service – they all accept external values. To assign parameters at runtime, you can use expressions, which I find very similar to Excel formulas. We will use this feature quite often in this blog series.
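
For example, expressions start with the @ sign and can call built-in functions, much like an Excel formula. A trivial, hypothetical example that prefixes a pipeline parameter with a directory name:

@concat('odata/', pipeline().parameters.ODataService)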

Let’s start by defining the initial set of parameters at the pipeline level. We will use them to set the URL, name and entity of the OData service. Open the pipeline we’ve built last time. At the bottom of the screen, you’ll notice four tabs. Click on the one named Parameters.

When you click the Parameters tab, the window enlarges, revealing the “New” button to define parameters. Add three entries:

Name          Type
URL           String
ODataService  String
Entity        String
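
For reference, pipeline parameters live in the pipeline’s JSON definition. Here is a minimal sketch of how the three entries above look there (the pipeline name is hypothetical):

{
    "name": "pl_odata_extraction",
    "properties": {
        "parameters": {
            "URL": { "type": "String" },
            "ODataService": { "type": "String" },
            "Entity": { "type": "String" }
        }
    }
}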

You can use values that are passed to the pipeline as parameters in datasets and linked services. I want all files with extracted data to be saved in the directory named after the OData service. The directory structure should look as follows:

/odata/<OData_Service>/<Entity>

For example:

/odata/API_SALES_ORDER_SRV/A_SalesOrder

That’s quite easy. You define the target file location in the data lake dataset, so the first step is to modify it to accept external parameters. Open the resource definition and go to the Parameters tab. Create a new entry:

Name  Type
Path  String

Now we need to define the target location using the parameter. Open the Connection tab. Replace the current value in the directory field with the following expression to reference the Path parameter. The value that we pass to this parameter will be used as the directory name.

@dataset().Path

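The dataset definition now carries the parameter and references it with an expression. If your target is, for example, a delimited text dataset on Azure Data Lake Storage Gen2, the relevant excerpt could look like this minimal sketch (the dataset, linked service and file system names are hypothetical):

{
    "name": "ds_adls_odata",
    "properties": {
        "linkedServiceName": {
            "referenceName": "ls_adls",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "Path": { "type": "String" }
        },
        "type": "DelimitedText",
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "odata",
                "folderPath": {
                    "value": "@dataset().Path",
                    "type": "Expression"
                }
            }
        }
    }
}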

The dataset knows what to do with the value passed to the Path parameter; now we have to provide one. Open the Copy Data activity and go to the Sink tab. You’ll notice an additional field under Dataset properties that wasn’t there before. The parameter defined at the dataset level is now waiting for a value.

As my directory hierarchy should be <ODataService>/<Entity>, I use the concat expression to combine two parameters defined at the pipeline level:

@concat(pipeline().parameters.ODataService, '/', pipeline().parameters.Entity)
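
Assuming ODataService is API_SALES_ORDER_SRV and Entity is A_SalesOrder, this expression evaluates to API_SALES_ORDER_SRV/A_SalesOrder, matching the directory layout planned above.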

The target dataset now accepts values passed from the pipeline. We can switch to the source dataset that points to the SAP system. As before, open the OData dataset and define two parameters. They will tell the pipeline which OData service and entity should be extracted.

Name      Type
ODataURL  String
Entity    String

The dataset can use the Entity parameter directly: it replaces the value in the Path field. The ODataURL parameter, however, has to be passed down to the underlying Linked Service. Provide the following expression in the Path field on the Connection tab:

@dataset().Entity

Adding parameters to the Linked Service is slightly more difficult, as it requires modifying the JSON definition of the resource. So far, we’ve only used the user interface. Choose Manage from the left menu and select Linked services. To edit the source code of the Linked Service, click the {} icon next to its name:

There are two changes to make. The first one is to define the parameter. Enter the following piece of code just under “annotations”:

"parameters": {
    "ODataURL": {
        "type": "String"
    }
},

The second change tells the Linked Service to substitute the URL of the OData service with the value of the ODataURL parameter. Change the value of the “url” property to the following expression:

"@{linkedService().ODataURL}"

For reference, here is the full definition of the Linked Service:

{
    "name": "ls_odata_sap",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "annotations": [],
        "parameters": {
            "ODataURL": {
                "type": "String"
            }
        },
        "type": "OData",
        "typeProperties": {
            "url": "@{linkedService().ODataURL}",
            "authenticationType": "Basic",
            "userName": "bjarkowski",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "s4hana"
            }
        },
        "connectVia": {
            "referenceName": "SH-IR",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Click Apply to save the settings. When you open the OData dataset, you’ll notice the ODataURL parameter waiting for a value. Reference the dataset parameter of the same name:

@dataset().ODataURL

The only thing left to do is to pass values to both dataset parameters. Open the Copy Data activity and go to the Source tab. There are two new fields that we can use to pass values to the dataset. To pass the address of the OData service, I concatenate the URL and ODataService parameters defined at the pipeline level. The Entity doesn’t require any transformation.

ODataURL: @concat(pipeline().parameters.URL, pipeline().parameters.ODataService)
Entity: @pipeline().parameters.Entity
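
For example, assuming the URL parameter holds https://s4hana.example.com:443/sap/opu/odata/sap/ (a hypothetical host) and ODataService holds API_SALES_ORDER_SRV, the ODataURL expression resolves to https://s4hana.example.com:443/sap/opu/odata/sap/API_SALES_ORDER_SRV. Note that concat does not insert any separator, so the URL value should end with a slash.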

Publish your changes. We’re ready for the test run! We’ve replaced three hardcoded values, and now we don’t have to modify the pipeline, or any of the resources, whenever we want to extract data from another OData service. It is a great improvement, as it makes the process more generic and easier to scale.

EXECUTION AND MONITORING

To verify the changes, run the pipeline twice, extracting data from two OData services. Previously, this would have required changes inside the pipeline. Now, whenever we start the extraction process, Synapse Studio asks us to provide the URL, OData service name and entity. For the first run, we don’t change anything and, as before, extract sales orders. For the second execution, use the API_BUSINESS_PARTNER OData service to get the full list of my customers.
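
For example, the two runs could use the following values (the host in the URL is hypothetical; A_BusinessPartner is the standard entity set of the API_BUSINESS_PARTNER service):

Run 1: URL = https://s4hana.example.com:443/sap/opu/odata/sap/, ODataService = API_SALES_ORDER_SRV, Entity = A_SalesOrder
Run 2: URL = https://s4hana.example.com:443/sap/opu/odata/sap/, ODataService = API_BUSINESS_PARTNER, Entity = A_BusinessPartner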

A moment of uncertainty. Have we made all the required changes?
No surprise this time: everything works as expected. We were able to extract data from both OData services. The target directory structure looks correct and, as planned, consists of the OData service and entity names.

The final test is to display extracted data.

Today you’ve learnt how to use parameters to avoid hardcoding values in the pipeline. We’ve used three parameters that allow us to customize the URL, the OData service name and the entity. We will build on top of this next week, making the pipeline even more agile by creating a metadata database that stores information about all the OData services to fetch.

Updated: APT Exploitation of ManageEngine ADSelfService Plus Vulnerability

This article is contributed. See the original author and article here.

The Federal Bureau of Investigation (FBI), CISA, and Coast Guard Cyber Command (CGCYBER) have updated the Joint Cybersecurity Advisory (CSA) published on September 16, 2021, which details the active exploitation of an authentication bypass vulnerability (CVE-2021-40539) in Zoho ManageEngine ADSelfService Plus—a self-service password management and single sign-on solution.

The update provides details on a suite of tools APT actors are using to enable this campaign: 

  • Dropper: a dropper trojan that drops Godzilla webshell on a system 
  • Godzilla: a Chinese language web shell 
  • NGLite: a backdoor trojan written in Go 
  • KdcSponge: a tool that targets undocumented APIs in Microsoft’s implementation of Kerberos for credential exfiltration  

Note: FBI, CISA, and CGCYBER cannot confirm that CVE-2021-40539 is the only vulnerability APT actors are leveraging as part of this activity, so it is key that network defenders focus on detecting the tools listed above in addition to the initial access vector.

CISA encourages organizations to review the November 19 update and apply the recommended mitigations. CISA also recommends reviewing the relevant blog posts from Palo Alto Networks, Microsoft, and IBM Security Intelligence.

NSA and CISA Release Guidance on Securing 5G Cloud Infrastructures

This article is contributed. See the original author and article here.

CISA has announced the joint National Security Agency (NSA) and CISA publication of the second of a four-part series, Security Guidance for 5G Cloud Infrastructures. Part II: Securely Isolate Network Resources examines threats to 5G container-centric or hybrid container/virtual network deployments, also known as Pods. The guidance covers several aspects of pod security, including limiting permissions on deployed containers, avoiding resource contention and denial-of-service attacks, and implementing real-time threat detection.

This series is being published under the Enduring Security Framework (ESF), a public-private cross-sector working group led by NSA and CISA.

CISA encourages 5G providers, integrators, and network operators to review the guidance and consider the recommendations.

Azure App Service Automatic Scaling

This article is contributed. See the original author and article here.

Azure App Services currently provides two workflows for scaling: scale up and scale out.



  • Scale up: Get more CPU, memory, disk space, and extra features. You scale up by changing the pricing tier of the App Service plan that your app belongs to.

  • Scale out: Increase the number of VM instances that run your app. You can scale out to as many as 30 instances, depending on your pricing tier. App Service Environments in Isolated tier further increases your scale-out count to 100 instances. You can scale manually or automatically based on predefined rules and schedules.


 


These existing scaling workflows work well, but you may want to instead have the App Service platform automatically scale your web app without the hassle of defining auto-scaling rules & schedules.


 


We are introducing a new platform-managed automatic scaling feature in Azure App Services. Below is a list of key features provided by App Service’s built-in automatic scaling feature:


 



  • The App Service platform will automatically scale out the number of running instances of your application to keep up with the flow of incoming HTTP requests, and automatically scale in your application by reducing the number of running instances when incoming request traffic slows down.

  • Developers can define per web app scaling and control the minimum number of running instances per web app.

  • Developers can control the maximum number of instances that an underlying app service plan can scale out to. This ensures that connected resources like databases do not become a bottleneck once automatic scaling is triggered.

  • Enable or disable automatic scaling for existing app service plans, as well as apps within these plans.

  • Address cold start issues for your web apps with pre-warmed instances. These instances act as a buffer when scaling out your web apps.

  • Automatic scaling works with the existing Premium Pv2 and Pv3 SKUs.

  • Automatic scaling is billed on a per-second basis and uses the existing Pv2 and Pv3 billing meters.


  • Pre-warmed instances are also charged on a per-second basis, using the existing Pv2 and Pv3 billing meters, once they are allocated for use by your web app (for additional details about pre-warmed instances, refer to the Azure CLI section below).




  • Use Azure CLI or ARM templates to enable automatic scaling.

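Automatic scaling can also be enabled through ARM templates. As an illustration, a minimal, hypothetical template fragment for an existing Premium plan could look as follows; the plan name, API version and SKU are placeholders, and the two properties mirror the CLI steps below:

{
    "type": "Microsoft.Web/serverfarms",
    "apiVersion": "2021-02-01",
    "name": "sampleAppServicePlan",
    "location": "[resourceGroup().location]",
    "sku": {
        "name": "P1v3",
        "tier": "PremiumV3"
    },
    "properties": {
        "elasticScaleEnabled": true,
        "maximumElasticWorkerCount": 10
    }
}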

Suggested scenarios for automatic scaling:



  • You want your web app to scale automatically without setting up an auto-scale schedule or set of auto-scale rules based on various resource metrics.

  • You want your web apps within the same app service plan to scale differently and independently of each other.

  • A web app is connected to backend data sources like databases or legacy systems which may not be able to scale as fast as the web app. Automatic scaling allows you to set the maximum number of instances your app service plan can scale to. This helps avoid scenarios where a backend is a bottleneck to scaling and is overwhelmed by the web app.


Enable Automatic scaling using Azure CLI:


 


Step 1:


 


This step enables automatic scaling for your existing app service plan and web apps within this plan


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.ElasticScaleEnabled=1 --resource-type Microsoft.Web/serverfarms


 


az resource update -g sampleResourceGroup -n sampleAppServicePlan --set properties.ElasticScaleEnabled=1 --resource-type Microsoft.Web/serverfarms [This enables automatic scaling for the app service plan named “sampleAppServicePlan”]


 


*** In some scenarios, while setting ElasticScaleEnabled=1 for an existing App Service plan on Linux, you may receive an error message (“Operation returned an invalid status ‘Bad Request’”). In such scenarios, follow the steps below:


 



  • Execute the above step using the --debug flag to return details about the error


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.ElasticScaleEnabled=1 --resource-type Microsoft.Web/serverfarms --debug (You can now view a detailed error message, which should be similar to “Message”:”Requested feature is not available in resource group <<Your Resource Group Name>>. Please try using a different resource group or create a new one.”)


 



  • You should now create a new resource group and an app service plan (it is recommended to use the Pv3 SKU for the new app service plan) and then set ElasticScaleEnabled=1


Step 2:


 


This step defines the maximum number of instances that your app service plan can scale out to


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.maximumElasticWorkerCount=** --resource-type Microsoft.Web/serverfarms


 


az resource update -g sampleResourceGroup -n sampleAppServicePlan --set properties.maximumElasticWorkerCount=10 --resource-type Microsoft.Web/serverfarms [This sets the max scale-out limit of the app service plan named “sampleAppServicePlan” to 10 instances]


 


*** The value of maximumElasticWorkerCount should be less than or equal to 30 (the maximum number of instances that a Premium SKU app service plan can scale out to)


*** The value of maximumElasticWorkerCount should be greater than or equal to the current instance count (NumberOfWorkers) of your app service plan


 


Step 3:


 


This step sets the minimum number of instances that your web app will always have available (per-app scaling)


 


az resource update -g <<resource group name>> -n <<web app name>>/config/web --set properties.minimumElasticInstanceCount=** --resource-type Microsoft.Web/sites


 


az resource update -g sampleResourceGroup -n sampleWebApp/config/web --set properties.minimumElasticInstanceCount=5 --resource-type Microsoft.Web/sites [This sets the minimum number of instances for the web app named “sampleWebApp” to 5. In this example, the web app named “sampleWebApp” is deployed to the app service plan named “sampleAppServicePlan”]


 


Step 4:


 


This step sets the number of pre-warmed instances readily available for your web app to scale into (buffer instances).


 


*** The default value of “preWarmedInstanceCount” is 1, and for most scenarios it should remain 1


 


az resource update -g <<resource group name>> -n <<web app name>>/config/web --set properties.preWarmedInstanceCount=** --resource-type Microsoft.Web/sites


 


az resource update -g sampleResourceGroup -n sampleWebApp/config/web --set properties.preWarmedInstanceCount=2 --resource-type Microsoft.Web/sites [This sets the number of buffer instances available for automatic scaling for the web app named “sampleWebApp” to 2]


 


*** Assume that your web app has five always-ready instances (minimumElasticInstanceCount=5) and the default of one pre-warmed instance. When your web app is idle and no HTTP requests are received, the app is provisioned and running with five instances. At this time, you aren’t billed for a pre-warmed instance, as the always-ready instances aren’t used and no pre-warmed instance is allocated. Once your web app starts receiving HTTP requests and the five always-ready instances become active, a pre-warmed instance is allocated and billing for it starts. If the rate of HTTP requests continues to increase, the five active instances are eventually used up, and when App Service decides to scale beyond five instances, it scales into the pre-warmed instance. When that happens, there are six active instances, and a seventh instance is instantly provisioned to fill the pre-warmed buffer. This sequence of scaling and pre-warming continues until the maximum instance count for the app is reached. No instances are pre-warmed or activated beyond the maximum.


 


Step 5:


 


This step disables automatic scaling for your existing app service plan and web apps within this plan


 


az resource update -g <<resource group name>> -n <<app service plan name>> --set properties.ElasticScaleEnabled=0 --resource-type Microsoft.Web/serverfarms


 


az resource update -g sampleResourceGroup -n sampleAppServicePlan --set properties.ElasticScaleEnabled=0 --resource-type Microsoft.Web/serverfarms [This disables automatic scaling for the app service plan named “sampleAppServicePlan”]


 


FAQ:



  • The App Service automatic scaling feature is currently in early preview.

  • Automatic scaling is currently supported for Azure App Service for Windows and Linux. (App Service for Windows containers and App Service Environments do not support automatic scaling.)

  • Automatic scaling can be configured via Azure CLI and ARM templates only. Azure Portal (UX) support for this feature will be enabled in a future release.

  • Automatic scaling is available only for Azure App Services Premium Pv2 and Pv3 SKUs



  • App Service’s automatic scaling feature is different from Azure Autoscale. Automatic scaling is a new built-in feature of the App Service platform that automatically handles web app scaling decisions for you. Azure Autoscale is a pre-existing Azure feature for defining schedule-based and resource-based scaling rules for your app service plans.

  • Once automatic scaling is configured, existing Azure Autoscale rules and schedules (if any) will not be honored. Applications can use either automatic scaling or Azure Autoscale, but not both. If you disable automatic scaling for your app service plan by setting ElasticScaleEnabled=0, existing Autoscale rules, if any, will apply once again.

  • Health check should not be enabled on web apps with this automatic scaling feature turned on. Due to the rapid scaling provided by this feature, the health check requests can cause unnecessary fluctuations in HTTP traffic. Automatic scaling has its own internal health probes that are used to make informed scaling decisions.

  • You can only have Azure App Service web apps in the app service plan where you wish to enable automatic scaling. If you have existing Azure Functions apps in the same app service plan, or if you create new Azure Functions apps, then automatic scaling will be disabled. For Functions it is advised to use the Azure Functions Premium plan instead.

Unsubstantiated COVID-19 treatment claims appear on social media platforms


This article was originally posted by the FTC. See the original article here.

Since the pandemic began, the Federal Trade Commission has sent hundreds of cease and desist letters to companies that claimed their products and therapies can prevent, treat, or cure COVID-19. The sellers promoted their products and services through a variety of outlets, including social media.

Social media platforms have played a major role in conveying information about how to help stop the spread of COVID-19. But just because the information is running on a platform you use doesn’t mean it’s accurate or truthful. Right now, no one can afford to take information at face value. Before you act on a message you’ve seen or before you share it, ask — and answer — these critical questions:

  • Who is the message from? Do I know them? Do I trust them? Am I positive they are who they say they are?
  • What do they want me to do? Just know something — or are they trying to get me to act in some way? Do they want me to buy something, download something, or give up personal info?
  • What evidence supports the message? Use some independent sources to fact-check it — or debunk it. Maybe talk to someone you trust. But always verify, using a few additional sources. Once you’ve done that, does the message still seem accurate?

Approaching information by asking and answering these questions can help you sort out what’s helpful…and what’s a scam. So, for example, if the message is about a treatment or cure, you know where to go: Coronavirus.gov.

Bottom line: when you come across information, stop. Talk to someone else. Focus on whether the facts back up the information you’re hearing. Good, solid evidence will point you in the right direction. Then decide what you think and what you want to do with the message – pass it on, act on it, ignore it, or roll your eyes at it. And if you suspect a scam, tell the FTC at ReportFraud.ftc.gov so we can shut the scammers down.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Before you join that crowdfunding campaign, read this


This article was originally posted by the FTC. See the original article here.

Thinking of joining a crowdfunding campaign? Spot the scam: ftc.gov/crowdfunding.

If it takes a village to raise a child, crowdfunding may be what it takes to make that invention a reality. But scammers could be behind those crowdfunding efforts and take your money without delivering what they promise.

Crowdfunding can help raise money to develop a new product or invention. To get investors, the organizer may promise something in exchange for contributions. Investors might get a payout once the invention is profitable, be the first to get the new product, or get the new product at a discount later on.

When you give money to a crowdfunding campaign, it goes directly to the campaign organizer. But a dishonest businessperson might lie about the project, product, and timeline. And they might lie about the rewards you’ll get once the product is finished.

So before you pledge funds to any crowdfunding campaign, check on a few things first:

  • Who created the campaign? Find the name of the organizer on the crowdfunding page and do your own vetting. If you can’t find anything about that person, or the details don’t match what they’re telling you, that’s a sign of a scam. Search for the name of the organizer and project with the words “complaint,” “review,” or “scam” to see if anyone has already had a negative experience.
  • What’s the purpose of the campaign? Be clear what the funds are for and what you should expect from your contribution. Not all campaigns promise you’ll get anything in return.
  • What happens if the project doesn’t get off the ground? There’s no guarantee that the project will be successful and completed. Find out what happens to your money if the project doesn’t get going. Can you expect a refund? How will you get it?

If you come across a crowdfunding scam, report it to ReportFraud.ftc.gov, your state Attorney General, and the crowdfunding platform.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Increase app availability with auto-scaling | Azure Virtual Machine Scale Sets


This article is contributed. See the original author and article here.

Azure Virtual Machine Scale Sets lets you create and manage a group of virtual machines to run your app or workload and provides sophisticated load-balancing, management, and automation. This is a critical service for creating and dynamically managing thousands of VMs in your environment. If you are new to the service, this show will get you up to speed; if you haven’t looked at VM Scale Sets in a while, we’ll show you how the service has significantly evolved to help you efficiently architect your apps for centralized configuration, high availability, auto-scaling and performance, cost optimization, security, and more.



 

 


 

 



QUICK LINKS:


00:32 — What is a virtual machine scale set?


00:47 — Centralized configuration options


02:30 — How do scale sets increase availability?


03:54 — How does autoscaling work?


04:58 — Keeping costs down with VM scale sets


05:47 — Building security into your scale set configurations


06:28 — Where you can learn more about VM scale sets


 


Link References:


To learn more, check out https://aka.ms/VMSSOverview


Watch our episode about Azure Spot VMs at https://aka.ms/EssentialsSpotVMs


 


Unfamiliar with Microsoft Mechanics?


We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


Keep getting this insider knowledge, join us on social:






Video Transcript:


-Welcome to Azure Essentials. I’m Matt McSpirit, and in the next few minutes, I’ll give you an overview of Azure virtual machine scale sets, a critical service for creating and dynamically managing thousands of VMs in your environment. Now if you are new to the service this will get you up to speed, or if you haven’t looked at VM scale sets in a while we’ll show you how it has significantly evolved to help you efficiently architect your apps for centralized configuration, high availability, auto-scaling and performance, cost optimization, security, and more.


 


-So, let’s start by addressing what is a Virtual Machine Scale Set in Azure? Well as the name implies, this Azure service lets you create and manage a group of virtual machines to run your app or workload and provides sophisticated load-balancing, management, and automation. VM Scale Sets lays the foundation for centralized and consistent configuration of VMs in your environment. One of the primary functions is to specify a VM template with the characteristics that you need for your apps and workloads to run reliably. This includes: the VM image, with support for Windows and Linux platform images as well as your own custom images, the VM size, your networking parameters, the number of VM instances in the group, and with virtual machine extensions you can also add post-deployment configuration like monitoring, anti-malware and automation.


 


-As you set them up, there are two management modes to deploy your scale sets: Uniform Orchestration, which is optimized for large stateless workloads where your VM instances are identical. Or the newer Flexible orchestration mode, which adds more options: from running workloads with different VM types; or changing your VM sizes without redeploying your scale set; to architecting your scale sets for high availability. And the good news is, they are all easy to set up. You can define your Virtual Machine Scale Set in the Azure Portal as you just saw or with an Azure Resource Manager Template. Of course, if you prefer you can use scripting tools like Azure CLI, PowerShell, and even infrastructure as code tools like Terraform.


 


-Once set up, any new VM added to the scale set will inherit the configurations that you have defined. And it’s easy to make changes across your scale set. For example, with image-based upgrades, when a new version of a custom or marketplace image is made available, Virtual Machine Scale Sets will detect that and start upgrading the VM instances in batches, and you can use protection policies to exclude VMs that you don’t want to upgrade. Or another example of what you can do is to upgrade your existing VMs in one-go to take advantage of the latest and greatest VMs in Azure.


 


-That said, beyond consistent configurations, scale sets are used to distribute your business-critical application across multiple instances to provide high availability. And this is achieved in a number of ways. For example, you can automatically distribute up to 1,000 VM instances between availability zones in minutes. This gives you utmost availability, up to 99.99%, and helps you to mitigate any possible datacenter wide issues. Availability zones are offered in most geographies and represent physically separate locations in an Azure region composed of one or more datacenters with independent power, cooling, and networking. VMs can be automatically spread across fault domains in a region, or you can specify a fault domain as part of your VM deployment, which makes it easier to replace VMs. Now this is especially relevant for open-source databases like Cassandra or other quorum-based applications.


 


-Of course, you also have the option to replicate your VM instances to another Azure region for failover compute. And for storage redundancy, you can also back up data disks using Azure Backup. Beyond hardware failure resilience measures, to get ahead of issues before they impact your operations, you can install the application health extension on each VM instance, so that your app or workload can report application-specific health metrics to Azure. And once you enable automatic instance repair, Azure will automatically remove and replace instances in an unhealthy state, to maintain high availability.


 


-As you architect for availability with Azure VM Scale Sets you can of course also scale your applications on demand while increasing performance. Scale sets integrate with Azure load balancer for basic layer-four traffic distribution and Azure Application Gateway for more advanced layer-seven traffic distribution. This helps you to easily spread your incoming network traffic across the VMs in your scale sets. Which in turn helps you build scalable solutions while maintaining high levels of performance.


 


-You can also configure your VM scale set to auto-scale. For example, if you’re running an e-commerce site you may need to scale your front end in response to some event, like a holiday sales spike. Azure will automatically add and subtract VM instances in response to demand so that there is no decline in your app or workload experience. Under scaling, you can use metric-based auto-scaling rules and define thresholds that trigger an increase in VM instances to scale out. And likewise, you can set similar thresholds for when to scale in, taking into account a specified cool down period which allows for a buffer of time before the scale in action is triggered.


 


-And of course, you can manually scale out and in as you need to. The ability to dynamically scale your VM pool also brings numerous efficiencies as you run your workloads on Azure, because instead of pre-provisioning VMs you’re only paying for the compute resources your application needs. And for even more savings, for your interruptible workloads, you also have the flexibility of using Azure Spot VMs that take advantage of spare compute capacity in Azure as and when it’s available.


 


– You can also mix and match Azure Spot VMs with regular on-demand VMs. And if you’re worried about Spot VM evictions, the try to restore feature in Azure Virtual Machine Scale Sets, will automatically try to restore an evicted Spot VM and maintain the target VM instance count in your scale set. In fact, we covered Spot VMs as part of your cost optimization strategy, in our last Essentials overview which you can watch at aka.ms/EssentialsSpotVMs.


 


-Next, Virtual Machine Scale Sets help you improve the security posture of your applications by keeping them up-to-date. Upgrades can be performed automatically, in random order, manually, or using rolling upgrades in defined batches. In addition to image upgrades, you can also do automatic VM guest patching for critical and security updates, and this helps to ease management by safely and automatically patching virtual machines to maintain security compliance. Patch orchestration is managed by Azure and updates are rolled out sequentially across VMs in the scale set to avoid application downtime. You can also force updates on-demand. And with Automatic Extension Upgrades, critical updates are applied as they become available from publishers.


 


-So that was a quick overview of Azure Virtual Machine Scale Sets and how they can help you to create and deploy thousands of VMs in minutes. The metrics and template-based approach helps you to consistently architect your apps and workloads for auto-scaling, availability, and performance, giving you the control that you need. This lets you focus on your app instead of the complexities of managing your infrastructure. And to learn more visit aka.ms/VMSSOverview and keep watching Microsoft Mechanics for more in the series, bye for now!




Drupal Releases Security Updates

This article is contributed. See the original author and article here.

Drupal has released security updates to address vulnerabilities that could affect versions 8.9, 9.1, and 9.2. An attacker could exploit these vulnerabilities to take control of an affected system.

CISA encourages users and administrators to review Drupal Security Advisory SA-CORE-2021-011 and apply the necessary updates.

Exploring the Intel manufacturing environment through mixed reality


This article is contributed. See the original author and article here.

Today’s organizations have seen tremendous value in using mixed reality, as it rapidly changes how employees learn, work, and understand the world around them. With the unique value of mixed reality solutions, such as Microsoft HoloLens 2, Microsoft Dynamics 365 Guides, and Microsoft Dynamics 365 Remote Assist, organizations can drive workforce transformation with on-the-job guidance, hands-on training, and collaboration that is seamless, intuitive, and embedded into everyday workflows.

Man taking an interactive training in an office room using Microsoft HoloLens 2 and Guides.

Intel technicians using HoloLens 2, Dynamics 365 Guides, and Remote Assist to resolve complex issues

Today, we’ll look at how Intel manufacturing facilities are using mixed reality solutions such as HoloLens 2, Dynamics 365 Guides, and Dynamics 365 Remote Assist globally. In some of the world’s most advanced manufacturing facilities, technicians are responsible for building, maintaining, and troubleshooting some of the most complex manufacturing products made by humans. Working at some of the smallest known geometries, every piece of maintenance must be performed precisely by continuously improving processes to ensure the production of smarter, faster, and more energy-efficient computer chips. With six wafer fabrication sites and four assembly test manufacturing locations worldwide, Intel must maintain a global, virtual network.

In Intel’s Israel manufacturing facility, HoloLens 2 and Dynamics 365 Guides have become integral to its manufacturing processes, playing a key role in the following scenarios:

  • Maintenance and repair tasks: Intel employees “learn by doing” with step-by-step instructions for conducting inspections and audits, deploying new equipment, fixing machine breaks, addressing issues faster, and increasing efficiency. Additionally, Dynamics 365 Guides allows Intel to proactively manage their assets to avoid costly downtime due to unpredicted failure. This includes conducting preventative maintenance, defining new intelligent workflows, and thoroughly completing maintenance tasks using checklists in Dynamics 365 Guides.
  • Troubleshooting: Dynamics 365 Guides brings critical information into view to help Intel technicians troubleshoot, audit, or support difficult and delicate procedures, improving first-time fix rate for urgent repairs with guidance.
  • Remote communication: Dynamics 365 Remote Assist seamlessly connects Intel experts and technicians through the calling feature to collaborate and solve problems without disrupting the flow of work. Dynamics 365 Remote Assist has also helped maintain the new normal of everyday routine: with advanced collaboration features, Intel has made it easy for its expert engineers to work from home and perform remote inspections that share video, screenshots, and annotations across devices. By avoiding unnecessary travel, Intel has helped increase safety and wellbeing during COVID-19 on a global scale.

Remote assist calling and collaboration features show real-time view of inspection in work environment.

  • Preparing interactive training materials: Intel employees can train from home, at their desk, or on the shop floor. Dynamics 365 Guides enables authors to build digital, interactive trainings that can be viewed from anywhere and easily scale any updates to keep up with real-time changes. These trainings can be produced by anyone on a PC or HoloLens device with simple 2D and 3D creation in the real-world environment.
  • Facility tour: With the power of HoloLens 2, employees can provide hands-free, digital facility tours to virtually show the inner workings of Intel’s cutting-edge facilities.

We are thrilled to see what the future holds and how mixed reality will continue to innovate manufacturing processes at Intel. To learn more, watch the video below to discover how Intel Israel is using Dynamics 365 Guides, Dynamics 365 Remote Assist, and HoloLens 2 today.


Get started with Dynamics 365 Guides

The post Exploring the Intel manufacturing environment through mixed reality appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

NCSC Releases 2021 Annual Review

This article is contributed. See the original author and article here.
