AKS on Azure Stack HCI December Update

This article is contributed. See the original author and article here.

Hi All,


 


The AKS on Azure Stack HCI team has been hard at work responding to feedback from you all, and adding new features and functionality.  Today we are releasing the AKS on Azure Stack HCI December Update.


 


You can evaluate the AKS on Azure Stack HCI December Update by registering for the Public Preview here: https://aka.ms/AKS-HCI-Evaluate (If you have already downloaded AKS on Azure Stack HCI – this evaluation link has now been updated with the December Update)


 


Some of the new changes in the AKS on Azure Stack HCI December Update include:


 


Workload Cluster Management Dashboard in Windows Admin Center


With the December update, AKS on Azure Stack HCI now provides you with a dashboard where you can:



  • View any workload clusters you have deployed

  • Connect to their Arc management pages

  • Download the kubeconfig file for the cluster

  • Create new workload clusters

  • Delete existing workload clusters



We will be expanding the capabilities of this dashboard over time.


 


Naming Scheme Update for AKS on Azure Stack HCI worker nodes


As people have been integrating AKS on Azure Stack HCI into their environments, they encountered some challenges with our naming scheme for worker nodes, specifically when joining worker nodes to a domain to enable gMSA for Windows containers.  With the December update, AKS on Azure Stack HCI worker node naming is now more domain friendly.


 


Windows Server 2019 Host Support


When we launched the first public preview of AKS on Azure Stack HCI – we only supported deployment on top of new Azure Stack HCI systems.  However, some users have been asking for the ability to deploy AKS on Azure Stack HCI on Windows Server 2019.  With this release we are now adding support for running AKS on Azure Stack HCI on any Windows Server 2019 cluster that has Hyper-V enabled, with a cluster shared volume configured for storage.


 


There have been several other changes and fixes that you can read about in the December Update release notes (Release December 2020 Update · Azure/aks-hci (github.com))


 


Once you have downloaded and installed the AKS on Azure Stack HCI December Update – you can report any issues you encounter, and track future feature work on our GitHub Project at https://github.com/Azure/aks-hci


 


I look forward to hearing from you all!


 


Cheers,


Ben

Auto start/stop Flexible Server using Azure Automation Python RunBook


Flexible Server is a new deployment option for Azure Database for PostgreSQL that gives you the control you need with multiple configuration parameters for fine-grained database tuning along with a simpler developer experience to accelerate end-to-end deployment. With Flexible Server, you will also have a new way to optimize cost with stop/start capabilities. The ability to stop/start the Flexible Server when needed is ideal for development or test scenarios where it’s not necessary to run your database 24×7. When Flexible Server is stopped, you only pay for storage, and you can easily start it back up with just a click in the Azure portal.


 


Azure Automation delivers a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. It comprises process automation, configuration management, update management, shared capabilities, and heterogeneous features. Automation gives you complete control during deployment, operations, and decommissioning of workloads and resources. The Azure Automation Process Automation feature supports several types of runbooks, such as Graphical, PowerShell, and Python. Other options for automation include PowerShell runbooks, Azure Functions timer triggers, and Azure Logic Apps. Here is a guide to choosing the right integration and automation services in Azure.


 


Runbooks support storing, editing, and testing scripts directly in the portal. Python is a general-purpose, versatile, and popular programming language. In this blog, we will see how we can leverage an Azure Automation Python runbook to auto stop a Flexible Server for the weekend (stopping it on Saturday and starting it again on Monday).


 


Prerequisites


  • An Azure subscription

  • An Azure Database for PostgreSQL – Flexible Server instance that you want to auto start/stop



Steps


1. Create a new Azure Automation account with an Azure Run As account at:


https://ms.portal.azure.com/#create/Microsoft.AutomationAccount


 


NOTE: An Azure Run As Account by default has the Contributor role to your entire subscription. You can limit Run As account permissions if required. Also, all users with access to the Automation Account can also use this Azure Run As Account.


 



 

2. After you successfully create the Azure Automation account, navigate to Runbooks.


Here you can already see some sample runbooks.


 



 


3. Let’s create a new Python runbook by selecting + Create a runbook.


 


4. Provide the runbook details, and then select Create.


 



 


After the Python runbook is created successfully, an Edit screen appears.


 



 


5. Copy and paste the Python script below. Fill in appropriate values for your Flexible Server’s subscription_id, resource_group, and server_name, and then select Save.


 


 


 

import azure.mgmt.resource
import requests
import automationassets
from msrestazure.azure_cloud import AZURE_PUBLIC_CLOUD
from datetime import datetime

def get_token(runas_connection, resource_url, authority_url):
    """ Returns credentials to authenticate against Azure resoruce manager """
    from OpenSSL import crypto
    from msrestazure import azure_active_directory
    import adal

    # Get the Azure Automation RunAs service principal certificate
    cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
    pks12_cert = crypto.load_pkcs12(cert)
    pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM, pks12_cert.get_privatekey())

    # Get run as connection information for the Azure Automation service principal
    application_id = runas_connection["ApplicationId"]
    thumbprint = runas_connection["CertificateThumbprint"]
    tenant_id = runas_connection["TenantId"]

    # Authenticate with service principal certificate
    authority_full_url = (authority_url + '/' + tenant_id)
    context = adal.AuthenticationContext(authority_full_url)
    return context.acquire_token_with_client_certificate(
            resource_url,
            application_id,
            pem_pkey,
            thumbprint)['accessToken']

# Stop the server for the weekend on Saturday and start it again on Monday.
action = ''
day_of_week = datetime.today().strftime('%A')
if day_of_week == 'Saturday':
    action = 'stop'
elif day_of_week == 'Monday':
    action = 'start'

subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
server_name = '<SERVER_NAME>'

if action:
    print('Today is ' + day_of_week + '. Executing ' + action + ' on server')
    runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
    resource_url = AZURE_PUBLIC_CLOUD.endpoints.active_directory_resource_id
    authority_url = AZURE_PUBLIC_CLOUD.endpoints.active_directory
    resource_manager_url = AZURE_PUBLIC_CLOUD.endpoints.resource_manager
    auth_token = get_token(runas_connection, resource_url, authority_url)
    url = (resource_manager_url + 'subscriptions/' + subscription_id
           + '/resourceGroups/' + resource_group
           + '/providers/Microsoft.DBforPostgreSQL/flexibleServers/' + server_name
           + '/' + action + '?api-version=2020-02-14-preview')
    response = requests.post(url, json={}, headers={'Authorization': 'Bearer ' + auth_token})
    print(response.json())
else:
    print('Today is ' + day_of_week + '. No action taken')
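
The long inline URL in the script above is easy to mistype. As a small optional refactoring sketch (the helper name below is ours, not part of the original script), the ARM request URL can be built and validated in one place, using the same endpoint and API version:

```python
def flexible_server_action_url(subscription_id, resource_group, server_name, action,
                               base='https://management.azure.com'):
    """Build the ARM URL for a Flexible Server start/stop call.

    Uses the same management endpoint and preview API version as the runbook.
    """
    if action not in ('start', 'stop'):
        raise ValueError('action must be "start" or "stop"')
    return ('%s/subscriptions/%s/resourceGroups/%s'
            '/providers/Microsoft.DBforPostgreSQL/flexibleServers/%s/%s'
            '?api-version=2020-02-14-preview'
            % (base, subscription_id, resource_group, server_name, action))
```

The runbook’s requests.post call would then take flexible_server_action_url(subscription_id, resource_group, server_name, action) instead of the inline concatenation.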

 


 


 


After you save, you can test the Python script using the Test pane. When the script works as expected, select Publish.


 


Next, we need to schedule this runbook to run every day using Schedules.


 


6. On the runbook Overview blade, select Link to schedule.


 



 


7. Select Link a schedule to your runbook.


 



 


8. Select Create a new schedule.


 



 


9. Create a schedule to run every day at 12:00 AM.


 


 

 


10. Select Create, and then verify that the schedule was created successfully and that its Status is “On”.


 



 


After following these steps, Azure Automation will run the Python runbook every day at 12:00 AM. The Python script will stop the Flexible Server on Saturdays and start it again on Mondays. This is all based on the UTC time zone, but you can easily modify it to fit the time zone of your choice. You can also use the holidays Python package to auto stop/start the Flexible Server during holidays.
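
Here is a minimal sketch of such a modification, assuming a Python 3 runbook. The fixed UTC-8 offset and the holiday dates below are placeholders; the holidays package can generate country-specific dates once it is imported into your Automation account.

```python
from datetime import datetime, timedelta, timezone, date

# Placeholder: a fixed UTC-8 offset. Swap in your own offset, or a proper
# time zone via pytz/zoneinfo if available in your Automation account.
LOCAL_TZ = timezone(timedelta(hours=-8))

# Placeholder holiday list; the 'holidays' PyPI package can generate these.
HOLIDAYS = {date(2020, 12, 25), date(2021, 1, 1)}

def pick_action(now_utc):
    """Return 'stop', 'start', or '' for a timezone-aware UTC datetime.

    Stops for the weekend on Saturday and for any listed holiday;
    starts again on Monday.
    """
    local = now_utc.astimezone(LOCAL_TZ)
    day = local.strftime('%A')
    if day == 'Saturday' or local.date() in HOLIDAYS:
        return 'stop'
    if day == 'Monday':
        return 'start'
    return ''
```

In the runbook, action = pick_action(datetime.now(timezone.utc)) would replace the datetime.today() day-of-week check.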


 


If you want to dive deeper, the new Flexible Server documentation is a great place to find out more. You can also visit our website to learn more about our Azure Database for PostgreSQL managed service. We’re always eager to hear your feedback, so please reach out via email using the Ask Azure DB for PostgreSQL alias.


 

Fileless Attack Detection for Linux is now Generally Available


This blog post was co-authored by:


Aditya Joshi, Senior Software Engineer, Microsoft Defender for Endpoint


Tino Morenz, Senior Software Engineer, Enterprise Data Protection


 


The Azure Defender team is excited to share that the Fileless Attack Detection for Linux Preview, which we announced earlier this year, is now generally available for all Azure VMs and non-Azure machines enrolled in Azure Defender.


 


Fileless Attack Detection for Linux periodically scans your machine and extracts insights directly from the memory of processes.  Automated memory forensic techniques identify fileless attack toolkits, techniques, and behaviors.  This detection capability identifies attacker payloads that persist within the memory of compromised processes and perform malicious activities.


 


See below for an example fileless attack from our preview program, a description of detection capabilities, and an overview of the onboarding process.


 


Real-world attack pattern from our preview program


In our continuous monitoring of fileless attacks, we often encounter malware components exhibiting in-memory ELF and shellcode payloads that are in the initial stages of being weaponized by attackers.


 


In this example, a customer’s VM is infected with malware that is attempting to blend in as standard system security components.



  • The first component of the malware is the binary /usr/bin/.securetty/.esd-644/auditd, running from the user’s bin location under hidden folders. On disk, the file has been packed with UPX and contains no section headers.

  • The malware filename is auditd, which is also the name of the userspace component of the Linux Auditing System. In addition, the command line for the malware is “/usr/sbin/abrtd”. This path is associated with the Automatic Bug Reporting Tool, a daemon that watches for application crashes.

  • Accompanying the masquerading auditd is another payload impersonating anacron, a system utility used to execute commands periodically.

  • The second payload runs with the command line “/usr/sbin/anacron -s” and under the file name devkit-power-daemon to impersonate the DeviceKit-power daemon. The malware also maintains a persistent outgoing TCP connection to port 53, which is typically associated with DNS queries.


 


Detecting the attack



  • Fileless Attack Detection begins by identifying dynamically allocated code segments that are not backed by the filesystem. In this case, this scan identifies a 32-bit ELF in an anonymous executable region of memory.

  • Next, our detector scans these segments for specific behaviors and indicators. Packed malware, such as in this case, obfuscates its contents on disk but often exhibits malicious indicators in memory.

  • The in-memory ELF analysis identifies numerous syscalls to perform system operations for process control, dynamic memory allocation, signal handling and changing thread context. Some of the syscalls identified include clone, epoll_create, getpid, gettid, kill, mmap, munmap, rt_sigaction, rt_sigprocmask, set_thread_area, sigaltstack, and tgkill.
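
To illustrate just the first step above (this is a simplified sketch of ours, not Azure Defender’s actual implementation), executable mappings with no backing file can be spotted by parsing /proc/<pid>/maps on Linux:

```python
def anonymous_exec_regions(maps_text):
    """Return (address, perms, path) tuples for executable mappings that
    are not backed by a file on disk.

    A real detector would also need to whitelist legitimate special
    regions such as [vdso] and [vsyscall], and JIT-compiled code.
    """
    regions = []
    for line in maps_text.splitlines():
        # /proc/<pid>/maps fields: address perms offset dev inode [path]
        fields = line.split(None, 5)
        if len(fields) < 2:
            continue
        addr, perms = fields[0], fields[1]
        path = fields[5].strip() if len(fields) > 5 else ''
        if 'x' in perms and not path.startswith('/'):
            regions.append((addr, perms, path))
    return regions

# Typical use against a live process:
#   with open('/proc/%d/maps' % pid) as f:
#       suspicious = anonymous_exec_regions(f.read())
```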


 


Fileless attack detection capabilities


Fileless Attack Detection for Linux scans the memory of all processes for shellcode, malicious injected ELF executables, and well-known toolkits, such as crypto mining software.


 


Here is an example alert:


 

[Screenshot: Fileless Attack Detection alert summary]


The alerts contain information to assist with triaging and correlation activities, which include process metadata:


 

[Screenshot: alert process metadata]


We plan to add and refine alert capabilities over time. Additional alert types will be documented here.


 


Process memory scanning is non-invasive and does not affect the other processes on the system. Most scans run in less than five seconds. The privacy of your data is protected throughout this procedure as all memory analysis is performed on the host itself. Scan results contain only security-relevant metadata and details of suspicious payloads.


 


Onboarding details


This capability is automatically deployed to your Linux machines as an extension to the Log Analytics Agent for Linux, which is also known as the OMS Agent. This agent supports the Linux OS distributions described in this document. Azure VMs and non-Azure machines must be enrolled in Azure Defender to benefit from this detection capability.


 


To learn more about Azure Defender, visit the Azure Defender Page.

IT on the Front Line of the Battle Against AIDS


On World AIDS Day, Elizabeth Glaser Pediatric AIDS Foundation (EGPAF) Informatics Officer Ts’epo Ntelane shares how data tools are helping to end AIDS in Lesotho.


 



by Ts’epo Ntelane


mHealth and Informatics Officer


Elizabeth Glaser Pediatric AIDS Foundation


 


Growing up in Lesotho means having an early awareness of HIV. Our small, proudly independent nation was hit hard by the HIV pandemic. During the early 2000s, HIV was like a raging fire burning across our plains and mountains. It touched virtually every family with the grief of losing loved ones.


 


Over the past decade, the pandemic has begun to come under control, and yet Lesotho still has the second highest HIV prevalence rate in the world. Twenty-five percent of the people in Lesotho are living with HIV—but the emphasis now is on living. People are living with HIV, rather than dying.


 


An HIV support group meets in rural Lesotho. Photo by Eric Bond/EGPAF


December 1 is World AIDS Day.


 


Today, I take pride in being a member of the coalition that has brought Lesotho out of the dark times. When people think about combatting infectious diseases like HIV, they often picture the health workers at clinics and hospitals testing and treating patients. They don’t think about people like me, sitting in an office crunching numbers and developing charts. But a big part of our success here in Lesotho has happened because our work is driven by accurate data.


 


Not long ago, we relied on paper records collected at health centers throughout Lesotho, which created delays in data analysis and reporting. Now that we have implemented a client-tracking application, we are able to use Microsoft Power BI to really understand what is happening on the ground—with specificity and nuance.


 


Although Lesotho is a relatively small country, there are often great distances between villages and cities. Photo by Eric Bond/EGPAF


As a senior mHealth and informatics officer at the Elizabeth Glaser Pediatric AIDS Foundation (EGPAF), I help aggregate data from the health centers around Lesotho and build them into easy-to-understand reports and dashboards. It makes us nimble and laser-focused on reaching the individuals who need our services the most.


 


Accurate, up-to-date information helps us quickly identify risks and opportunities so that we can save lives. For example, let’s say that we have 100 people at a health center who are on HIV treatment, but we can see that only 30 of them have achieved viral suppression (meaning that HIV is no longer a threat to their health). The data in our Power BI reports and dashboards can help isolate the issue and craft an intervention to bridge the gap.



It is essential that a person living with HIV take daily medication. Through our tracking system, we are able to identify clients who have missed their appointments or are failing to pick up their medication, and we quickly follow up to get them back on treatment.


 


The supply chain presents another potential gap in achieving our goal to eliminate AIDS. Through our data management in Power BI, we can know exactly how many patients are served by any health center and exactly how many supplies are at that location for testing and treating HIV. We also see local trends in the transmission rates. This helps us anticipate needs and adjust if we discern any slowdowns in the supply chain so that we always have supplies on hand to test for HIV and an adequate stock of lifesaving medicine.


 


Dedicated health workers administer HIV services and collect data to help us steer our programs. Photo by Eric Bond/EGPAF


One unique initiative we implemented in Lesotho was to prominently display a Power BI dashboard on large screens in our reception area, boardroom, and other key offices. Now, every person in our office is aware of our most important programmatic numbers.


 


At EGPAF, I’ve been able to grow professionally in my use of Power BI. In 2019, we received training from Patrick LeBlanc of Microsoft. I also participate in EGPAF’s internal Power BI Learning Group, a community of practice of Power BI developers, in which we share resources and help each other with challenges. In July, I presented to the group about our data validation work and shared details of how we are connecting to one of our in-country data systems through Power BI.


 


The Lesotho Informatics team relaxes during a break from Power BI training.


In addition to Power BI, we use the integrated tools of M365, particularly SharePoint, to manage and share data and knowledge with our EGPAF colleagues throughout Africa, the United States, and Switzerland.


 


I can say that I am in love with data and with the power that it brings to decision-making and to our individual beneficiaries. Lesotho has gone from being ravaged by AIDS to reaching or surpassing global targets established by UNAIDS and the World Health Organization.



 


As a Mosotho, I am excited and really proud to be making an impact in the battle against this monster. My pride comes from feeling that every time I get into the office, I am a soldier on the front line of the battle against AIDS.


 

Introducing the Admin Simplified View


While organizations of all sizes use Microsoft 365, very small businesses (VSBs)—those with 10 or fewer people—have very different needs from medium and enterprise customers. When it comes to using cloud services, VSBs tend to focus on the productivity essentials like using Office to share information, using calendars and email, and using Teams to communicate with their customers.


 


To help VSBs also focus on the administrative essentials, we are introducing a new experience in the Microsoft 365 admin center called the admin simplified view. This experience can be used by Global admins in any organization that has Microsoft 365, but it is designed primarily for VSBs.


 



The experience provides you with access to users, teams, and billing information in one view, and it guides you through simplified flows of top actions to take, such as adding users, installing Office, and enabling Teams, email, and calendaring. To revert to the complete admin center experience, you can click Dashboard view.



We are testing this simplified experience over the next few months. An early version is shipping first to targeted release customers starting early next month. We plan on adding more scenarios around onboarding and ongoing subscription management. As always, we are listening to your feedback, so please feel free to reach out using the feedback option in the Microsoft 365 admin center at any time!