Extending Next.js support in Azure Static Web Apps

This article is contributed. See the original author and article here.

Next.js is one of the most popular JavaScript frameworks for building complex, server-driven React applications, combining the features that make React a useful UI library with server-side rendering, built-in API support and SEO optimizations.


With today’s preview release, we’re improving support for Next.js on Azure Static Web Apps.


What’s new


In this preview, we're focusing on making zero-config Next.js deployments even easier than before by adding support for Server-Side Rendering and Incremental Static Regeneration (SSR and ISR respectively), API routes, advanced image compression, and Next.js Auth. In this post, we want to highlight three features that make building Next.js apps on Azure more powerful.


Server-Side Rendering


When we first launched Static Web Apps, we made sure it supported Next.js, but our focus was on Static Site Generation, or SSG. SSG compiles the application to static HTML that is then served as-is; while this is useful in many scenarios, it doesn't support dynamic updates to the content of the page per request.


This is where Server-Side Rendering, or SSR, comes in. With SSR you can inject data from a backend data source before the HTML is sent to the client, that is, during the pre-rendering phase, allowing for more contextual, real-time updates to the data. Check out Next.js's docs for more on SSR.


For this demo, we'll add a getServerSideProps function to our index.js file that returns the current timestamp:


 

export async function getServerSideProps() {
  // Runs on every request: serialize the current server time as a prop
  const data = JSON.stringify({ time: new Date() });
  return { props: { data } };
}

 


We can then consume this in the component:


 

export default function Home({ data }) {
  // Parse the timestamp passed in from getServerSideProps
  const serverData = JSON.parse(data);
  // snip: render serverData.time in the returned JSX
}

 


We can then render the timestamp in the page output.


[Animation: ssr.gif, showing the server-rendered timestamp updating on each request]


API routes


API routes allow us to build a backend API for the client-side components of our Next.js app to communicate with and get data from other systems. They are added to a project by creating an api folder (pages/api) within our Next.js app and defining JavaScript (or TypeScript) files with exported functions that Next.js turns into APIs returning JSON to the client.
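As a minimal illustrative sketch (a hypothetical pages/api/hello.js, not taken from the sample below), an API route is just an exported handler function:

// pages/api/hello.js: a hypothetical, minimal API route.
export default function handler(req, res) {
  // Next.js hands us Node-style request/response objects with JSON helpers.
  res.status(200).json({ message: "Hello from an API route" });
}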


API routes can be as simple as masking an external service, or as complex as hosting a GraphQL server, which we’re doing in the example below.


[Animation: api-routes.gif, showing a call to the /graphql API route returning a GraphQL response]


Here you’ll see that we called the API route, /graphql, and got back a GraphQL payload response. You’ll find this sample on GitHub.


Image optimisation


When it comes to ensuring your website is optimized for all web clients, Web Vitals is a valuable measure. To help with this, Next.js provides an Image component and built-in image optimisation. This feature also makes it easier to create responsive images on your website and optimises the image sent to the browser based on the dimensions of the viewport and whether the image is currently visible or not.
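As a brief sketch of what this looks like in code (the file path and dimensions are placeholders), the component is used in place of a plain img tag:

import Image from "next/image";

// Hypothetical usage: next/image serves an appropriately sized variant
// for the viewport and lazy-loads the image by default.
export default function Hero() {
  return <Image src="/hero.png" alt="Hero banner" width={1200} height={600} />;
}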


You can see this in action where we have deployed the Next.js image example.


Deploying with the new Next.js support


When deploying an application that uses these Next.js features, select Next.js from the Build Presets and leave the rest of the options at their defaults, as SSR Next.js applications are now the default for Static Web Apps. If you instead wish to use Next.js as a static site generator, add the environment variable is_static_export to your deployment pipeline, set it to true, and set the output location to out.
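For illustration, here is a minimal sketch of the relevant part of a GitHub Actions workflow; it assumes the standard Azure/static-web-apps-deploy action, and the token, locations, and values are placeholders for your own pipeline:

# Hypothetical excerpt from a Static Web Apps deployment workflow.
- uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: "upload"
    app_location: "/"
    output_location: "out"     # only needed for static export
  env:
    is_static_export: true     # omit this to keep the default SSR deployment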


Common Questions


Can I still use SSG?


Yes! Static rendered Next.js applications are still supported on Static Web Apps, and we encourage you to keep using them if they are the right model for your applications.


Existing Next.js SSG sites should be unaffected by today's launch, although we encourage you to add the is_static_export environment flag to your deployment pipeline to ensure that Static Web Apps correctly identifies the site as SSG.


Should I use SSR over SSG?


This is very much an "it depends" answer. The SSR support announced today is in preview, meaning it is not recommended for production workloads; for production, we still encourage using SSG as the preferred model when working with Next.js. This also echoes the recommendation from Next.js themselves that SSG should be the preferred model when publishing sites.


SSG sites have a performance benefit over SSR, as the HTML files are created at build time rather than at runtime, meaning there is less work for the server to do when serving content.


But if you're looking to use features like dynamic routing, have a very large site with hundreds (or thousands) of pages, or want to fetch data on the server before sending it to the client, SSR will be a better fit for you and worth exploring.


If you’re still unsure which approach to use, check out this excellent guide from Next.js themselves.


Can I use Azure Functions or BYO Backends as well as API routes?


No. If you're deploying a hybrid Next.js application, no additional backend will be available for the site, as API routes can be used to achieve much of the same functionality.


Next steps


This is all exciting, and if you're like me and can't wait to try out the new features, check out the sample repo from this post, then head over to our documentation and get started with your next Next.js application today.

AKS, the elephant in the hub & spoke room, deep dive

This article is contributed. See the original author and article here.

Hi,


 


In this article, I will walk you through some typical challenges when building AKS platforms. I will also reflect on the impact AKS has on the Hub & Spoke topology, which is widely adopted by organizations.


 


Quick recap of the Hub & Spoke


 


I won't spend much time on this because most of you already know what the Hub & Spoke model is all about. Microsoft has documented it here; the proposed diagram is a little naïve, but the essentials are there. The Hub & Spoke model is a network-centric architecture where everything ends up in a virtual network in one way or another, which gives companies greater control over network traffic. There are many variants, but the spokes are virtual networks dedicated to business workloads, while the hubs play a control role, mostly to rule and inspect east-west traffic (spoke to spoke and DC to DC) and south-north traffic (coming from outside the private perimeter and going outside). On top of increased control over network traffic, the Hub & Spoke model aims at sharing some services across workloads, such as DNS to name just one.



Most companies rely on network virtual appliances (NVA) to rule the network traffic, although we see a growing adoption of Azure Firewall.


 


Today, most PaaS services can be plugged into the Hub & Spoke model in one way or another:


 



  • Through VNET Integration for outbound traffic

  • Through Private Link for inbound traffic

  • Through Microsoft-managed virtual networks for many data services.

  • Natively, such as App Service Environments, Azure API Management, etc., and of course AKS!


That is why we see a growing adoption of this model. The ultimate purpose of Hub & Spoke is to isolate workloads from the Internet and to have increased control over internet-facing workloads that are functionally required to be public (e.g., a mobile app talking to an API, a B2C offering, an e-business site, etc.).


 


The Hub & Spoke model gives companies the opportunity to:


 



  • Route traffic as they wish

  • Use layer-4 & 7 firewalls

  • Use IDS/IPS and TLS inspection

  • Do network micro-segmentation and workload isolation


Some companies push micro-segmentation very far, for example by allocating a dedicated virtual network to each and every asset, peered only with the required capabilities (internet in/out, DC, etc.), while others share some zones across applications. However, no matter what they do, they will still rely on network security groups and next-gen firewalls to govern their traffic.


 


AKS, the elephant in the Hub & Spoke room


 


AKS is not a service like the others: it has a vast ecosystem and a different approach to networking. An AKS cluster is typically meant to host more than a single application, and you can't afford to "simply" rely on the Hub & Spoke to manage network traffic. Kubernetes aims at abstracting away infrastructure components such as nodes, load balancers, etc. Most K8s solutions are based on dynamic rules and programmable networks, which is light years away from the rather static approach of NSGs and NVAs. A single AKS cluster might host hundreds of applications. That is why I consider AKS the elephant in the Hub & Spoke room. Somehow, AKS "breaks" the Hub & Spoke model, at least for East-West traffic. South-North traffic remains more controllable using traditional techniques, as it involves the cluster boundaries (in and out).


 


Network plugins


 


Before talking about how you can isolate apps in AKS, let’s have a look at the different networking options for the cluster itself:


 



  • Kubenet: the K8s-native network plugin, often used by companies because of its IP friendliness. Only nodes get a routable IP, while pods get NATed IPs. Kubenet comes with some limitations, such as a maximum of 400 nodes per cluster, incurred by the underlying route table's UDR limit (400 routes max). Note that 400 nodes is a theoretical limit, because you are likely to add your own routes on top of the AKS ones, which further reduces the maximum number of nodes you can have. Also, if you provision new node pools during cluster upgrades, that limit will be even lower. At this stage, you can't use virtual nodes with Kubenet. NAT is also supposed to incur a performance penalty, but I have never perceived any visible effect. By default, you can't leverage K8s network policies with Kubenet, but Calico network policy comes to the rescue. At last, because of NAT, you can't use NSGs and NVAs the same way you would with VMs or Azure CNI (more on that later).


  • Azure CNI: the Azure Container Network Interface for K8s. In a nutshell, with Azure CNI every pod instance gets a routable IP assigned, hence a potentially high usage of IPs. With Azure CNI, you can leverage Azure networking as if AKS were just a bunch of mere VMs. You also have K8s network policies available.


  • Bring your own CNI: features will vary according to the CNI vendor. Mind the fact that you won’t have MS support for CNI-related issues.


  • AKS CNI Overlay (early days as of 10/2022): this makes me think of some sort of managed Kubenet. It has the benefits of Kubenet (IP friendly) but overcomes some of its limitations.


 


We will see later what impact the network plugin may have over traffic management, but let us focus first on a more concrete example.


 


South-North and East-West traffic


 


In a typical Hub & Spoke approach, an N-tier architecture for a single application could look like this (simplified views):


 


Figure 1 – East-West and South-North traffic – Multiple Hubs


 


where the different layers of the application talk to each other through the system routes, controlled by Network Security Groups (which is commonly accepted). Only traffic coming from outside the VNET (South) or leaving the VNET (North) would be routed through an NVA or Azure Firewall. In the above diagram, there are dedicated hubs for North and South traffic.


 


An alternative to this could be:


 


Figure 2 – Alternative for East-West and South-North traffic – Single Hub


 


where you'd have a single hub for South-North traffic, and where you might optionally route the internal VNET traffic (East-West) of that single application through an NVA or Azure Firewall, if you really want to enforce IDPS everywhere and/or TLS inspection.


 


With UDRs and peering, everything is possible. The number of hubs you use is up to you. While multiple hubs improve visibility, they incur extra costs (at least one firewall or NVA per hub, and even more for an HA and/or DR setup). Costs can rise pretty quickly.


 


For East-West traffic involving multiple applications, you would simply handle this with a hub in the middle (either the main hub or an integration hub); this setup is very typical:


 


Figure 3 – East-West traffic with the Hub in the middle


 


Whatever you put (VMs, ASE, APIM, other PaaS services…) inside the different subnets, all of this makes perfect sense in a Hub & Spoke network, but what about AKS?


 


As a reminder, an AKS cluster is a set of node pools (system and user node pools). A best practice is to have at least one dedicated system node pool and one or more user node pools, though this is in no way enforced by Azure. Each node pool, except the system one, will result in 0 to n nodes (virtual machines), depending on the defined scaling settings. Ultimately, these nodes end up in one or more subnets, as illustrated below:


 


Figure 4 – Subnets for worker nodes


 


Before diving into East-West traffic, let us focus on South-North for a second, because that is the easiest part. You can simply peer VNET(s) to the AKS one to filter what comes into the cluster (where you'll also have an ingress controller) and what goes out of it (where you could have an egress controller). Egress traffic could be initiated by a pod reaching out to Azure data services, other VNETs, or the Internet.


 


Regarding East-West and the above diagram, a few questions arise:


 



  • Are you going to have one subnet for system node pools and one for user node pools?

  • Are you going to combine them all together?

  • Or will it be one subnet per user node pool?


Note that there is already one sure thing: you cannot use different VNETs to rule East-West traffic across applications hosted in the same cluster. Your best possible boundary is the subnet. But even though subnets can be used as a boundary, should you use NSGs, NVAs, etc. for internal traffic? Relying on them is for sure not a cloud-native approach, but that does not mean you can't use them. Let's first assume that you want to go the cloud-native way. For this, you'd rely on:


 



  • A service mesh such as Istio, Linkerd, Open Service Mesh, NGINX Service Mesh, etc. to apply layer-7 policies (access controls, mTLS, etc.)

  • K8s network policies or Calico network policies to apply layer-4 policies; note that Calico also integrates with Istio. (A minimal sketch of such a policy is shown below.)
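For illustration, here is a minimal sketch of a layer-4 rule, assuming tenants are isolated by namespace and that each tenant namespace carries a hypothetical tenant label; the names and labels are placeholders, not taken from a real cluster:

# Hypothetical policy: only pods running in namespaces labelled
# tenant=tenant1 may reach pods in the tenant1 namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant1-ingress-only
  namespace: tenant1
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tenant: tenant1

Because the policy selects namespaces by label rather than by IP, it keeps working when pods are rescheduled, which is exactly what static NSG rules cannot offer here.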


 


You would rely on these mechanisms independently of the underlying nodes used by the cluster. This approach is fine, as long as:


 



  • You master service meshes and network policies, which is not an easy task :).

  • The process initiating or receiving traffic is a container running in the cluster.


Relying "only" on a service mesh and network policies to manage internal cluster traffic can still leave some doors open to the following types of attacks:


 



  1. Container-escape vulnerabilities, where the process gains access to the underlying OS. In such a case, the execution context will be insensitive to your mesh and network policies.

  2. Operators logged onto the cluster worker nodes themselves, or host-level processes. As above, these could perform lateral movements that escape the control of your mesh and network policies.


With proper Azure Policy in place, you can mitigate item 1, at least by making sure that containers do not run as root, cannot escalate privileges, and so on. Making sure that images only come from trusted registries and are built on trusted base images will also greatly help, but working on container capabilities (CAPS) is the best way to go. Of course, if there is an OS-level vulnerability or an admission-controller vulnerability, you could still be at risk.


 


Item number 2 can be mitigated with strict access policies, making sure that not everybody has access to the private SSH keys. This is more of a malicious insider threat, which you should handle like any other malicious insider threat.


 


But by the way, why would you even want to go the cloud-native way and use a service mesh combined with Calico (or K8s network policies)?


 


– To build a zero-trust environment, based on layer-4 and layer-7 rules.


– To rely 100% on automation, since everything is “as code”, leading to predictable and repeatable outputs


– To benefit from the built-in elasticity and resilience of K8s, where pods can be re-scheduled in case of adverse events to any node that can accommodate the required resources.


– To benefit from smart load balancers that understand application protocols such as gRPC, HTTP/2, etc.


– To benefit from rolling upgrades and modern deployment techniques such as blue/green, A/B testing, etc.


– To have greater visibility thanks to the built-in observability mechanisms that are part of service meshes


– To have more robust applications, which you can stress with chaos engineering techniques, again built into multiple meshes


 


All of these are very valid reasons to go the Cloud native way, but most organizations are just not ready yet for this mindset shift.


 


The risks highlighted above might be acceptable under the following circumstances:


 


– You host multiple applications, which are closely related to each other (same family, same business line, etc.)


– You host a single application in the cluster (yes, it happens)


– You host multiple applications belonging to a single tenant


 


It is of course up to you to define where to put the limits. However, if you host multiple assets belonging to different customers (in other words, if you have a true multi-tenant cluster), you will want to make sure a given customer cannot access another one. In that case, relying only on service meshes and K8s network policies might be too risky, and this is where subnet segregation and NSGs come into play.


 


Multitenant clusters


Let us explore the possibilities for true multi-tenant clusters. A setup you could end up with would look like this:


 


Figure 5 – Multitenancy in AKS – Possible setup


I left South-North traffic out to focus only on East-West. You could isolate each tenant in a dedicated subnet and define whatever NSG rules you want. UDRs can be added on top.


 


When network plugin matters


 


With such a topology, network plugins matter. Let’s see how:


 


Figure 6 – POD to POD with Kubenet


With the Kubenet plugin, when POD1 and POD2 talk together, the NSGs see the NAT IP, not even the IP of the underlying node the pod is running on. Because the mapping between an AKS node and a POD CIDR range is unpredictable, there is no way to use the pod or underlying node IP in the NSG. With Kubenet, you'd be forced to allow the entire POD CIDR range for every subnet you have, or you might even block pods belonging to the same tenant from talking together. To isolate tenants, you'd then need to rely on:


 


– NSGs on the subnet ranges, not to rule how pods can or cannot talk together, but to isolate tenant nodes from each other and mitigate lateral movement in case of container escape and direct access to the underlying VMs.


– K8s network policies to prevent lateral movements from within the cluster, optionally adding layer-7 policies with a mesh.


 


You would have to combine both NSGs and K8s network policies.


 


With CNI:


 


Figure 7 – POD to POD with CNI


POD1 and POD2 would see each other's assigned routable IP, and so would the NSGs. Therefore, whitelisting internal subnet traffic at the NSG level alone would do the job. However, I would discourage the use of POD IPs in NSG rules because POD IPs are very volatile. Out of the box, there is no way to assign a static IP to a POD; this can be achieved with some CNI plugins, but I don't think it is a good idea. I would recommend working with subnet-wide and/or node-level rules, nothing deeper. Of course, you can also use internal K8s network policies to apply fine-grained rules.
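To illustrate the subnet-wide approach, here is a hedged sketch using the Azure CLI; the resource group, NSG name, and CIDR ranges are all hypothetical placeholders:

# Hypothetical rule: deny traffic from tenant2's subnet into tenant1's subnet.
az network nsg rule create \
  --resource-group rg-aks \
  --nsg-name nsg-tenant1 \
  --name deny-tenant2 \
  --priority 100 \
  --direction Inbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes 10.240.2.0/24 \
  --destination-address-prefixes 10.240.1.0/24 \
  --destination-port-ranges '*'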


 


Managing resilience and availability


 


OK, but how do you make sure pods from tenant1 are not scheduled onto tenant2's nodes? There are multiple ways to achieve this in K8s: node selectors, node affinity, and taints & tolerations. By attaching appropriate taints or labels to your node pools (e.g., tenant=tenant1, tenant=tenant2, etc.), you can achieve this easily, as the sketch below shows.
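This is a minimal sketch only; it assumes (hypothetically) that the tenant1 node pool was created with a tenant=tenant1 label and a matching NoSchedule taint:

# Hypothetical deployment pinned to the tenant1 node pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant1-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tenant1-api
  template:
    metadata:
      labels:
        app: tenant1-api
    spec:
      nodeSelector:
        tenant: tenant1        # only schedule onto tenant1 nodes
      tolerations:
        - key: tenant          # tolerate the pool's NoSchedule taint
          operator: Equal
          value: tenant1
          effect: NoSchedule
      containers:
        - name: api
          image: nginx         # placeholder image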


 


Great, but isn't there something that bothers you here? What about elasticity and the built-in K8s features that re-schedule pods on healthy nodes in case of node failures? With these silos in place, you'd put your availability at risk. Well, this is indeed not ideal, but there are some possibilities:


 



  • Define each node pool as zone-redundant to maximize each tenant’s resilience

  • Define each node pool with at least two or three nodes (one per zone) to maximize each tenant's active/active availability


Because all nodes belonging to the same node pool are tainted/labelled the same way, K8s will still be able to re-schedule pods accordingly should a given node suffer from a hardware/software failure. Of course, such a setup would require many nodes (mind Kubenet's 400-node limit) and would incur huge costs.


 


CNI or Kubenet, impact on Egress


 


What will a data service (or anything else) see when a POD calls it?


 


Figure 8 – Egress traffic



  • with Kubenet, the NSG around the data subnet will see the underlying node IP

  • with CNI, the NSG will see the POD IP


Remember that with Kubenet, for internal traffic, even if it involves multiple nodes on different subnets, NSGs will see the NAT IP. However, for traffic leaving the cluster, the underlying node IP is seen. With CNI, it makes no difference whether traffic targets an internal or external service.


 


Combining best of both worlds


 


Beyond multi-tenancy considerations, one setup which can be interesting is the following:


 


Figure 9 – Combining Cloud native with traditional approach


 


where you isolate key cluster features such as ingress and egress. For example, a dedicated egress subnet for the egress controller (Istio's, for example) could help you enforce egress rules everywhere, by allowing only that subnet to get out of the cluster, while still giving flexibility to the teams managing the cluster.


 


Conclusion


 


The most cost-friendly and most cloud-native approach to handling East-West traffic consists in relying on K8s built-in mechanisms and ecosystem solutions such as service meshes and Calico, to abstract away the infrastructure and leverage the full K8s potential in terms of elasticity, resilience, and self-healing capabilities. However, we have seen the limits of that approach in some very specific scenarios. I would still advocate working the cloud-native way, unless you really deal with highly sensitive workloads and/or true multi-tenancy.


 


South-North traffic is not a game changer, whether you go cloud native or not: you will keep using WAFs and Azure Firewall/NVAs to manage that type of traffic. Bringing back NSGs (and potentially NVAs) for East-West traffic can be challenging, and the chosen network plugin dramatically impacts the NSG/UDR configuration. It can also potentially harm other non-functional requirements such as high availability, scalability, and maintainability.

CISA Releases Three Industrial Control Systems Advisories

This article is contributed. See the original author and article here.

CISA has released three Industrial Control Systems (ICS) advisories on October 11, 2022. These advisories provide timely information about current security issues, vulnerabilities, and exploits surrounding ICS.  

CISA encourages users and administrators to review the newly released ICS advisories for technical details and mitigations:  

Recently Released: Updates to SqlPackage, DacFx, and GitHub sql-action

This article is contributed. See the original author and article here.

We recently released v19.2 of SqlPackage, a v161-preview of DacFx with Microsoft.Data.SqlClient 5, and v2 of GitHub sql-action.  In this article we’ll discuss some of the latest updates and changes across SQL tooling for development and deployment. 


 


SqlPackage v19.2 


 


SqlPackage v19.2 adds support for recently introduced SQL features, including Dynamic Data Masking and XML compression. With Dynamic Data Masking support, granular unmask permissions are supported across all operations, including import/export and extract/publish.


 


A significant change in v19.2 is that we upgraded the SqlPackage build from .NET Core 3.1 to .NET 6, which has shown performance improvements ranging from 5% to 30% in our standard test suite. We're looking forward to additional performance opportunities in the future based on code changes enabled by .NET 6.


 


The previous release of SqlPackage added the ability to extract a database to SQL files in addition to the established extract to a dacpac file. With the ExtractTarget property on SqlPackage extract, you can obtain your database's objects as a collection of SQL files organized in folders by object type, schema, and more.
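For illustration, an extract to a folder of SQL files might be invoked as follows; the connection values and target path are placeholders, and SchemaObjectType is one of several supported groupings:

# Hypothetical invocation: extract database objects as SQL files,
# organized in folders by schema and object type.
SqlPackage /Action:Extract \
  /SourceConnectionString:"Server=yourserver;Database=yourdb;..." \
  /TargetFile:"./extracted/yourdb" \
  /p:ExtractTarget=SchemaObjectType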


 


DacFx and Microsoft.Data.SqlClient 5 


 


The data-tier application framework (DacFx) utilizes the Microsoft.Data.SqlClient SQL driver, which recently introduced support for TDS 8 in SqlClient v5.0.0. A preview NuGet package for DacFx v161-preview is now available, the first DacFx preview release for v161. With v161, DacFx has moved from SqlClient version 3 to version 5. There are a number of breaking changes introduced in this multi-version upgrade; for example, as of SqlClient v4, the default connection value for Encrypt changed to true.
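To illustrate the practical impact of that change (server and database names are placeholders), a connection string that relied on the old default now needs to opt out of encryption explicitly, or the client will expect a trusted server certificate:

// Hypothetical C# sketch of the Encrypt default change.
// Before (SqlClient v3 behavior): encryption was off unless requested.
var before = "Server=yourserver;Database=yourdb;Integrated Security=true;";
// From SqlClient v4 onward, Encrypt=True is the default; to keep the
// old behavior, opt out explicitly:
var after = "Server=yourserver;Database=yourdb;Integrated Security=true;Encrypt=False;";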


 


GitHub sql-action


 


With the preview of SDK-style SQL projects and Microsoft.Build.Sql, we announced a desire to make deploying SQL databases from SQL-schema-as-code easier than ever. The recently released v2 of GitHub sql-action can build and deploy a SQL project to a SQL database, leveraging SDK-style SQL projects and SqlPackage.


 


Another notable change in sql-action v2 is the conversion from sqlcmd to go-sqlcmd for script execution. Authentication options were added in the v2 release for both SQL project deployment and script execution, including Azure Active Directory Managed Identity and Azure Active Directory Service Principal. GitHub sql-action v2 supports both Windows and Linux pipelines.


 


Learn More 


FBI and CISA Publish a PSA on Information Manipulation Tactics for 2022 Midterm Elections

This article is contributed. See the original author and article here.

The Federal Bureau of Investigation (FBI) and CISA have published a joint public service announcement that:

  • Describes methods that foreign actors use to spread and amplify false information—including reports of alleged malicious cyber activity—in attempts to undermine trust in election infrastructure.
  • Confirms “the FBI and CISA have no information suggesting any cyber activity against U.S. election infrastructure has impacted the accuracy of voter registration information, prevented a registered voter from casting a ballot, or compromised the integrity of any ballots cast.”

The PSA also describes the extensive safeguards in place to protect election infrastructure and includes recommendations to assist the public in understanding how to find trustworthy sources of election-related information.

Top CVEs Actively Exploited by People’s Republic of China State-Sponsored Cyber Actors   

This article is contributed. See the original author and article here.

CISA, the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA) have released a joint Cybersecurity Advisory (CSA) providing the top Common Vulnerabilities and Exposures (CVEs) used since 2020 by People’s Republic of China (PRC) state-sponsored cyber actors. PRC state-sponsored cyber actors continue to exploit known vulnerabilities to actively target U.S. and allied networks, including software and hardware companies to illegally obtain intellectual property and develop access into sensitive networks.

CISA, the FBI, and the NSA urge U.S. and allied governments, critical infrastructure, and private sector organizations to apply the recommendations listed in the Top CVEs Actively Exploited by People’s Republic of China State-Sponsored Cyber Actors to increase their defensive posture and reduce the threat of compromise from PRC state-sponsored malicious cyber actors.

For more information on PRC state-sponsored malicious cyber activity, see CISA’s China Cyber Threat Overview and Advisories webpage, the FBI’s Industry Alerts, and the NSA’s Cybersecurity Advisories & Guidance.

Top CVEs Actively Exploited By People’s Republic of China State-Sponsored Cyber Actors

This article is contributed. See the original author and article here.

Summary

This joint Cybersecurity Advisory (CSA) provides the top Common Vulnerabilities and Exposures (CVEs) used since 2020 by People’s Republic of China (PRC) state-sponsored cyber actors as assessed by the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), and Federal Bureau of Investigation (FBI). PRC state-sponsored cyber actors continue to exploit known vulnerabilities to actively target U.S. and allied networks as well as software and hardware companies to steal intellectual property and develop access into sensitive networks.

This joint CSA builds on previous NSA, CISA, and FBI reporting to inform federal and state, local, tribal and territorial (SLTT) government; critical infrastructure, including the Defense Industrial Base Sector; and private sector organizations about notable trends and persistent tactics, techniques, and procedures (TTPs).

NSA, CISA, and FBI urge U.S. and allied governments, critical infrastructure, and private sector organizations to apply the recommendations listed in the Mitigations section and Appendix A to increase their defensive posture and reduce the threat of compromise from PRC state-sponsored malicious cyber actors.

For more information on PRC state-sponsored malicious cyber activity, see CISA’s China Cyber Threat Overview and Advisories webpage, FBI’s Industry Alerts, and NSA’s Cybersecurity Advisories & Guidance

Download the PDF version of this report: pdf, 409 KB

Technical Details

NSA, CISA, and FBI continue to assess PRC state-sponsored cyber activities as being one of the largest and most dynamic threats to U.S. government and civilian networks. PRC state-sponsored cyber actors continue to target government and critical infrastructure networks with an increasing array of new and adaptive techniques—some of which pose a significant risk to Information Technology Sector organizations (including telecommunications providers), Defense Industrial Base (DIB) Sector organizations, and other critical infrastructure organizations.

PRC state-sponsored cyber actors continue to exploit known vulnerabilities and use publicly available tools to target networks of interest. NSA, CISA, and FBI assess PRC state-sponsored cyber actors have actively targeted U.S. and allied networks as well as software and hardware companies to steal intellectual property and develop access into sensitive networks. See Table 1 for the top used CVEs.

Table I: Top CVEs most used by Chinese state-sponsored cyber actors since 2020

Vendor

CVE

Vulnerability Type

Apache Log4j

CVE-2021-44228

Remote Code Execution

Pulse Connect Secure

CVE-2019-11510

Arbitrary File Read

GitLab CE/EE

CVE-2021-22205

Remote Code Execution

Atlassian

CVE-2022-26134

Remote Code Execution

Microsoft Exchange

CVE-2021-26855

Remote Code Execution

F5 Big-IP

CVE-2020-5902

Remote Code Execution

VMware vCenter Server

CVE-2021-22005

Arbitrary File Upload

Citrix ADC

CVE-2019-19781

Path Traversal

Cisco Hyperflex

CVE-2021-1497

Command Line Execution

Buffalo WSR

CVE-2021-20090

Relative Path Traversal

Atlassian Confluence Server and Data Center

CVE-2021-26084

Remote Code Execution

Hikvision Webserver

CVE-2021-36260

Command Injection

Sitecore XP

CVE-2021-42237

Remote Code Execution

F5 Big-IP

CVE-2022-1388

Remote Code Execution

Apache

CVE-2022-24112

Authentication Bypass by Spoofing

ZOHO

CVE-2021-40539

Remote Code Execution

Microsoft

CVE-2021-26857

Remote Code Execution

Microsoft

CVE-2021-26858

Remote Code Execution

Microsoft

CVE-2021-27065

Remote Code Execution

Apache HTTP Server

CVE-2021-41773

Path Traversal

These state-sponsored actors continue to use virtual private networks (VPNs) to obfuscate their activities and target web-facing applications to establish initial access. Many of the CVEs indicated in Table 1 allow the actors to surreptitiously gain unauthorized access into sensitive networks, after which they seek to establish persistence and move laterally to other internally connected networks. For additional information on PRC state-sponsored cyber actors targeting network devices, please see People’s Republic of China State-Sponsored Cyber Actors Exploit Network Providers and Devices.

Mitigations

NSA, CISA, and FBI urge organizations to apply the recommendations below and those listed in Appendix A.

  • Update and patch systems as soon as possible. Prioritize patching vulnerabilities identified in this CSA and other known exploited vulnerabilities.
  • Utilize phishing-resistant multi-factor authentication whenever possible. Require all accounts with password logins to have strong, unique passwords, and change passwords immediately if there are indications that a password may have been compromised. 
  • Block obsolete or unused protocols at the network edge. 
  • Upgrade or replace end-of-life devices.
  • Move toward the Zero Trust security model. 
  • Enable robust logging of Internet-facing systems and monitor the logs for anomalous activity.
     

Appendix A

Table II: Apache CVE-2021-44228

Apache CVE-2021-44228 CVSS 3.0: 10 (Critical)

Vulnerability Description

Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against malicious actor controlled LDAP and other JNDI related endpoints. A malicious actor who can control log messages or log message parameters could execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.

Recommended Mitigations

  • Apply patches provided by vendor and perform required system updates.

Detection Methods

Vulnerable Technologies and Versions

There are numerous vulnerable technologies and versions associated with CVE-2021-44228. For a full list, check https://nvd.nist.gov/vuln/detail/CVE-2021-44228.

Table III: Pulse CVE-2019-11510

Pulse CVE-2019-11510 CVSS 3.0: 10 (Critical)

Vulnerability Description

This vulnerability has been modified since it was last analyzed by NVD. It is awaiting reanalysis, which may result in further changes to the information provided. In Pulse Secure Pulse Connect Secure (PCS) 8.2 before 8.2R12.1, 8.3 before 8.3R7.1, and 9.0 before 9.0R3.4, an unauthenticated remote malicious actor could send a specially crafted URI to perform an arbitrary file read.

Recommended Mitigations

  • Apply patches provided by vendor and perform required system updates.

Detection Methods

  • Use CISA’s “Check Your Pulse” Tool.

Vulnerable Technologies and Versions

Pulse Connect Secure (PCS) 8.2 before 8.2R12.1, 8.3 before 8.3R7.1, and 9.0 before 9.0R3.4

Table IV: GitLab CVE-2021-22205

GitLab CVE-2021-22205 CVSS 3.0: 10 (Critical)

Vulnerability Description

An issue has been discovered in GitLab CE/EE affecting all versions starting from 11.9. GitLab was not properly validating image files passed to a file parser, which resulted in a remote command execution.

Recommended Mitigations

  • Update to 12.10.3, 13.9.6, and 13.8.8 for GitLab.
  • Hotpatch is available via GitLab.

Detection Methods

  • Investigate logfiles.
  • Check GitLab Workhorse.

Vulnerable Technologies and Versions

Gitlab CE/EE.

Table V: Atlassian CVE-2022-26134

Atlassian CVE-2022-26134 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

In affected versions of Confluence Server and Data Center, an OGNL injection vulnerability exists that could allow an unauthenticated malicious actor to execute arbitrary code on a Confluence Server or Data Center instance. The affected versions are from 1.3.0 before 7.4.17, 7.13.0 before 7.13.7, 7.14.0 before 7.14.3, 7.15.0 before 7.15.2, 7.16.0 before 7.16.4, 7.17.0 before 7.17.4, and 7.18.0 before 7.18.1.

Recommended Mitigations 

  • Immediately block all Internet traffic to and from affected products AND apply the update per vendor instructions. 
  • Ensure Internet-facing servers are up-to-date and have secure compliance practices.
  • Short term workaround is provided here.

Detection Methods

N/A

Vulnerable Technologies and Versions

All supported versions of Confluence Server and Data Center

Confluence Server and Data Center versions after 1.3.0

Table VI: Microsoft CVE-2021-26855

Microsoft CVE-2021-26855 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

Microsoft has released security updates for Windows Exchange Server. To exploit these vulnerabilities, an authenticated malicious actor could send malicious requests to an affected server. A malicious actor who successfully exploited these vulnerabilities could execute arbitrary code and compromise the affected systems. If successfully exploited, these vulnerabilities could allow an adversary to obtain access to sensitive information, bypass security restrictions, cause denial-of-service conditions, and/or perform unauthorized actions on the affected Exchange server, which could aid in further malicious activity.

Recommended Mitigations

  • Apply the appropriate Microsoft Security Update.
  • Microsoft Exchange Server 2013 Cumulative Update 23 (KB5000871)
  • Microsoft Exchange Server 2016 Cumulative Update 18 (KB5000871)
  • Microsoft Exchange Server 2016 Cumulative Update 19 (KB5000871)
  • Microsoft Exchange Server 2019 Cumulative Update 7 (KB5000871)
  • Microsoft Exchange Server 2019 Cumulative Update 8 (KB5000871)
  • Restrict untrusted connections.

Detection Methods

  • Analyze Exchange product logs for evidence of exploitation.
  • Scan for known webshells.

Vulnerable Technologies and Versions

Microsoft Exchange 2013, 2016, and 2019.

Table VII: F5 CVE-2020-5902

Table VIII: VMware CVE-2021-22005

VMware CVE-2021-22005 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

The vCenter Server contains an arbitrary file upload vulnerability in the Analytics service. A malicious actor with network access to port 443 on vCenter Server may exploit this issue to execute code on vCenter Server by uploading a specially crafted file.

Recommended Mitigations

  • Apply Vendor Updates.

Detection Methods

N/A

Vulnerable Technologies and Versions

VMware Cloud Foundation

VMware VCenter Server

Table IX: Citrix CVE-2019-19781

Citrix CVE-2019-19781 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

This vulnerability has been modified since it was last analyzed by NVD. It is awaiting reanalysis, which may result in further changes to the information provided. An issue was discovered in Citrix Application Delivery Controller (ADC) and Gateway 10.5, 11.1, 12.0, 12.1, and 13.0. They allow Directory Traversal.

Recommended Mitigations

Detection Methods

N/A

Vulnerable Technologies and Versions

Citrix ADC, Gateway, and SD-WAN WANOP

Table X: Cisco CVE-2021-1497

Cisco CVE-2021-1497 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

Multiple vulnerabilities in the web-based management interface of Cisco HyperFlex HX could allow an unauthenticated, remote malicious actor to perform a command injection against an affected device. For more information about these vulnerabilities, see the Technical details section of this advisory.

Recommended Mitigations

  • Apply Cisco software updates.

Detection Methods

  • Look at the Snort Rules provided by Cisco.

Vulnerable Technologies and Versions

Cisco Hyperflex Hx Data Platform 4.0(2A)

Table XI: Buffalo CVE-2021-20090

Buffalo CVE-2021-20090 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

A path traversal vulnerability in the web interfaces of Buffalo WSR-2533DHPL2 firmware version <= 1.02 and WSR-2533DHP3 firmware version <= 1.24 could allow unauthenticated remote malicious actors to bypass authentication.

Recommended Mitigations

  • Update firmware to latest available version.

 

Detection Methods

Vulnerable Technologies and Versions

Buffalo Wsr-2533Dhpl2-Bk Firmware

Buffalo Wsr-2533Dhp3-Bk Firmware

Table XII: Atlassian CVE-2021-26084

Atlassian CVE-2021-26084 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

In affected versions of Confluence Server and Data Center, an OGNL injection vulnerability exists that would allow an unauthenticated malicious actor to execute arbitrary code on a Confluence Server or Data Center instance. The affected versions are before version 6.13.23 and from version 6.14.0 before 7.4.11, version 7.5.0 before 7.11.6, and version 7.12.0 before 7.12.5.

Recommended Mitigations

  • Update confluence version to 6.13.23, 7.4.11, 7.11.6, 7.12.5, and 7.13.0.
  • Avoid using end-of-life devices.
  • Use Intrusion Detection Systems (IDS).

Detection Methods

N/A

Vulnerable Technologies and Versions

Atlassian Confluence

Atlassian Confluence Server

Atlassian Data Center

Atlassian Jira Data Center

Table XIII: Hikvision CVE-2021-36260

Hikvision CVE-2021-36260 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

This vulnerability has been modified since it was last analyzed by NVD. It is awaiting reanalysis, which may result in further changes to the information provided. A command injection vulnerability exists in the web server of some Hikvision products. Due to the insufficient input validation, a malicious actor can exploit the vulnerability to launch a command injection by sending some messages with malicious commands.

Recommended Mitigations

  • Apply the latest firmware updates.

Detection Methods

N/A

Vulnerable Technologies and Versions

Various Hikvision Firmware to include Ds, Ids, and Ptz

References

https://www.cisa.gov/uscert/ncas/current-activity/2021/09/28/rce-vulnerability-hikvision-cameras-cve-2021-36260  

Table XIV: Sitecore CVE-2021-42237

Sitecore CVE-2021-42237 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

Sitecore XP 7.5 Initial Release to Sitecore XP 8.2 Update-7 is vulnerable to an insecure deserialization attack where it is possible to achieve remote command execution on the machine. No authentication or special configuration is required to exploit this vulnerability.

Recommended Mitigations

  • Update to latest version.
  • Delete the Report.ashx file from /sitecore/shell/ClientBin/Reporting/Report.ashx.

Detection Methods

Vulnerable Technologies and Versions

Sitecore Experience Platform 7.5, 7.5 Update 1, and 7.5 Update 2

Sitecore Experience Platform 8.0, 8.0 Service Pack 1, and 8.0 Update 1-Update 7

Sitecore Experience Platform 8.0 Service Pack 1

Sitecore Experience Platform 8.1, and Update 1-Update 3

Sitecore Experience Platform 8.2, and Update 1-Update 7

Table XV: F5 CVE-2022-1388

F5 CVE-2022-1388 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

This vulnerability has been modified since it was last analyzed by NVD. It is awaiting reanalysis, which may result in further changes to the information provided. On F5 BIG-IP 16.1.x versions prior to 16.1.2.2, 15.1.x versions prior to 15.1.5.1, 14.1.x versions prior to 14.1.4.6, 13.1.x versions prior to 13.1.5, and all 12.1.x and 11.6.x versions, undisclosed requests may bypass iControl REST authentication. Note: Software versions which have reached End of Technical Support (EoTS) are not evaluated.

Recommended Mitigations

  • Block iControl REST access through the self IP address.
  • Block iControl REST access through the management interface.
  • Modify the BIG-IP httpd configuration.

Detection Methods

N/A

Vulnerable Technologies and Versions

Big IP versions:

16.1.0-16.1.2

15.1.0-15.1.5

14.1.0-14.1.4

13.1.0-13.1.4

12.1.0-12.1.6

11.6.1-11.6.5

Table XVI: Apache CVE-2022-24112

Apache CVE-2022-24112 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

A malicious actor can abuse the batch-requests plugin to send requests to bypass the IP restriction of Admin API. A default configuration of Apache APISIX (with default API key) is vulnerable to remote code execution. When the admin key was changed or the port of Admin API was changed to a port different from the data panel, the impact is lower. But there is still a risk to bypass the IP restriction of Apache APISIX’s data panel. There is a check in the batch-requests plugin which overrides the client IP with its real remote IP. But due to a bug in the code, this check can be bypassed.

Recommended Mitigations

  • In affected versions of Apache APISIX, you can avoid this risk by explicitly commenting out batch-requests in the conf/config.yaml and conf/config-default.yaml files and restarting Apache APISIX.
  • Update to 2.10.4 or 2.12.1.

Detection Methods

N/A

Vulnerable Technologies and Versions

Apache APISIX between 1.3 and 2.12.1 (excluding 2.12.1)

LTS versions of Apache APISIX between 2.10.0 and 2.10.4

Table XVII: ZOHO CVE-2021-40539

ZOHO CVE-2021-40539 CVSS 3.0: 9.8 (Critical)

Vulnerability Description

Zoho ManageEngine ADSelfService Plus version 6113 and prior is vulnerable to REST API authentication bypass with resultant remote code execution.

Recommended Mitigations

  • Upgrade to latest version.

Detection Methods

  • Run ManageEngine’s detection tool.
  • Check for specific files and logs.

Vulnerable Technologies and Versions

Zoho Corp ManageEngine ADSelfService Plus

Table XVIII: Microsoft CVE-2021-26857

Microsoft CVE-2021-26857 CVSS 3.0: 7.8 (High)

Vulnerability Description

Microsoft Exchange Server remote code execution vulnerability. This CVE ID differs from CVE-2021-26412, CVE-2021-26854, CVE-2021-26855, CVE-2021-26858, CVE-2021-27065, and CVE-2021-27078.

Recommended Mitigations

  • Update to support latest version.
  • Install Microsoft security patch.
  • Use Microsoft Exchange On-Premises Mitigation Tool.

Detection Methods

  • Run Exchange script: https://github.com/microsoft/CSS-Exchange/tree/main/Security.
  • Hashes can be found here: https://www.microsoft.com/security/blog/2021/03/02/hafnium-targeting-exchange-servers/#scan-log.

Vulnerable Technologies and Versions

Microsoft Exchange Servers

Table XIX: Microsoft CVE-2021-26858

Table XX: Microsoft CVE-2021-27065

Table XXI: Apache CVE-2021-41773

Apache CVE-2021-41773 CVSS 3.0: 7.5 (High)

Vulnerability Description

This vulnerability has been modified since it was last analyzed by NVD. It is awaiting reanalysis, which may result in further changes to the information provided. A flaw was found in a change made to path normalization in Apache HTTP Server 2.4.49. A malicious actor could use a path traversal attack to map URLs to files outside the directories configured by Alias-like directives. If files outside of these directories are not protected by the usual default configuration “require all denied,” these requests can succeed. Enabling CGI scripts for these aliased paths could allow for remote code execution. This issue is known to be exploited in the wild. This issue only affects Apache 2.4.49 and not earlier versions. The fix in Apache HTTP Server 2.4.50 is incomplete (see CVE-2021-42013).

Recommended Mitigations

  • Apply update or patch.

Detection Methods

  • Commercially available scanners can detect CVE.

Vulnerable Technologies and Versions

Apache HTTP Server 2.4.49 and 2.4.50

Fedoraproject Fedora 34 and 35

Oracle Instantis Enterprise Track 17.1-17.3

Netapp Cloud Backup

Revisions

Initial Publication: October 6, 2022

This product is provided subject to this Notification and this Privacy & Use policy.