CISA Releases Two Industrial Control Systems Advisories

This article is contributed. See the original author and article here.

CISA released two Industrial Control Systems (ICS) advisories on October 6, 2022. These advisories provide timely information about current security issues, vulnerabilities, and exploits surrounding ICS.

CISA encourages users and administrators to review the newly released ICS advisories for technical details and mitigations.

Use Demand Driven Material Requirements Planning to increase service and decrease lead time

This article is contributed. See the original author and article here.

For better or worse, customers have come to expect short lead times. Responding to those expectations has become more complex from a supply chain perspective: product availability has become less predictable, which lowers forecast accuracy; parts have long lead times; and there’s pressure to maintain leaner inventories to reduce holding costs. Is it possible to have high customer service levels while not holding too much inventory? With Demand Driven Material Requirements Planning (DDMRP), it is.

What is Demand Driven Material Requirements Planning?

DDMRP is a formal method for modeling, planning, and managing supply chains. It has been proven to improve performance in volatile, complex, and ambiguous environments where cumulative lead times are longer than your customers’ tolerance. It’s based on maintaining stock buffers at strategic decoupling points, which absorb variability and help avoid the bullwhip effect.

DDMRP methodology consists of five sequential components:

1. Strategic inventory positioning: Determine decoupling points where stock buffers can be placed.

2. Buffer profiles and levels: Determine the amount of protection (“shock absorption”) at the decoupling points that’s needed to mitigate variability in both directions. Historical and forecasted usage rates and DDMRP part settings are used to create unique, three-zone, color-coded buffers.

3. Dynamic adjustments: After the initial buffer sizes are determined, allow the level of protection to flex up or down based on factors such as operating parameters, market changes, and known or planned future events.

4. Demand-driven planning: Generate supply orders (purchase orders, manufacturing orders, and stock transfer orders) from qualified (as opposed to planned) sales orders within a short planning horizon. The equation On-Hand + Open Supply − Qualified Sales Order Demand determines each day’s net flow position. If the net flow position is below the top of the yellow zone, a supply order is generated for the amount needed to reach the top of the green zone (see the sketch after this list).

[Figure: DDMRP color-coded buffer zones. From Demand Driven Institute]

5. Visible and collaborative execution: Manage open supply orders using intuitive, easily interpreted signals on open supply priorities against the on-hand buffer position. The lower the on-hand level, the higher the risk to maintaining flow and the higher the execution priority. That is, priority is assigned by buffer status, not due date. It’s easy to get an overview of the state of the buffers.
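
To make the planning step concrete, here is a minimal sketch (in C#) of the net flow calculation and reorder logic described in step 4. The type names, zone boundaries, and quantities are illustrative assumptions, not Dynamics 365 Supply Chain Management code.

    // Minimal sketch of DDMRP demand-driven planning (step 4).
    // Zone boundaries and quantities are illustrative, not Dynamics 365 code.
    public record Buffer(decimal TopOfRed, decimal TopOfYellow, decimal TopOfGreen);

    public static class DdmrpPlanner
    {
        // Net flow position = on-hand + open supply - qualified sales order demand.
        public static decimal NetFlowPosition(decimal onHand, decimal openSupply, decimal qualifiedDemand)
            => onHand + openSupply - qualifiedDemand;

        // When the net flow position drops below the top of the yellow zone,
        // order enough to bring it back up to the top of the green zone.
        public static decimal SupplyOrderQuantity(decimal netFlow, Buffer buffer)
            => netFlow < buffer.TopOfYellow ? buffer.TopOfGreen - netFlow : 0m;
    }

For example, with 40 on hand, 25 in open supply, and 30 in qualified demand, the net flow position is 35; if the top of the yellow zone is 60 and the top of the green zone is 90, a supply order for 55 is generated.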

Benefits of Demand Driven Material Requirements Planning

DDMRP has proven benefits across many industries:

Benefit                        Typical improvements
Improved customer service      97%–100% on-time order fulfillment rates
Lead time compression          Lead time reductions of more than 80%
Balanced inventory             Inventory reductions of 30%–45%
Lowest total operating cost    Costs related to expedited activity and false signals are largely eliminated
Improved planner productivity  Planners see priorities instead of constantly fighting the conflicting messages of MRP

From Demand Driven Institute

Dynamics 365 Supply Chain Management is DDMRP-compliant

Microsoft Dynamics 365 Supply Chain Management is DDMRP-compliant according to the Demand Driven Institute, the leading authority on demand-driven methodologies. To earn that designation, software must meet the Institute’s five compliance criteria; in other words, Dynamics 365 Supply Chain Management follows the DDMRP methodology as defined by the industry standard.

How to get started with DDMRP

DDMRP is a new concept for many companies. We suggest you start with a small pilot or simulation involving a subset of items to determine if DDMRP would be valuable for your organization. It’s simple to set up. Just enable Priority Driven MRP support for Planning Optimization and DDMRP for Planning Optimization in Dynamics 365 Supply Chain Management.

Sources

Demand Driven Institute. What is DDMRP?

The post Use Demand Driven Material Requirements Planning to increase service and decrease lead time appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Microsoft is a Market Champion in the KuppingerCole Analysts Leadership Compass, Customer Data Platforms

This article is contributed. See the original author and article here.

We are honored to announce that Microsoft Dynamics 365 Customer Insights was named a Market Champion in the KuppingerCole Analysts Leadership Compass, Customer Data Platforms.

The Leadership Compass report speaks to how a customer data platform (CDP) can help organizations address the challenges—siloed data, personalization, and multichannel orchestration among them—that they may face when seeking to improve their customer experience. Microsoft’s customers are overcoming these challenges with Dynamics 365 Customer Insights, a recognized product-leading, innovation-leading, and market-leading CDP with comprehensive, powerful capabilities.

One Microsoft customer that is committed to a data-centric approach to its customer experience initiatives is Valencia Club de Fútbol (CF). The club is taking charge of its customer data with Dynamics 365 Customer Insights and using the CDP to help its entire organization usher in a data-driven mindset. As a result, the club is creating more meaningful and personalized fan engagement. As explained by Franco Segarra, Head of Innovation for Valencia CF, “Becoming data-driven helps everyone get more out of their job.”

Three areas of recognized leadership

Microsoft customers like Valencia CF are powering hyper-personalized, delightful customer experiences at scale by embracing the need for a deep understanding of their customers. They are driving meaningful actions with confidence as a result of the recognized leadership of Dynamics 365 Customer Insights in three areas.

1. Product leadership

The functional strengths and complete services of the Microsoft CDP empower you to get the most complete view of your customers by unifying all your customer data with ease. Best-of-breed technologies such as Microsoft Azure Data Lake and Cosmos DB power this innovation at massive scale. Customers can store many hundreds of millions of profiles within a single environment, making the CDP an exceptional powerhouse for end-to-end enterprise marketing stacks.

2. Product innovation

Microsoft customers benefit from ongoing, customer-oriented innovation that helps them meet their evolving and emerging business requirements. We are focused on differentiation and solving customer pain points with both customer-requested enhancements and cutting-edge features. We are also supporting our customers in expanding and accelerating their discovery of insights with out-of-box machine learning templates, as well as support for custom AI/ML models with Microsoft Azure Synapse Analytics. Microsoft customers are benefiting from a limitless analytics solution that significantly reduces their project development time while delivering breakthrough price performance.

3. Market leadership

Microsoft and our extensive ecosystem of more than 7,500 worldwide partners (2022) help customers solve important challenges. Our partners include ISVs building solutions on top of or connecting their solutions to Dynamics 365 and systems integrators providing customizations and integrations for customers’ unique environments. Together, we’re helping customers across industries and around the world grow their businesses by taking full control of their customer data.

We’re delighted to share the news of this recognition of Microsoft as a Market Champion. We agree with KuppingerCole’s assessment that the [Microsoft] “roadmap is ambitious, and the product vision is clear, and are closely linked to overarching activities in the Microsoft ecosystem.” In this unprecedented time of radically shifting consumer behaviors, delivering quality, highly tailored customer experiences is a path to competitive differentiation that can help lead to customer loyalty. Customer Insights is your key to engaging your customers like your business depends on it.

Learn more

To learn more about how Microsoft compared to the other technology providers included in this Leadership Compass, please access the KuppingerCole Leadership Compass, Customer Data Platforms for a complimentary copy of the report.

The post Microsoft is a Market Champion in the KuppingerCole Analysts Leadership Compass, Customer Data Platforms appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Expanding the workforce through greater inclusion

This article is contributed. See the original author and article here.

Kim Akers – CVP, MCAPS Enablement and Operations

Over the past few years, across every industry, we have seen organizations quickly adjust to challenges and pursue new business opportunities as the pandemic reshaped our world. From health care to manufacturing, retail, and beyond, organizations have had to focus not only on building their own digital capability but also on hiring talent with proven potential.

As more and more organizations seek to fill the nearly 150 million jobs being created by this transformation, it has become acutely clear: talent is everywhere, but opportunity is not. In fact, COVID-19 put a giant spotlight on just how many people have been overlooked for far too long—people of color, women, people with less education. People with disabilities.

It’s never been more important to ensure everyone can prove they have the tech skills to take on that new assignment, get that new job, or achieve the impossible.

With this in mind, and in honor of National Disability Employment Awareness Month, I’m excited to share more detail on how we’re helping to reshape the certification industry to be more inclusive for people with disabilities.

Understanding disabilities

For decades, the word “disability” has brought to mind mobility, vision, or hearing issues. Yet 70 percent of disabilities have no visible indicators. Examples of non-apparent disabilities include:

  • Learning: Includes difficulty focusing, concentrating, or understanding

  • Mental health: Includes anxiety, bipolar disorder, PTSD, and/or depression

  • Neurodiversity: Includes dyslexia, seizures, autism, or other cognitive differences

I am part of that 70 percent, and my experience with dyslexia and dysgraphia helps me have empathy for the variety of challenges faced by the disability community—especially knowing that a seen or unseen disability can have a tremendous impact on someone’s career and opportunities, particularly in an industry with years of tradition stacked against them.

Take, for instance, Kevin’s story.

Kevin is a sales director whose job required him to complete a certification. He was diagnosed with ADHD as a child but thought it had subsided as he grew up. The symptoms re-emerged in adulthood, impacting his life at work.

For example, Kevin spent more than 500 hours studying and preparing for a certification test. He didn’t know how to get the accommodations required for success; the process was too complex. He failed the exam several times. This had a cascade effect: not passing meant he missed his mandatory training goal, resulting in reduced compensation and contributing to increased anxiety at work and at home.

“The more we can help people to learn on their terms, the more we can help people take the time that they need and to have the resources they need to succeed,” Kevin says, noting that he passed the exam after receiving proper accommodations.

It is painful to read stories like Kevin’s. No one should be left behind because they need additional accommodation while taking a test or anything else. Yet that’s what happens every day.

Removing barriers to success, trying new approaches

I believe it’s time to shake things up.

We have been listening, researching, and learning how to be more inclusive—this includes reviewing and updating our certification exam accommodations. Just three months ago, we rolled out the first of many exam improvements: testers no longer have to ask before moving around or looking away from the computer during a test. They must simply stay in view of the camera. That will make a big difference for many test takers.

We also know seeking an accommodation has historically been complicated and may even require sharing sensitive, personal information. So we’ve also made changes like:

  • Making the accommodation application process simpler

  • Removing the documentation requirement for most requests; and when it is required, expanding the list of acceptable documentation and reducing the burden placed on applicants

  • Ensuring proctors understand how to provide accommodations

  • Establishing a Microsoft Certification Accommodations HyperCare support team to support learners who need extra help (msftexamaccom@microsoft.com)

For a complete list of accommodations requirements, please visit: Accommodations and associated documentation requirements.

Change begins within

Certifications are a proven way for employees and job candidates to stand out in an increasingly competitive industry. I’m thrilled to see the steps taken to ensure our Microsoft Certification program is accessible to all.

After all, living with a disability shouldn’t hinder opportunity. Simply put, organizations must go beyond compliance when it comes to accommodations—both offering them and ensuring proctors are properly trained. I’m proud that Microsoft is leading the way.

Stay tuned: more changes are in the works. I can’t wait to share them with you.

Related announcements

Improvements to the Exam Accommodation Process

Microsoft is a Market Champion in the KuppingerCole Analysts Leadership Compass, Customer Data Platforms

Your guide to Dynamics 365 at Microsoft Ignite 2022

This article is contributed. See the original author and article here.

Microsoft Ignite returns live next week—a digital and in-person event in Seattle, Washington, on Wednesday, October 12, and Thursday, October 13. Register today for two content-packed days where you’ll explore the future of Microsoft Dynamics 365 and Microsoft Power Platform and join other technologists in immersive learning experiences, product demos, breakout sessions, and expert meet-ups.

This year, the Dynamics 365 and Power Platform teams will showcase new and upcoming capabilities as well as demonstrate how your organization can make the most of AI and automation to streamline business processes, enhance collaboration, and improve customer and employee experiences.

Register now for the in-person or digital event. The free digital event will be the foundation of Microsoft Ignite this year, offering hours of sessions and interactive learning, Q&As with experts, live discussions, roundtables, and much more, all streaming live and on-demand, at no cost.  

Dynamics 365 at Microsoft Ignite: Essential sessions and activities

To help you plan your experience from the variety of sessions and activities, we’ve compiled some essential presentations, sessions, and viewing tips. Click the linked titles to learn more and add each event to your session scheduler.

Ignite opening keynote

Wednesday, October 12 | 9:00 AM–9:50 AM Pacific Time

Join the opening keynote, hosted by Microsoft CEO Satya Nadella, for an overview of innovations that will shape the future of business.

Core Theme Session

Wednesday, October 12 | 11:00 AM–11:30 AM Pacific Time
Deliver efficiency with automation and AI across your business

Learn how organizations across industries are applying AI, automation, and mixed reality to streamline business processes, enhance collaboration, and improve customer and employee experiences. You’ll get a first-hand look at how products like Microsoft Viva Sales, Microsoft Digital Contact Center Platform, and Microsoft Power Platform rapidly enable AI and automation with modern capabilities.

Into Focus

Wednesday, October 12 | 3:00 PM–3:40 PM Pacific Time
Business Applications Into Focus: Biz Apps 2022 Release Wave 2 Launch

Don’t miss this first look at the new Dynamics 365 and Power Platform innovations coming to market. We’ll debut new technologies not previously announced, as well as give you a first look at innovations in release wave 2—features that are planned for release between October 2022 and March 2023. We’ll also spotlight organizations that will use these new technologies to drive better operational outcomes and customer success.

Dynamics 365 breakout sessions

After the keynote, learn what’s new and on the horizon for Dynamics 365 in these featured sessions:

Wednesday, October 12 | 11:05 AM–11:30 AM Pacific Time
Re-energize your workforce in the office, at home, and everywhere in between

In today’s shifting macroeconomic climate, technology can help organizations in every industry overcome challenges and emerge stronger. From enabling hybrid work to bringing business processes into the flow of work, learn how Microsoft 365 helps organizations deliver on their digital imperative, so they can “do more with less.”

Wednesday, October 12 | 2:00 PM–2:30 PM Pacific Time
Jumpstart your physical operations transformation with technologies built for the industrial metaverse

Explore what the industrial metaverse means today and where the technology is headed. From autonomous automation to connected field service and mixed reality to digitization of connected environments, we’ll showcase a maturity model that you can use to guide your implementation over time while solving business challenges each step of the way. We’ll also share how innovative customers are using this technology now to secure a competitive edge and build for the future.

Wednesday, October 12 | 12:00 PM–12:40 PM Pacific Time
Microsoft Viva: Latest innovations and roadmap for the new digital employee experience

Hybrid work presents new challenges for engaging, motivating, and growing a workforce. IT leaders and human resources (HR) leaders have an opportunity to partner on a more advanced digital experience to support various ways of working. We’ll explore how Viva puts people at the center, connecting them to company information, communications, workplace insights, knowledge, and learning. Product leadership will share the latest innovations from Viva to prepare your organization for the new digital employee experience, today. 

Wednesday, October 12 | 12:00 PM–12:35 PM Pacific Time
Unlock new customer experiences with NLP at scale

Organizations around the world use Microsoft’s natural language processing (NLP) capabilities to simplify tasks and support human connection, from helping employees better understand customer needs to helping customers find information more quickly. Learn why technology leaders are doubling down on NLP, and get a deeper understanding of NLP capabilities available across Dynamics 365 and Microsoft Azure Cognitive Services that can help transform customer and employee engagement at scale.

Wednesday, October 12 | 2:00 PM–2:30 PM Pacific Time
Create rich connections and customer experiences with Microsoft Teams Phone and contact center capabilities

Staying connected with colleagues, partners, and customers is more important than ever. Join us to learn how Teams Phone and contact center capabilities for Teams can create richer communications while helping organizations turn customer service into a team sport. We’ll share the latest updates on our mobility innovation and discuss how organizations are using Teams Phone enterprise-grade calling.

Attend live or watch on-demand

In addition to the live streams above, each segment will be rebroadcast throughout the event. The key segments are open to everyone, but we encourage you to register in advance to unlock the full Microsoft Ignite experience—from digital breakout sessions with live Q&As to conversations with Microsoft experts and your global community.

More to explore

Microsoft Ignite will include live segments and Q&As, available across time zones. Check out all of the events and activities hosted by our team of experts:

  • Ask the Experts: An opportunity to ask questions at sessions with experts in cloud, desktop, mobile, and web development for specific guidance on your project or interests. 
  • Table Topics: A live discussion with the community on camera and in chat. Get inspired by community experts, learn best practices, and share helpful resources with other attendees. 
  • Local Connections: An opportunity to engage with attendees local to you, no matter where you are in the world. Dedicated time to help find developers, Microsoft experts, and partners with similar interests in your area. 
  • Learn Live: Guided online content with subject matter experts to direct you through Microsoft Learn modules that you can complete on your own at any time. 
  • Product Roundtables: Two-way discussions direct with Microsoft engineering. 
  • Cloud Skills Challenge: A collection of interactive, online learning modules to complete for a chance to earn a free certification exam. 
  • One-to-One Consultations: A unique opportunity to connect with an expert during the event to get the technical answers you need. These 45-minute sessions provide the event’s only one-to-one setting.

Get the most out of your Microsoft Ignite experience

Be sure to follow Microsoft Ignite on LinkedIn and Twitter to stay up to date and connected with the community, and register for Microsoft Ignite today.

The post Your guide to Dynamics 365 at Microsoft Ignite 2022 appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

FBI and CISA Publish a PSA on Malicious Cyber Activity Against Election Infrastructure

This article is contributed. See the original author and article here.

The Federal Bureau of Investigation (FBI) and CISA have published a joint public service announcement that:

  • Assesses that malicious cyber activity aiming to compromise election infrastructure is unlikely to result in large-scale disruptions or prevent voting.
  • Confirms “the FBI and CISA have no reporting to suggest cyber activity has ever prevented a registered voter from casting a ballot, compromised the integrity of any ballots cast, or affected the accuracy of voter registration information.”

The PSA also describes the extensive safeguards in place to protect election infrastructure and includes recommendations for protecting against election-related cyber threats.

Intrastat reporting redesigned in Dynamics 365 Business Central

This article is contributed. See the original author and article here.

Intrastat is the system the European Union (EU) uses to collect statistics on the trade in goods among EU member nations. Microsoft Dynamics 365 Business Central customers that operate in the EU can use the Intrastat Report to meet their monthly reporting requirements. With 2022 release wave 2 of Business Central, we’re introducing a redesigned Intrastat reporting experience with extended features. Here’s what you need to know.

Test in a sandbox first

The new experience is disabled by default. You’ll need to enable it on the Feature Management page in Business Central. We suggest you enable and test it in a sandbox environment with a copy of your production data first. Once you activate the new user experience in your production environment, you can’t go back to the old Intrastat functionality.

New Intrastat reporting experience

The old Intrastat Report was based on journals. In the new experience, you’ll see a list of Intrastat Report entries, and each new Intrastat Report opens as a document page.

Additionally, Intrastat reporting is no longer part of the base application, but is now an extension.

Enhanced functionality

We’ve added features to make your Intrastat reporting smoother and more easily customized to meet your business needs.

  • Data Exchange Framework for reporting. Almost all EU countries require a file for reporting. Previously we created a hardcoded file. Now we use the Data Exchange Framework, and you can easily create timestamped files for export. We include prepared formats for countries for which we have localizations. You can change the out-of-box report or make your own, especially if we don’t have a localization for your country.

  • Configurable Checklist Report. After you fill in the Intrastat Report, you can run a configurable Checklist Report to make sure the information you entered is correct and that all fields you marked as mandatory have been filled in.
  • Fixed asset reporting. You can also include fixed assets in your report.
  • Weight and supplementary unit management. For both goods and fixed assets, you can easily configure the weight and supplementary unit of measure and, if needed, recalculate weights and supplementary units without changing any other values.
  • Manual corrections. You can manually correct your lines, and edited lines are indicated.
  • More report configuration options. The new Intrastat Report configuration has more options, and you can also adjust your reporting in the Data Exchange Definition settings. You can set file export preferences, default values, which country will be the base for reporting, how to treat VAT numbers, and more.
  • Service Declaration is coming soon as a separate app. Service Declaration, or Intrastat for Services, will be available in November 2022 as a separate app. Business Central will report services that come from the purchase and sale of items configured as services, resources, and item charges.

Technical information

  • Now modularized and open source. With the new Intrastat application, the Business Central development team continues to apply the strategy of modularizing the common application layer (base application). At the same time, we’re providing more capabilities for partners to contribute by making both the Intrastat app and report formats open source.

  • Developed for extensibility. The central part of the design is the Intrastat core app. The app allows a partner to define business logic in two ways:
    • Use the app logic with the report format exposed through the Data Exchange Framework to generate the report
    • For a heavily customized solution, define logic through the standard report configuration on the VAT Report Configuration page (suggest lines, content, response handler, and validation object)

      To support extensibility, the app has 47 integration events. If you need more, submit a request through the standard process.

  • Customizable formats. After receiving many requests to allow easier report format modifications, we decided to expose the format through the Data Exchange Framework to support text and XML files. Microsoft will continue to provide changes in accordance with local market law requirements, but users may customize the format and keep their own version of the format definition. The Intrastat core application will have a common format defined in DataExchDefMap.xml in the AppResources folder.
  • Country formats. The Intrastat core extension supports all countries and follows the existing Intrastat logic. For several countries, the required Intrastat format is significantly different. Microsoft is releasing country-specific apps, which will be developed on top of the Intrastat core app. Both Intrastat core and country apps will be preinstalled by default but must be enabled in Feature Management. Developers can choose to develop their solution on top of a common application layer, the Intrastat common application, or an Intrastat country app.
  • Open-source app. Our goal is to open the source code completely, both the app code and report formats. The formats will be exposed as Data Exchange Framework schema and shared through GitHub. Like other first-party apps, the app code will be available on GitHub at ALAppExtensions/Apps/W1 at main · microsoft/ALAppExtensions (github.com).

When will the new Intrastat reporting experience be available?

The new Intrastat experience is available starting in October 2022 with 2022 release wave 2 for all countries using the W1 version, along with country apps for Austria, Spain, and Sweden. Country apps for the remaining Microsoft localizations will be available in November 2022. Service Declaration will be available as an additional app in the same release, starting in November.

For EU countries without Microsoft localization

What if you’re in an EU country where Microsoft doesn’t provide localization? In that case, partners can start by adding country-based features on top of the Intrastat core app as soon as the W1 version is available.

If you encounter obstacles in this process, please contact us. Our intention was not to create a new Intrastat only for Microsoft localizations, but to create a solution that our partners can easily extend to meet local requirements.

Action needed

You should transition to the new Intrastat app soon. The existing Intrastat functionality will be supported until 2023 release wave 2 to provide enough time for a smooth transition. However, we encourage Business Central customers to move to the new Intrastat app before then.

Important notes:

  • Existing Intrastat functionality is being deprecated. The Intrastat objects in the base application (27 in total) will be marked as obsolete and will remain available until 2023 release wave 2.
  • The transition process is one-directional. Once you move to the new Intrastat app, users will not be able to return to the old experience.
  • No data will be transferred in the transition.
  • There is no overlap between the existing and new Intrastat objects, so there is no risk of data corruption.
  • Users who try to access existing Intrastat pages will be redirected to the new experience.
  • To modify the assisted setup, subscribe to the following event:
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Feature Management Facade", 'OnAfterFeatureEnableConfirmed', '', true, true)]

Learn more

For more information about the new Intrastat reporting experience, read the documentation:

Work with Intrastat Reporting – Business Central | Microsoft Learn

Set Up Intrastat Reporting – Business Central | Microsoft Learn

Learn about more globalization features in Dynamics 365: Reduce complexity across global operations with Dynamics 365 – Microsoft Dynamics 365 Blog

New to Dynamics 365 Business Central? Try it for free.

The post Intrastat reporting redesigned in Dynamics 365 Business Central appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

App Service Hybrid connections: is it fast enough?

This article is contributed. See the original author and article here.

App Service Hybrid connections offer a quick and uncomplicated way to reach your on-premises services in scenarios where other networking solutions like VPN or ExpressRoute aren’t available. Normally, you don’t even need to open any firewall ports in your on-premises environment, because the feature only requires an outbound connection to Azure over port 443. Behind the scenes, it is a TCP relay proxy over WebSockets. It only works for services that use TCP, not UDP.

Therefore, it might be a good fit if you’re planning to migrate an application to Azure App Service but the app has dependencies on on-premises databases or APIs, and your networking team isn’t yet ready to set up a VPN or ExpressRoute connection between these environments. The migration work can be unblocked by using Hybrid connections to reach these external dependencies, with no code changes in your app.

However, what should you expect in terms of performance? Beyond the raw network latency of an App Service reaching back to an on-premises service, will the Hybrid connection itself introduce extra latency on top of the network? And what about the different scenarios:

  • Reaching on-premises HTTP APIs;

  • Reaching on-premises databases;

  • Downloading large files from on-premises over HTTP

In this article we will run benchmarks on all the scenarios above and compare them with and without a Hybrid connection. How to configure the connection is out of scope, because that tutorial is very well described here.

The test setup

An App Service Hybrid connection relies on a service called Azure Relay to work (Azure Relay is built on the Azure Service Bus platform). This is how the architecture looks:

[Architecture diagram: App Service connecting through Azure Relay to the Hybrid Connection Manager on-premises]

Now, let me explain how the test setup maps to the diagram above:

  • App Service: a small PremiumV2 .NET 6 app running in Brazil South;

  • Azure Relay: if you don’t already have an Azure Relay, the App Service Hybrid connection setup will ask you to create one. Here, I created one in the Brazil South region;

  • On premises: to simulate an on-premises environment, I have a physical computer with fast, modern hardware (Ryzen 5 5600H, 16 GB RAM, 512 GB SSD) connected to a stable 600 Mbps fiber connection. This system has an average latency of 12 ms to Azure and vice versa. It also runs a SQL Server Express 2019 database, a .NET 6 API that simulates on-premises services for these tests, and the Hybrid Connection Manager (HCM) that this setup requires.

Now, we want to compare the Hybrid connection overhead with the raw network connection. So, for each test that follows, we’ll configure the App Service to hit the services via the Hybrid connection endpoints, and then run the same test against the public IP of the “on-premises” server, skipping the relay completely.

Here’s the configuration in the Portal:

[Screenshot: Hybrid connection endpoint configuration in the Azure Portal]

Scenario 1: HTTP requests

Let’s assume you have on-premises HTTP services to reach from an App Service via a Hybrid connection. In the configuration picture above, that endpoint is named “andre-api” and points to the on-premises DNS name “testerelay” on port 5001. That’s the .NET API running on the on-premises computer. This API has a REST endpoint that returns random strings of around 8 KB.




  • Single request: App Service calls the on-premises API once

  • Sequentially: App Service calls the on-premises API 50 times in a row. When the previous request finishes, the next goes ahead and so on… until we reach 50 requests;

  • Parallel: App Service calls the on-premises API 50 times at the same time. This is accomplished by making use of .NET tasks.

The intention here is to verify how well the relay handles a typical real-world scenario where many parallel requests arrive at a given time. All requests use the HTTP/2 protocol, as in the sketch below.
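
As a rough illustration, here is a hedged sketch (in C#) of what such a harness could look like. The testerelay:5001 endpoint mirrors the configuration above; the /random route and everything else are illustrative assumptions, not the author’s actual benchmark code.

    using System.Diagnostics;
    using System.Net;

    // The hostname resolves through the Hybrid connection endpoint ("andre-api").
    var client = new HttpClient
    {
        BaseAddress = new Uri("https://testerelay:5001"),
        DefaultRequestVersion = HttpVersion.Version20,                  // force HTTP/2 (see note below)
        DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrHigher
    };

    // Single request.
    var sw = Stopwatch.StartNew();
    await client.GetStringAsync("/random");                             // hypothetical ~8 KB endpoint
    Console.WriteLine($"Single: {sw.ElapsedMilliseconds} ms");

    // Sequential: 50 requests, each waiting for the previous one.
    sw.Restart();
    for (var i = 0; i < 50; i++)
        await client.GetStringAsync("/random");
    Console.WriteLine($"Sequential avg: {sw.ElapsedMilliseconds / 50.0} ms");

    // Parallel: 50 requests at the same time, using .NET tasks.
    sw.Restart();
    await Task.WhenAll(Enumerable.Range(0, 50).Select(_ => client.GetStringAsync("/random")));
    Console.WriteLine($"Parallel total: {sw.ElapsedMilliseconds} ms");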


Check out the results table:

                   Average response time per HTTP request
                   Direct     Hybrid connection     Difference
Single request     13ms       24ms                  +84%
Sequential (50)    13ms       34ms                  +161%
Parallel (50)      50ms       60ms                  +20%


Important note

Forcing the App Service’s HttpClient to use HTTP/2 made a huge positive difference in these results; HTTP/1.1 was much worse, especially in the parallel requests test.

Conclusion for HTTP tests

Looking at the percentage differences, the Hybrid connection seems to add a huge overhead, but in absolute numbers it doesn’t. In the most realistic test of this setup – the parallel HTTP simulation – it adds only 10ms over a direct connection, which is negligible for most applications. Also keep in mind that we are comparing the Hybrid connection to a direct connection back to on-premises; in reality, a VPN or other appliance would likely add some extra delay there too.

Scenario 2: database connections

Another very common use case is fetching data from an on-premises database that couldn’t be migrated to Azure at the same time as the application. Here, the App Service .NET API queries the on-premises SQL Server, first through the relay connection and then directly. Each query returns around 8 KB of data from the database. Like the HTTP tests, there are three scenarios (a hedged sketch follows the list):

  • Single query: App Service queries the database once

  • Sequentially: App Service queries the database 50 times in a row. When the previous query finishes, the next goes ahead and so on… until we reach 50 queries;

  • Parallel: App Service queries the on-premises database 50 times at the same time. This is accomplished by making use of .NET tasks.
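
The parallel variant could look like the following hedged C# sketch, assuming a hypothetical Hybrid connection endpoint that maps testerelay:1433 to the on-premises SQL Server; the connection string, table, and credentials are illustrative, not the author’s code.

    using Microsoft.Data.SqlClient; // NuGet package: Microsoft.Data.SqlClient

    // Host name resolves through the (hypothetical) Hybrid connection endpoint.
    const string connStr =
        "Server=testerelay,1433;Database=TestDb;User Id=bench;Password=<secret>;TrustServerCertificate=True";

    async Task QueryOnceAsync()
    {
        await using var conn = new SqlConnection(connStr);
        await conn.OpenAsync();                                  // the TCP session rides the relay
        await using var cmd = new SqlCommand("SELECT TOP (100) * FROM dbo.Items", conn);
        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync()) { /* ~8 KB of rows per query */ }
    }

    // Parallel scenario: 50 queries at the same time via .NET tasks.
    await Task.WhenAll(Enumerable.Range(0, 50).Select(_ => QueryOnceAsync()));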

                   Average response time per SQL query
                   Direct     Hybrid connection     Difference
Single query       13ms       13ms                  0%
Sequential (50)    13ms       27ms                  +107%
Parallel (50)      13ms       30ms                  +130%

Conclusion for database tests

Compared to the HTTP tests, the database queries show less overhead because the connection is plain TCP. The direct connection added no measurable overhead even with 50 parallel queries; the Hybrid connection added some, but not significantly – again, looking at absolute numbers rather than percentages.

Scenario 3: large file downloads

Now let’s benchmark something less usual: using the Hybrid connection to stream a 1 GB file (a Linux ISO) from an on-premises REST API over HTTP. Here I expected more overhead, because the underlying WebSockets protocol that Azure Relay uses is not really meant for these cases. But anyway, here are the results:


                REST API HTTP download speed
Direct          Hybrid connection          Difference
27 MB/s         20 MB/s                    −35%

Conclusion for file download test

I was expecting a much worse result, but the Hybrid connection surprised on the upside here. I still wouldn’t recommend it for streaming large files, but this test shows it is possible if really needed. (A minimal streaming sketch follows.)
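
For reference, here is a hedged streaming sketch, again assuming the hypothetical relay endpoint and route; HttpCompletionOption.ResponseHeadersRead keeps the 1 GB body from being buffered in memory.

    // Hedged sketch: stream a large file through the relay without buffering it all.
    var client = new HttpClient { Timeout = Timeout.InfiniteTimeSpan };

    using var response = await client.GetAsync(
        "https://testerelay:5001/iso", HttpCompletionOption.ResponseHeadersRead);
    response.EnsureSuccessStatusCode();

    await using var body = await response.Content.ReadAsStreamAsync();
    await using var file = File.Create("image.iso");
    await body.CopyToAsync(file); // divide bytes written by elapsed time to get MB/s
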
Overall conclusion

These benchmarks did not cover every possible Hybrid connection scenario, but they certainly give us an idea of what to expect. Generally speaking, it is a solid alternative, and I would recommend it for scenarios where a VPN or ExpressRoute connection is not possible. The biggest advantage for sure is ease of use: setting up your own environment to run similar tests will take just a couple of hours, tops.

If you’d like me to run additional benchmarks and scenarios, please let me know in the comments!

Impacket and Exfiltration Tool Used to Steal Sensitive Information from Defense Industrial Base Organization

This article is contributed. See the original author and article here.

Actions to Help Protect Against Russian State-Sponsored Malicious Cyber Activity:

• Enforce multifactor authentication (MFA) on all user accounts.
• Implement network segmentation to separate network segments based on role and functionality.
• Update software, including operating systems, applications, and firmware, on network assets.
• Audit account usage.

From November 2021 through January 2022, the Cybersecurity and Infrastructure Security Agency (CISA) responded to advanced persistent threat (APT) activity on a Defense Industrial Base (DIB) Sector organization’s enterprise network. During incident response activities, CISA uncovered that likely multiple APT groups compromised the organization’s network, and some APT actors had long-term access to the environment. APT actors used an open-source toolkit called Impacket to gain their foothold within the environment and further compromise the network, and also used a custom data exfiltration tool, CovalentStealer, to steal the victim’s sensitive data.

This joint Cybersecurity Advisory (CSA) provides APT actors tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs) identified during the incident response activities by CISA and a third-party incident response organization. The CSA includes detection and mitigation actions to help organizations detect and prevent related APT activity. CISA, the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA) recommend DIB sector and other critical infrastructure organizations implement the mitigations in this CSA to ensure they are managing and reducing the impact of cyber threats to their networks.

Download the PDF version of this report: pdf, 692 KB

For a downloadable copy of IOCs, see the following files:

Threat Actor Activity

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 11. See the MITRE ATT&CK Tactics and Techniques section for a table of the APT cyber activity mapped to MITRE ATT&CK for Enterprise framework.

From November 2021 through January 2022, CISA conducted an incident response engagement on a DIB Sector organization’s enterprise network. The victim organization also engaged a third-party incident response organization for assistance. During incident response activities, CISA and the trusted third party identified APT activity on the victim’s network.

Some APT actors gained initial access to the organization’s Microsoft Exchange Server as early as mid-January 2021. The initial access vector is unknown. Based on log analysis, the actors gathered information about the Exchange environment and performed mailbox searches within a four-hour period after gaining access. In the same period, these actors used a compromised administrator account (“Admin 1”) to access the EWS Application Programming Interface (API). In early February 2021, the actors returned to the network and used Admin 1 to access the EWS API again. In both instances, the actors used a virtual private network (VPN).

Four days later, the APT actors used Windows Command Shell over a three-day period to interact with the victim’s network. The actors used Command Shell to learn about the organization’s environment and to collect sensitive data, including sensitive contract-related information from shared drives, for eventual exfiltration. The actors manually collected files using the command-line tool WinRAR. These files were split into approximately 3 MB chunks located on the Microsoft Exchange server within the CU2\he\debug directory. See Appendix: Windows Command Shell Activity for additional information, including specific commands used.

During the same period, APT actors implanted Impacket, a Python toolkit for programmatically constructing and manipulating network protocols, on another system. The actors used Impacket to attempt to move laterally to another system.

In early March 2021, APT actors exploited CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065 to install 17 China Chopper webshells on the Exchange Server. Later in March, APT actors installed HyperBro on the Exchange Server and two other systems. For more information on the HyperBro and webshell samples, see CISA MAR-10365227-2 and -3.

In April 2021, APT actors used Impacket for network exploitation activities. See the Use of Impacket section for additional information. From late July through mid-October 2021, APT actors employed a custom exfiltration tool, CovalentStealer, to exfiltrate the remaining sensitive files. See the Use of Custom Exfiltration Tool: CovalentStealer section for additional information.

APT actors maintained access through mid-January 2022, likely by relying on legitimate credentials.

Use of Impacket

CISA discovered activity indicating the use of two Impacket tools: wmiexec.py and smbexec.py. These tools use Windows Management Instrumentation (WMI) and Server Message Block (SMB) protocol, respectively, for creating a semi-interactive shell with the target device. Through the Command Shell, an Impacket user with credentials can run commands on the remote device using the Windows management protocols required to support an enterprise network.

The APT cyber actors used existing, compromised credentials with Impacket to access a higher privileged service account used by the organization’s multifunctional devices. The threat actors first used the service account to remotely access the organization’s Microsoft Exchange server via Outlook Web Access (OWA) from multiple external IP addresses; shortly afterwards, the actors assigned the Application Impersonation role to the service account by running the following PowerShell command for managing Exchange:

powershell add-pssnapin *exchange*;New-ManagementRoleAssignment -Name:"Journaling-Logs" -Role:ApplicationImpersonation -User:<account>

This command gave the service account the ability to access other users’ mailboxes.

The APT cyber actors used virtual private network (VPN) and virtual private server (VPS) providers, M247 and SurfShark, as part of their techniques to remotely access the Microsoft Exchange server. Use of these hosting providers, which conceals interaction with victim networks, is common for these threat actors. According to CISA’s analysis of the victim’s Microsoft Exchange server Internet Information Services (IIS) logs, the actors used the account of a former employee to access the EWS. EWS enables access to mailbox items such as email messages, meetings, and contacts. The source IP address for these connections is mostly from the VPS hosting provider M247.

Use of Custom Exfiltration Tool: CovalentStealer

The threat actors employed a custom exfiltration tool, CovalentStealer, to exfiltrate sensitive files.

CovalentStealer is designed to identify file shares on a system, categorize the files, and upload the files to a remote server. CovalentStealer includes two configurations that specifically target the victim’s documents using predetermined file paths and user credentials. CovalentStealer stores the collected files on a Microsoft OneDrive cloud folder, includes a configuration file to specify the types of files to collect at specified times, and uses a 256-bit AES key for encryption. See CISA MAR-10365227-1 for additional technical details, including IOCs and detection signatures.

MITRE ATT&CK Tactics and Techniques

MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. CISA uses the ATT&CK Framework as a foundation for the development of specific threat models and methodologies. Table 1 lists the ATT&CK techniques employed by the APT actors.

Table 1: Identified APT Enterprise ATT&CK Tactics and Techniques

Initial Access

  • Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. In this case, they exploited an organization’s multifunctional device domain account used to access the organization’s Microsoft Exchange server via OWA.

Execution

  • Windows Management Instrumentation (T1047): Actors used Impacket tools wmiexec.py and smbexec.py to leverage Windows Management Instrumentation and execute malicious commands.

  • Command and Scripting Interpreter (T1059): Actors abused command and script interpreters to execute commands.

  • Command and Scripting Interpreter: PowerShell (T1059.001): Actors abused PowerShell commands and scripts to map shared drives by specifying a path to one location and retrieving the items from another. See Appendix: Windows Command Shell Activity for additional information.

  • Command and Scripting Interpreter: Windows Command Shell (T1059.003): Actors abused the Windows Command Shell to learn about the organization’s environment and to collect sensitive data. See Appendix: Windows Command Shell Activity for additional information, including specific commands used. The actors used Impacket tools, which enable a user with credentials to run commands on the remote device through the Command Shell.

  • Command and Scripting Interpreter: Python (T1059.006): The actors used two Impacket tools: wmiexec.py and smbexec.py.

  • Shared Modules (T1129): Actors executed malicious payloads via loading shared modules. The Windows module loader can be instructed to load DLLs from arbitrary local paths and arbitrary Universal Naming Convention (UNC) network paths.

  • System Services (T1569): Actors abused system services to execute commands or programs on the victim’s network.

Persistence

  • Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion.

  • Create or Modify System Process (T1543): Actors were observed creating or modifying system processes.

Privilege Escalation

  • Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. In this case, they exploited an organization’s multifunctional device domain account used to access the organization’s Microsoft Exchange server via OWA.

Defense Evasion

  • Masquerading: Match Legitimate Name or Location (T1036.005): Actors masqueraded the archive utility WinRAR.exe by renaming it VMware.exe to evade defenses and observation.

  • Indicator Removal on Host (T1070): Actors deleted or modified artifacts generated on a host system to remove evidence of their presence or to hinder defenses.

  • Indicator Removal on Host: File Deletion (T1070.004): Actors used the del.exe command with the /f parameter to force the deletion of read-only files matching the *.rar and tempg* wildcards.

  • Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. In this case, they exploited an organization’s multifunctional device domain account used to access the organization’s Microsoft Exchange server via OWA.

  • Virtualization/Sandbox Evasion: System Checks (T1497.001): Actors used Windows command shell commands to detect and avoid virtualization and analysis environments. See Appendix: Windows Command Shell Activity for additional information.

  • Impair Defenses: Disable or Modify Tools (T1562.001): Actors used the taskkill command, probably to disable security features. CISA was unable to determine which application was associated with the Process ID.

  • Hijack Execution Flow (T1574): Actors were observed hijacking execution flow.

Discovery

  • System Network Configuration Discovery (T1016): Actors used the systeminfo command to look for details about the network configurations and settings and to determine if the system was a VMware virtual machine. The threat actor used route print to display the entries in the local IP routing table.

  • System Network Configuration Discovery: Internet Connection Discovery (T1016.001): Actors checked for internet connectivity on compromised systems. This may be performed during automated discovery and can be accomplished in numerous ways.

  • System Owner/User Discovery (T1033): Actors attempted to identify the primary user, the currently logged-in user, the set of users that commonly use a system, or whether a user is actively using the system.

  • System Network Connections Discovery (T1049): Actors used the netstat command to display TCP connections, prevent hostname determination of foreign IP addresses, and specify the protocol for TCP.

  • Process Discovery (T1057): Actors used the tasklist command to get information about running processes on a system and to determine if the system was a VMware virtual machine. The actors used tasklist.exe and find.exe to display a list of applications and services with their PIDs for all tasks running on the computer matching the string “powers.”

  • System Information Discovery (T1082): Actors used the ipconfig command to get detailed information about the operating system and hardware and to determine if the system was a VMware virtual machine.

  • File and Directory Discovery (T1083): Actors enumerated files and directories or may have searched specific locations of a host or network share for certain information within a file system.

  • Virtualization/Sandbox Evasion: System Checks (T1497.001): Actors used Windows command shell commands to detect and avoid virtualization and analysis environments.

Lateral Movement

  • Remote Services: SMB/Windows Admin Shares (T1021.002): Actors used Valid Accounts to interact with a remote network share using Server Message Block (SMB) and then perform actions as the logged-on user.

Collection

  • Archive Collected Data: Archive via Utility (T1560.001): Actors used PowerShell commands and WinRAR to compress and/or encrypt collected data prior to exfiltration.

  • Data from Network Shared Drive (T1039): Actors likely used the net share command to display information about shared resources on the local computer and decide which directories to exploit, the PowerShell dir command to map shared drives to a specified path and retrieve items from another, and the ntfsinfo command to search network shares on computers they had compromised to find files of interest. The actors used dir.exe to display a list of a directory’s files and subdirectories matching a certain text string.

  • Data Staged: Remote Data Staging (T1074.002): The actors split collected files into approximately 3 MB chunks located on the Exchange server within the CU2\he\debug directory.

Command and Control

  • Non-Application Layer Protocol (T1095): Actors used a non-application layer protocol for communication between host and Command and Control (C2) server or among infected hosts within a network.

  • Ingress Tool Transfer (T1105): Actors used the certutil command with three switches to test if they could download files from the internet. The actors employed CovalentStealer to exfiltrate the files.

  • Proxy (T1090): Actors are known to use VPN and VPS providers, namely M247 and SurfShark, as part of their techniques to access a network remotely.

Exfiltration

  • Schedule Transfer (T1029): Actors scheduled data exfiltration to be performed only at certain times of day or at certain intervals, blending traffic patterns with normal activity.

  • Exfiltration Over Web Service: Exfiltration to Cloud Storage (T1567.002): The actors’ CovalentStealer tool stores collected files on a Microsoft OneDrive cloud folder.

DETECTION

Given the actors’ demonstrated capability to maintain persistent, long-term access in compromised enterprise environments, CISA, FBI, and NSA encourage organizations to:

  • Monitor logs for connections from unusual VPSs and VPNs. Examine connection logs for access from unexpected ranges, particularly from machines hosted by SurfShark and M247.
  • Monitor for suspicious account use (e.g., inappropriate or unauthorized use of administrator accounts, service accounts, or third-party accounts). To detect use of compromised credentials in combination with a VPS, follow the steps below:
    • Review logs for “impossible logins,” such as logins with changing username, user agent strings, and IP address combinations or logins where IP addresses do not align to the expected user’s geographic location.
    • Search for “impossible travel,” which occurs when a user logs in from multiple IP addresses that are a significant geographic distance apart (i.e., a person could not realistically travel between the geographic locations of the two IP addresses in the time between logins). A minimal sketch of this heuristic appears after this list. Note: This detection opportunity can result in false positives if legitimate users apply VPN solutions before connecting to networks.
    • Search for one IP used across multiple accounts, excluding expected logins.
      • Take note of any M247-associated IP addresses used along with VPN providers (e.g., SurfShark). Look for successful remote logins (e.g., VPN, OWA) from IP addresses registered to M247 or SurfShark.
    • Identify suspicious privileged account use after resetting passwords or applying user account mitigations.
    • Search for unusual activity in typically dormant accounts.
    • Search for unusual user agent strings, such as strings not typically associated with normal user activity, which may indicate bot activity.
  • Review the YARA rules provided in MAR-10365227-1 to assist in determining whether malicious activity has been observed.
  • Monitor for the installation of unauthorized software, including Remote Server Administration Tools (e.g., psexec, RdClient, VNC, and ScreenConnect).
  • Monitor for anomalous and known malicious command-line use. See Appendix: Windows Command Shell Activity for commands used by the actors to interact with the victim’s environment.
  • Monitor for unauthorized changes to user accounts (e.g., creation, permission changes, and enabling a previously disabled account).
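
As a starting point for the “impossible travel” search described above, the following PowerShell sketch flags users whose consecutive logins originate from different countries within a two-hour window. It is a minimal sketch, assuming sign-in events have been exported to a CSV file (signins.csv) with User, TimeUtc, and Country columns; the file name, column names, and two-hour threshold are illustrative and should be adapted to your identity provider’s log format.

    # Minimal sketch: flag "impossible travel" candidates in an exported sign-in log.
    # Assumed input: signins.csv with User, TimeUtc, and Country columns (illustrative).
    $signins = Import-Csv .\signins.csv | Sort-Object User, { [datetime]$_.TimeUtc }

    $previous = @{}
    foreach ($login in $signins) {
        if ($previous.ContainsKey($login.User)) {
            $last  = $previous[$login.User]
            $hours = ([datetime]$login.TimeUtc - [datetime]$last.TimeUtc).TotalHours
            # Two countries within a short window is physically implausible travel.
            if ($login.Country -ne $last.Country -and $hours -lt 2) {
                "{0}: {1} -> {2} in {3:N1} h" -f $login.User, $last.Country, $login.Country, $hours
            }
        }
        $previous[$login.User] = $login
    }

As noted above, legitimate VPN use will surface here too, so treat hits as leads to triage rather than confirmed compromises.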

CONTAINMENT AND REMEDIATION

Organizations affected by active or recently active threat actors in their environment can take the following initial steps to aid in eviction efforts and prevent re-entry:

  • Report the incident. Report the incident to U.S. Government authorities and follow your organization’s incident response plan.
  • Reset all login accounts. Reset all accounts used for authentication since it is possible that the threat actors have additional stolen credentials. Password resets should also include accounts outside of Microsoft Active Directory, such as network infrastructure devices and other non-domain joined devices (e.g., IoT devices).
  • Monitor SIEM logs and build detections. Create signatures based on the threat actor TTPs and use these signatures to monitor security logs for any signs of threat actor re-entry.
  • Enforce MFA on all user accounts. Enforce phishing-resistant MFA on all accounts without exception to the greatest extent possible.
  • Follow Microsoft’s security guidance for Active Directory: Best Practices for Securing Active Directory.
  • Audit accounts and permissions. Audit all accounts to ensure all unused accounts are disabled or removed and active accounts do not have excessive privileges. Monitor SIEM logs for any changes to accounts, such as permission changes or enabling a previously disabled account, as this might indicate a threat actor using these accounts. A query sketch for finding dormant accounts appears after this list.
  • Harden and monitor PowerShell by reviewing guidance in the joint Cybersecurity Information Sheet—Keeping PowerShell: Security Measures to Use and Embrace.
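
To support the account audit recommended above, a minimal sketch using the ActiveDirectory RSAT module (an assumption; adapt to your directory tooling) lists enabled accounts with no logon in the last 90 days so they can be reviewed and disabled:

    # Minimal sketch: find enabled AD accounts dormant for 90+ days.
    # Assumes the ActiveDirectory RSAT module is installed and you have read access.
    Import-Module ActiveDirectory
    $cutoff = (Get-Date).AddDays(-90)
    Get-ADUser -Filter 'Enabled -eq $true' -Properties LastLogonDate |
        Where-Object { $_.LastLogonDate -and $_.LastLogonDate -lt $cutoff } |
        Select-Object SamAccountName, LastLogonDate |
        Sort-Object LastLogonDate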

Mitigation recommendations are usually longer-term efforts that take place before a compromise as part of risk management efforts, or after the threat actors have been evicted from the environment and the immediate response actions are complete. While some may be tailored to the TTPs used by the threat actor, recovery recommendations are largely general best practices and industry standards aimed at bolstering overall cybersecurity posture.

Segment Networks Based on Function

  • Implement network segmentation to separate network segments based on role and functionality. Proper network segmentation significantly reduces the ability of ransomware and other threat actors to move laterally by controlling traffic flows between—and access to—various subnetworks. (See CISA’s Infographic on Layering Network Security Through Segmentation and NSA’s Segment Networks and Deploy Application-Aware Defenses.)
  • Isolate similar systems and implement micro-segmentation with granular access and policy restrictions to modernize cybersecurity and adopt Zero Trust (ZT) principles for both network perimeter and internal devices. Logical and physical segmentation are critical to limiting and preventing lateral movement, privilege escalation, and exfiltration.

Manage Vulnerabilities and Configurations

  • Update software, including operating systems, applications, and firmware, on network assets. Prioritize patching known exploited vulnerabilities and critical and high vulnerabilities that allow for remote code execution or denial-of-service on internet-facing equipment.
  • Implement a configuration change control process that securely creates device configuration backups to detect unauthorized modifications. When a configuration change is needed, document the change, and include the authorization, purpose, and mission justification. Periodically verify that modifications have not been applied by comparing current device configurations with the most recent backups. If suspicious changes are observed, verify the change was authorized.
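
One lightweight way to implement the periodic verification step above is to hash current device configuration exports against the most recent approved backups. The sketch below assumes a simple two-folder layout (approved and current exports with matching file names), which is illustrative rather than a prescribed structure:

    # Minimal sketch: detect unauthorized configuration drift by comparing hashes.
    $backupDir  = 'D:\config-backups\approved'   # known-good, change-controlled exports
    $currentDir = 'D:\config-backups\current'    # freshly pulled device configurations

    foreach ($backup in Get-ChildItem $backupDir -File) {
        $current = Join-Path $currentDir $backup.Name
        if (-not (Test-Path $current)) { "MISSING: $($backup.Name)"; continue }
        $approvedHash = (Get-FileHash $backup.FullName -Algorithm SHA256).Hash
        $currentHash  = (Get-FileHash $current -Algorithm SHA256).Hash
        if ($approvedHash -ne $currentHash) { "MODIFIED (verify authorization): $($backup.Name)" }
    }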

Search for Anomalous Behavior

  • Use cybersecurity visibility and analytics tools to improve detection of anomalous behavior and enable dynamic changes to policy and other response actions. Visibility tools include network monitoring tools and host-based logs and monitoring tools, such as an endpoint detection and response (EDR) tool. EDR tools are particularly useful for detecting lateral connections as they have insight into common and uncommon network connections for each host.
  • Monitor the use of scripting languages (e.g., Python, PowerShell) by authorized and unauthorized users. Anomalous use by either group may be indicative of malicious activity, intentional or otherwise.
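
If PowerShell Script Block Logging is enabled (an assumption; see the PowerShell hardening guidance referenced earlier), a minimal sketch for reviewing script activity is to query event ID 4104 for strings associated with the TTPs in this advisory. The keyword list is illustrative and should be tuned to your environment:

    # Minimal sketch: search PowerShell script block logs (event ID 4104)
    # for strings observed in this advisory's activity.
    Get-WinEvent -FilterHashtable @{
        LogName = 'Microsoft-Windows-PowerShell/Operational'
        Id      = 4104
    } -MaxEvents 5000 |
        Where-Object { $_.Message -match 'export-mft|compress-archive|-ep bypass' } |
        Select-Object TimeCreated, Message |
        Format-List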

Restrict and Secure Use of Remote Admin Tools

  • Limit the number of remote access tools as well as who and what can be accessed using them. Reducing the number of remote admin tools and their allowed access will increase visibility of unauthorized use of these tools.
  • Use encrypted services to protect network communications, and disable all clear text administration services (e.g., Telnet, HTTP, FTP, SNMP 1/2c). This ensures that sensitive information cannot be easily obtained by a threat actor capturing network traffic.
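
As a quick check for lingering clear-text services, the following minimal sketch flags listeners on common clear-text administration ports (FTP 21, Telnet 23, HTTP 80, and SNMP 161/UDP); the port list is illustrative, and results should be validated before disabling anything:

    # Minimal sketch: surface listeners on common clear-text administration ports.
    $cleartextTcpPorts = 21, 23, 80
    Get-NetTCPConnection -State Listen |
        Where-Object { $cleartextTcpPorts -contains $_.LocalPort } |
        Select-Object LocalAddress, LocalPort, OwningProcess

    # SNMP v1/v2c listens on UDP 161.
    Get-NetUDPEndpoint |
        Where-Object { $_.LocalPort -eq 161 } |
        Select-Object LocalAddress, LocalPort, OwningProcess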

Implement a Mandatory Access Control Model

  • Implement stringent access controls to sensitive data and resources. Access should be restricted to those users who require access and to the minimal level of access needed.

Audit Account Usage

  • Monitor VPN logins to look for suspicious access (e.g., logins from unusual geographic locations, remote logins from accounts not normally used for remote access, concurrent logins for the same account from different locations, or logins at unusual times of day).
  • Closely monitor the use of administrative accounts. Admin accounts should be used sparingly and only when necessary, such as installing new software or patches. Any use of admin accounts should be reviewed to determine if the activity is legitimate.
  • Ensure standard user accounts do not have elevated privileges. Any attempt to increase permissions on standard user accounts should be investigated as a potential compromise.
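
To support the administrative-account reviews above, a minimal sketch is to summarize recent “special privileges assigned to new logon” events (ID 4672) from the Security log and compare the resulting account list against expected admin activity. Reading the Security log requires elevation, and the event volume cap is illustrative:

    # Minimal sketch: summarize which accounts recently logged on with
    # administrative privileges (Security event ID 4672).
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4672 } -MaxEvents 2000 |
        ForEach-Object {
            # SubjectUserName holds the account name in the 4672 event payload.
            ([xml]$_.ToXml()).Event.EventData.Data |
                Where-Object { $_.Name -eq 'SubjectUserName' } |
                Select-Object -ExpandProperty '#text'
        } |
        Group-Object |
        Sort-Object Count -Descending |
        Select-Object Count, Name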

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, CISA, FBI, and NSA recommend exercising, testing, and validating your organization’s security program against threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. CISA, FBI, and NSA recommend testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 1).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze the performance of your detection and prevention technologies.
  5. Repeat the process for all security technologies to obtain a set of comprehensive performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process.

CISA, FBI, and NSA recommend continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

RESOURCES

CISA offers several no-cost scanning and testing services to help organizations reduce their exposure to threats by taking a proactive approach to mitigating attack vectors. See cisa.gov/cyber-hygiene-services.

U.S. DIB sector organizations may consider signing up for the NSA Cybersecurity Collaboration Center’s DIB Cybersecurity Service Offerings, including Protective Domain Name System (PDNS) services, vulnerability scanning, and threat intelligence collaboration for eligible organizations. For more information on how to enroll in these services, email dib_defense@cyber.nsa.gov.

ACKNOWLEDGEMENTS

CISA, FBI, and NSA acknowledge Mandiant for its contributions to this CSA.

APPENDIX: WINDOWS COMMAND SHELL ACTIVITY

Over a three-day period in February 2021, APT cyber actors used the Windows Command Shell to interact with the victim’s environment. When executing commands on the victim’s system, the threat actors used the /q and /c parameters to turn echo off, carry out the command specified by a string, and stop execution once the command completed.

On the first day, the threat actors consecutively executed many commands within the Windows Command Shell to learn about the organization’s environment and to collect sensitive data for eventual exfiltration (see Table 2).

Table 2: Windows Command Shell Activity (Day 1)

net share

Used to create, configure, and delete network shares from the command line.[1] The threat actor likely used this command to display information about shared resources on the local computer and decide which directories to exploit.

powershell dir

An alias (shorthand) for the PowerShell Get-ChildItem cmdlet, which gets the items in one or more specified locations.[2] The threat actor added switches (aka options, parameters, or flags) to form a “one-liner,” a single expression chaining commonly used commands: powershell dir -recurse -path e:\<redacted>|select fullname,length|export-csv c:\windows\temp\temp.txt. This particular command recursively lists the files and subdirectories under the target path, selects each item’s full name and length, and exports the results to a temporary text file. A cleaned-up reconstruction appears after this table.

systeminfo, tasklist, and ipconfig

systeminfo displays detailed configuration information,[3] tasklist lists currently running processes,[4] and ipconfig displays all current Transmission Control Protocol (TCP)/IP network configuration values and refreshes Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) settings.[5] The threat actor used these commands with specific switches to determine if the system was a VMware virtual machine: systeminfo > vmware & date /T, tasklist /v > vmware & date /T, and ipconfig /all >> vmware & date /T.

route print

Used to display and modify the entries in the local IP routing table. [6] The threat actor used this command to display the entries in the local IP routing table.

netstat

Used to display active TCP connections, ports on which the computer is listening, Ethernet statistics, the IP routing table, IPv4 statistics, and IPv6 statistics.[7] The threat actor used this command with three switches to display TCP connections, prevent hostname determination of foreign IP addresses, and specify the protocol for TCP: netstat -anp tcp.

certutil

Used to dump and display certification authority (CA) configuration information, configure Certificate Services, backup and restore CA components, and verify certificates, key pairs, and certificate chains.[8] The threat actor used this command with three switches to test if they could download files from the internet: certutil -urlcache -split -f https://microsoft.com temp.html.

ping

Sends Internet Control Message Protocol (ICMP) echoes to verify connectivity to another TCP/IP computer.[9] The threat actor used ping -n 2 apple.com to either test their internet connection or to detect and avoid virtualization and analysis environments or network restrictions.

taskkill

Used to end tasks or processes.[10] The threat actor used taskkill /F /PID 8952, probably to disable security features. CISA was unable to determine what this process was because process identifier (PID) numbers are dynamic.

PowerShell Compress-Archive cmdlet

Used to create a compressed (zipped) archive from specified files and directories.[11] The threat actor supplied parameters specifying shared drives as the file and folder sources and a zipped archive as the destination. Specifically, they collected sensitive contract-related information from the shared drives.
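
For analysts who want to recognize or safely rehearse the collection pattern described in this table, the following minimal sketch reconstructs the two key steps with placeholder paths (E:\example-share is illustrative, not the redacted original):

    # Minimal sketch of the Day 1 collection pattern, with placeholder paths.
    # Recursive file inventory exported to CSV (the "powershell dir" one-liner):
    Get-ChildItem -Recurse -Path 'E:\example-share' |
        Select-Object FullName, Length |
        Export-Csv 'C:\Windows\Temp\temp.txt' -NoTypeInformation

    # Archiving collected directories (the Compress-Archive activity):
    Compress-Archive -Path 'E:\example-share\contracts' -DestinationPath 'C:\Windows\Temp\out.zip'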

On the second day, the APT cyber actors executed the commands in Table 3 to perform discovery as well as collect and archive data.

Table 3: Windows Command Shell Activity (Day 2)

ntfsinfo.exe

Used to obtain volume information from the New Technology File System (NTFS) and to print it along with a directory dump of NTFS meta-data files.[12]

WinRAR.exe

Used to compress files. The threat actor subsequently masqueraded WinRAR.exe by renaming it VMware.exe.[13]
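
Because renaming a binary does not change its embedded version resource, one way to hunt for this masquerading is to compare each executable’s OriginalFilename metadata with its on-disk name. The sketch below is illustrative (the scan scope and match strings are assumptions, and not every legitimate binary populates this field):

    # Minimal sketch: find executables whose embedded original filename
    # suggests a renamed WinRAR binary (e.g., VMware.exe).
    Get-ChildItem 'C:\Users' -Recurse -Filter *.exe -ErrorAction SilentlyContinue |
        Where-Object {
            $info = $_.VersionInfo
            $info.OriginalFilename -and
            $info.OriginalFilename -ne $_.Name -and
            $info.OriginalFilename -match 'winrar|rar\.exe'
        } |
        Select-Object FullName, @{ n = 'Original'; e = { $_.VersionInfo.OriginalFilename } }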

On the third day, the APT cyber actors returned to the organization’s network and executed the commands in Table 4.

Table 4: Windows Command Shell Activity (Day 3)

powershell -ep bypass import-module .\vmware.ps1;export-mft -volume e

Threat actors ran a PowerShell command with the -ep bypass parameter to bypass the execution policy, import a module into the current session, and invoke it: powershell -ep bypass import-module .\vmware.ps1;export-mft -volume e. This module appears to acquire and export the Master File Table (MFT) for volume E for further analysis by the cyber actor.[14]

set.exe

Used to display the current environment variable settings.[15] (An environment variable is a dynamic value pointing to system or user environments (folders) of the system. System environment variables are defined by the system and used globally by all users, while user environment variables are only used by the user who declared the variable; user variables override system variables of the same name.)

dir.exe

Used to display a list of a directory’s files and subdirectories matching the eagx* text string, likely to confirm the existence of such a file.

tasklist.exe and find.exe

Used to display a list of applications and services with their PIDs for all tasks running on the computer matching the string “powers”.[16][17][18]

ping.exe

Used to send two ICMP echoes to amazon.com. This could have been to detect or avoid virtualization and analysis environments, circumvent network restrictions, or test their internet connection.[19]

del.exe with the /f parameter

Used to force deletion of read-only files matching the *.rar and tempg* wildcards.[20]