Summarize text with Text Analytics API



Text Analytics extractive summarization is in public preview!




The extractive summarization feature in Text Analytics uses natural language processing techniques to locate key sentences in an unstructured text document. Together, these sentences convey the main idea of the document. The feature is exposed as an API that developers can use to build intelligent solutions around the extracted key information, supporting a variety of use cases.


 


In the public preview, extractive summarization supports 10 languages. It is based on pretrained multilingual transformer models, part of our quest for holistic representations. It draws its strength from transfer learning across monolingual models and harnesses the shared nature of languages to produce models of improved quality and efficiency. The 10 languages are English, Spanish, French, Italian, German, Chinese (Simplified), Japanese, Korean, Portuguese (Portugal), and Portuguese (Brazil).
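If you want to try the preview from the command line, below is a minimal sketch of calling the asynchronous analyze endpoint with curl. The API version, task name, and field names shown are assumptions based on the preview's documented request shape, so confirm them against the quickstart referenced below before relying on them.

# Sketch only: the endpoint path, API version, and task/field names are assumptions -
# verify them against the extractive summarization quickstart before use.
ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
KEY="<your-text-analytics-key>"
curl -s -i -X POST "$ENDPOINT/text/analytics/v3.2-preview.1/analyze" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "Document summarization sample",
        "analysisInput": { "documents": [
          { "id": "1", "language": "en", "text": "Paste the unstructured text you want summarized here." } ] },
        "tasks": { "extractiveSummarizationTasks": [ { "parameters": { "sentenceCount": 3 } } ] }
      }'
# The call is asynchronous: poll the URL returned in the operation-location response header
# to retrieve the extracted key sentences and their rank scores.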


Learn more about Text Analytics extractive summarization


 


References:


Quickstart offers an easy way to get started with any of the Text Analytics offerings.


Text Analytics v3.1 GA Announcement


 


 

Eliminating IoT vulnerabilities using CIS Benchmarks and Azure Defender for IoT



CIS Benchmarks from the Center for Internet Security (CIS) provide organizations with configuration best practices for securing operating systems. Using these standards, which have been defined by cybersecurity industry experts and research institutions, together with Azure Defender for IoT can help ensure that your organization's devices are configured securely from day one.


 


What are the CIS Benchmarks? 


Without the right configuration, operating systems are highly vulnerable and present many opportunities for attackers to penetrate the organization. The CIS Security Benchmarks program provides best practices and industry-agreed standards for securing devices against cyber threats. The benchmarks include security recommendations for operating systems, network devices, cloud hosting services, and more.


CIS Benchmarks can be used to implement the CIS Controls, a prioritized set of safeguards that mitigate the most prevalent cyber-attacks against systems and networks.




 


Fig. 1 CIS Benchmarks controls 


 


The top 5 most impactful controls from the OS CIS Benchmarks to implement are:  



  1. Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers – As delivered by manufacturers and resellers, the default configurations for operating systems are not security-oriented. Botnet brute-force attacks, for example, can be avoided if the device defaults are changed. 

  2. Maintenance, Monitoring and Analysis of Audit Logs – Without solid audit logs, attackers can hide their activities on victim machines. Manage audit logs of events to help detect an attack.  

  3. Limitation and Control of Network Ports, Protocols, and Services – Manage processes on your networked devices to prevent attackers from exploiting services that are installed by software packages automatically without alerting the user. 

  4. Controlled Access Based on the Need to Know – In many attacks the victim is not aware that sensitive data is leaving their system as they were not monitoring data movement. Manage your data flow to minimize its exposure to attackers. 

  5. Account Monitoring and Control – Inactive accounts can be exploited to impersonate legitimate users making the discovery of attacker behavior difficult. The lifecycle of all accounts should be monitored to minimize opportunities for attackers to leverage them. 


 


How can Azure Defender for IoT help you comply with CIS Benchmarks? 


Azure Defender for IoT is a comprehensive IoT/OT security solution for discovering IoT/OT devices, identifying vulnerabilities, and continuously monitoring for threats. It is available in both agentless and agent-based architectures, and is tightly integrated with Azure Sentinel, Microsoft’s cloud-native SIEM/SOAR platform. The solution also integrates with third-party SOC solutions such as Splunk, IBM QRadar, and ServiceNow. 


 


In this blog we focus on the agent-based solution that device builders can include in their devices – a lightweight micro agent that lets them build security directly into new IoT devices and Azure IoT projects. The micro agent provides endpoint visibility into security posture management and threat detection, and integrates with Microsoft's other security tools for unified security management.


 


Each benchmark can include hundreds of configurations that touch different parts of the OS, such as network configuration, authentication, authorization, and maintenance. Reviewing all the checks and ensuring the company is compliant is a complex and time-consuming task. Defender for IoT helps organizations automate this process while continuously identifying any existing weak links in their OS security posture.


 


As soon as the agent is activated, the Azure Defender for IoT micro agent automatically runs all relevant benchmark checks on your devices. The results are then populated into the IoT Hub interface, under the Defender for IoT security console, as recommendations.




Fig. 2 CIS Benchmarks recommendations as seen in the recommendation page under the IoT Hub 


 


All vulnerable devices are then displayed, including the number of failed checks on each device.




Fig. 3 Vulnerable devices and the results of checks 


 


You can view a specific device and see all the reasons its checks failed, along with the exact mitigation steps needed to configure the device correctly, using Log Analytics, a tool in the Azure portal for editing and running log queries.




Fig. 4 CIS Benchmarks results and the appropriate mitigation steps for a single device 
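If you prefer pulling the same failed-check details from the command line, the sketch below uses the Azure CLI's Log Analytics query command. The workspace GUID is a placeholder, and the SecurityRecommendation table and column names are an assumption about how the Defender for IoT data lands in a connected workspace, so adjust the KQL to match the tables your workspace actually exposes.

# Hedged sketch: replace the workspace GUID, and adjust table/column names to your workspace.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query "SecurityRecommendation
    | where RecommendationDisplayName contains 'CIS'
    | project TimeGenerated, DeviceId, RecommendationDisplayName, RecommendationSeverity
    | order by TimeGenerated desc" \
  -o table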


 


Currently, the feature is available for Debian and Ubuntu Linux distributions; we are working toward adding more checks for additional operating systems in the future.


 


The CIS Benchmarks can also be used to implement the NIST (National Institute of Standards and Technology) Framework for Improving Critical Infrastructure Cybersecurity. The NIST Framework is a U.S. Department of Commerce initiative that defines industry standards and best practices to help organizations manage their cybersecurity risks.


 


For customers that need to comply with the NIST Cybersecurity Framework (CSF), the CIS Controls v7.1 mapping to NIST CSF document has all the necessary controls between the two frameworks mapped out for you.


 


Thank you for reading, and we hope this information proves helpful. To learn more about how Azure Defender for IoT can benefit enterprises and device builders, check out the following: 


Oracle Workloads on Azure- IO is King!



If you're migrating your data estate to Azure, as is natural considering Azure is an enterprise cloud that can be the home for all data, including Oracle, you may wonder what storage solutions there are to support these types of IO-heavy workloads. Maybe you didn't realize how important storage is to Oracle in the cloud. Most customers we work with are focused on what vCPU and memory are available in Azure, but for 95% of Oracle workloads, it's IO that drives the infrastructure we choose, and of that IO, it's throughput, (MBPs) that is most often the deciding factor in the final VM sizes and storage type. 


 


This post isn't about promoting one storage vendor or solution over another; it will hopefully help you understand that each customer engagement is different, that there is a solution for everyone, and that you can build out what you need and meet every IO workload with Oracle, (or any other heavy IO workload) in Azure.


 


There are limits on storage, but more importantly, there are per-VM limits on storage and network that must be considered. When choosing a VM, don't just match the amount of vCPU and memory; ask whether the VM can handle the throughput demands your workload will place on it. One of our favorite VM types is the E-series ds v4. This SKU series offers premium SSD for the OS disk, constrained-vCPU versions if we need a larger "chassis" and memory with fewer vCPU for licensing constraints, and higher throughput than we see with many other series of similar configuration.


If you inspect the specifications by SKU size, you will see the max cached IOPS/MBPs and network bandwidth displayed for the E ds v4 series:




 


Table 1, E-series, ds v4 VMs in Azure specifications


 


The table above means that both premium disk and ultra disk are held to the storage and network limits displayed, while for solutions such as ANF, Silk, Excelero, or Flashgrid, we're held only to the network limits shown. Since, as I stated earlier, it's throughput (MBPs) that is the biggest headache, not IOPs, (i.e., the number of requests), you can understand why the latter solutions come in handy with IO-heavy workloads such as Oracle in Azure.
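A quick way to sanity-check a candidate SKU's documented limits before committing to it is to list its capabilities with the Azure CLI. Standard_E16ds_v4 and eastus are just example values here; capability names vary a little between SKU families, so skim the full output for the disk and network entries.

# Example only: substitute your own region and SKU name.
az vm list-skus --location eastus --size Standard_E16ds_v4 \
  --query "[0].capabilities[].{capability:name, value:value}" -o table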


 


If you have sized out the Oracle workload for Azure properly, then you will know what you require in Azure IaaS to run it and can choose the best VM and storage for the job. If you're then puzzled by the storage solutions, let's take a deeper look and, especially for Exadata, demonstrate what options there are.


Don’t Guess


I can't stress this enough: if you haven't sized out the Oracle workload from an AWR that shows considerable database workload activity from the customer's environment, you're just guessing. Do NOT try to lift and shift the infrastructure, especially from an Exadata – you are, AGAIN, wasting your time. An Exadata is an engineered system; there are infrastructure components that can't be shifted over, and it is more often than not quite over-provisioned. 


 


I also run into pushback on going through the sizing exercise. Many will want to simply take the existing hardware and lift and shift it to the cloud. This is one of the quickest ways to pay two or more times what you should for Oracle licensing. I'm not going to argue with salespeople who push back on me about this; I'll chalk the deal or migration up as lost and go spend my time on a migration that's going to be successful from the beginning.


Fight Me


So, I know what the documentation says about storage for IaaS VMs:




 


Table 2, Storage Options for IaaS VMs in Microsoft Docs.


 


The reality of what is best for Oracle on Azure may not agree with this table, and I'm going to tell you now, I don't agree with the above table. Ultra disk may look appealing for Oracle, but we simply haven't found it worth the limitations versus the cost for Oracle datafiles, whereas for other uses, such as redo logs, it's a fantastic win, (along with non-Oracle opportunities.) Ultra is still held to the storage and network limits per VM, and this means that we can't gain the throughput that we require for most heavy Oracle workloads needing 2000+ MBPs.


 


Using example workloads, I can begin to direct and "evolve" our storage solution and the levels we use with real customer use cases:

| Storage Name | Storage Type | Use |
| --- | --- | --- |
| Standard HDD | Native | Not for use with Relational Databases |
| Standard SSD | Native | Less than acceptable for Oracle workloads or Oracle VM OS Disks |
| Premium SSD | Native | Standard Oracle workloads and OS disk; with datafiles, always turn on ReadOnly host caching |
| Ultra Disk | Native | Redo logs; rarely for datafiles due to limitations |
| Azure NetApp Files | Native with 1st Party Partnership | High IO Oracle workloads, some Exadata |
| Silk | Third Party Partnership | High IO Oracle + workloads, especially Exadata |
| Excelero NVMesh | Third Party Partnership | High IO Oracle + workloads |
| Flashgrid Storage Fabric SW | Third Party Partnership | High IO Oracle + workloads |
| NetApp Cloud Volume OnTap (CVO) | Third Party Partnership | High IO Oracle + workloads, especially Exadata |

Table 3, Storage options most common for Oracle in Azure


 


As you look at the table above, you realize that there are a few options at the lower IO workload levels and many at the higher ones. This is where knowledge of your workload and its demands, along with the unique features of each solution, will come in handy when deciding.


Premium Disk


It is going to be a very, very rare day that we place an Oracle workload on standard SSD. A P10 disk is the recommended practice for the OS disk of each VM Oracle will be running on, and then we need to start allocating storage for the datafiles, redo logs, etc.


We rarely, if ever, come across Oracle databases that don't need the IO horsepower of Premium. With that, we get a significant performance gain from ReadOnly host caching, so the P40/P50 disks, (minus that last 1 GiB to keep them inside the 4095 GiB limit for ReadOnly host caching) are our favorites and meet the needs of many Oracle workloads. For the smallest Oracle database workloads, we may use smaller premium SSDs or stripe premium disks, as we can use multiple disks with a single ASM diskgroup. Always pay attention when choosing your VM size: there is a max number of data disks that can be attached, so this will also factor into what storage you choose, (refer to table 1.)


 


ReadOnly host caching is only available on certain VM series/SKUs, as well as on mid- to high-range premium SSD, and it is limited to disks of 4095 GiB or less. If you allocate a disk larger than that, host caching will automatically be disabled. We hear a lot from infra folks about "disk bursting" with IO in Azure premium SSD storage, in either its credit-based or on-demand version. I haven't had a customer workload that could really make use of it, but for smaller workloads, upwards of 30 minutes of bursting could be beneficial. For P40-P80, there is a bursting option that is not time-limited, at up to 1000 MBPs. Many customers' batch loads in Oracle are just too intensive and run too long to take advantage of credit-based bursting, and on-demand bursting or changing the performance tier is too expensive or too inconsistent in performance for many. For relational database workloads, consistency in performance really is key. Most customers choose to stripe disks to get the max throughput from storage for most Oracle databases, or choose higher-tier storage, skipping the bursting options altogether.
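For reference, here is a sketch of attaching a new 2 TiB premium data disk with ReadOnly host caching set at attach time. The resource group, VM, and disk names are placeholders, and you should double-check the parameters against az vm disk attach --help for your CLI version.

# Placeholders throughout; verify parameter names with: az vm disk attach --help
az vm disk attach \
  --resource-group ora-rg \
  --vm-name ora-vm01 \
  --name ora-data-disk01 \
  --new --size-gb 2048 --sku Premium_LRS \
  --caching ReadOnly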


 


Using the table below, you can see the size and the IO max for each premium storage, which tells you what you will be limited to for a single disk unless you stripe:

| Premium Disk Size | Storage Amount (GiB) | IOPs Max/Bursting | MBPs Max/Bursting |
| --- | --- | --- | --- |
| P10 | 128 | 500/3500 | 100/170 |
| P20 | 256 | 1100/3500 | 125/170 |
| P30 | 512 | 2300/3500 | 150/170 |
| P40 | 1024 | 5000/30000 | 250/1000 |
| P50 | 2048 | 7500/30000 | 250/1000 |
| P60 | 8192 | 16000/30000 | 500/1000 |
| P70 | 16384 | 18000/30000 | 750/1000 |
| P80 | 32767 | 20000/30000 | 900/1000 |


When striping, again, you must be cognizant of the max number of disks you’re allowed per VM, remembering that the OS disk is counted as one.
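If you are striping at the OS level with a filesystem rather than letting ASM spread the IO across the disks itself, a simple LVM layout like the sketch below works. The device names and mount point are assumptions; check lsblk on your VM for the actual data disk devices.

# Assumes six data disks appear as /dev/sdc through /dev/sdh - adjust to what lsblk shows.
sudo pvcreate /dev/sd{c..h}
sudo vgcreate oradata_vg /dev/sd{c..h}
# -i 6 stripes across all six disks; -I 1m sets a 1 MiB stripe size (a common choice for Oracle)
sudo lvcreate -n oradata_lv -i 6 -I 1m -l 100%FREE oradata_vg
sudo mkfs.xfs /dev/oradata_vg/oradata_lv
sudo mkdir -p /u02/oradata
sudo mount /dev/oradata_vg/oradata_lv /u02/oradata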


Ultra Disk


We like ultra disk, but it has numerous limitations once we start pricing out what it will take to run a database on it: it is held to the storage limits, not just the network limits, per VM; we have no volume snapshot mechanism or availability zone solution with it; and the pricing model is complicated. It ends up being a solution with limited use for Oracle. That use is redo logs, when a database resides on premium disk and is experiencing redo log latency. 


 


More often, a better option is to stripe premium disks to achieve upwards of 2000 MBPs and use Azure Backup volume snapshots to eliminate the excess IO created by large RMAN backups, with no need to spend more money on ultra disk. 


Azure NetApp Files (ANF)


Many folks think this is a third-party solution, but it's really a native solution in Azure in partnership with NetApp, and it might need a rename to something like "Azure Enterprise Files". It's first-tier storage for high IO workloads and is limited only by the network limit per VM. An ANF capacity pool is storage built at the region level, has HA built into the solution, and can be allocated to multiple VMs, offering the ability to meet numerous workloads that other native solutions can't. It also brings robust cloning capabilities and volume snapshots that can be shared across capacity pools, even across regions, which can be used to bring up a second Oracle environment in short order and avoid the additional licensing that would be required if Oracle Data Guard were used instead. 


 


ANF is also an excellent choice for datacenter migrations where a native storage solution is required, or where strict, Azure-certified storage with high IO capabilities is needed.
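To give a feel for how an ANF capacity pool and volume come together, here is a rough Azure CLI sketch. All of the names are hypothetical, and the size units and parameter names in the az netappfiles commands have shifted between CLI versions, so treat this as illustrative and confirm against az netappfiles --help and the current docs.

# Hypothetical names throughout (ora-rg, oranetapp, ora-pool, oradata01, ora-vnet, anf-subnet).
az netappfiles account create -g ora-rg --name oranetapp -l eastus
az netappfiles pool create -g ora-rg --account-name oranetapp --name ora-pool \
  -l eastus --service-level Ultra --size 4          # capacity pool size, in TiB in recent CLI versions
az netappfiles volume create -g ora-rg --account-name oranetapp --pool-name ora-pool \
  --name oradata01 -l eastus --service-level Ultra --usage-threshold 2048 \
  --file-path oradata01 --vnet ora-vnet --subnet anf-subnet --protocol-types NFSv4.1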


Silk


As I work primarily on Exadata migrations, I spend a lot of time with Silk data pods. This is a third-party solution that uses a number of patented features to build out a Kubernetes data pod inside Azure from compute nodes, (D-series VMs) and management nodes, (L-series VMs), using the NVMe storage to accomplish fast IO. Silk has compression and dedupe, which simplify some of the additional "decoupling" I'd have to do with the loss of Hybrid Columnar Compression, (HCC) from Exadata. As the IO would grow considerably without HCC, I commonly use additional partitioning and Oracle Advanced Compression to try to make up for that loss. 


Another feature that I love about Silk is its thin cloning. The ability to create a read/write clone without an additional storage cost is beneficial for E-Business Suite (EBS) and other Oracle applications that require consistent copies across multiple stage environments; being able to save on storage while doing this, and to do it quickly, is fantastic. Anyone who's used sparse cloning on Exadata would love to have the thin cloning in Silk, too.


The Rest


I've only been working with Excelero NVMesh for a short while, and the same goes for Flashgrid Storage Fabric. While both, like Silk, use VMs and their local storage to create high IO solutions whose only per-VM limitation is at the network layer, they don't have some of the additional features such as compression/dedupe, thin cloning, and volume snapshots. I've also been introduced to NetApp Cloud Volume OnTap (CVO), which marries the best of on-prem OnTap storage with Azure in a third-party solution that is closer in features to Silk and can benefit Exadata workloads that rely on HCC, thin cloning, and snapshot backups.


 


The How


When deciding which solution to go with, it's important to identify the following:



  1. The vCPU and memory requirements

  2. The IO, both IOPs and MBPs, especially the latter limit for the VM

  3. Using the size of the database, along with IOPS/MBPs, then choose the type of storage, (premium or high IO)

  4. The business' SLA around Recovery Point Objective (RPO) and Recovery Time Objective (RTO) will then tell you which of the solutions that meet the IO needs will be best.

  5. A secondary service can be added to create additional functionality, (as an example, we add Commvault to ANF to add additional cloning features at a low cost).


So let's take a few examples and look at what kind of combinations you might choose:


Example Oracle Workload #1



  • 16 vCPU

  • 128G of RAM

  • IOPS 22K

  • MBPs 212M

  • DB Size: 5TB

  • Backup Size: 23TB

  • RPO/RTO of 15 min/8 hrs


I know this may seem limited compared to everything you might need to size it out, but we are assuming a sizing assessment has been done from an AWR, and from it we can come up with the following recommendations (a quick sanity check of Option A's math follows the two storage options below): 


 


Recommended VM: E16ds v4


Storage Option A



  • 1 Premium P10- OS Disk

  • 6 Premium P40 Disks- Datafiles and Redo Logs

  • 24 TiB of Azure Blob Storage- Backups

  • Backup strategy: RMAN


Storage Option B



  • 1 Premium P10- OS Disk

  • 5 Premium P40 Disks- Datafiles

  • 40G Ultra Disk- Redo Logs

  • 24 TiB of Azure Premium Blob Storage- Backups

  • Backup Strategy: Azure Backup for Oracle
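As a quick sanity check of Option A, you can multiply out the per-disk P40 figures from the premium disk table earlier in this post and compare them with the workload numbers above; the small sketch below does the arithmetic. Keep in mind the VM's own uncached disk and network limits still cap whatever the striped set can deliver.

# Option A sanity check: six striped P40 disks vs. the stated workload requirements.
DISKS=6; DISK_IOPS=5000; DISK_MBPS=250          # per-disk P40 figures from the table above
NEED_IOPS=22000; NEED_MBPS=212                  # workload #1 requirements
echo "Aggregate IOPS: $((DISKS * DISK_IOPS)) (need $NEED_IOPS)"
echo "Aggregate MBPs: $((DISKS * DISK_MBPS)) (need $NEED_MBPS)"
# The E16ds v4 VM's own limits (see table 1) are the final ceiling on what actually gets delivered.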


Example Oracle Workload #2



  • 32 vCPU

  • 480G RAM

  • IOPs 100K

  • MBPS 2800M

  • DB Size 8TB

  • Backup Size 28TB

  • RPO/RTO of 15 min/2 hrs


Due to the limited RTO, I would use Oracle Data Guard to support the 2 hr RTO, as an RMAN recovery from storage wouldn't meet the DR requirements on its own.


Recommended VM: E64-32ds v4 constrained vCPU VM



  • 1 Premium P10- OS Disk

  • Storage Option A: ANF with capacity pool and snapshot taken every 10 minutes to secondary capacity pool in separate region.

  • Storage Option B: Excelero with Oracle Data Guard secondary in second Availability Zone, using Fast-start Failover and Observer/DG Broker and RMAN backups to Premium file storage.


Example Oracle Workload #3



  • 16 vCPU

  • 85G of Ram

  • IOPs 300K

  • MBPs 4500M

  • DB Size 12T

  • Backup Size: Using ZDLRA (Zero Data Loss Recovery Appliance) from Oracle

  • RPO/RTO of 5 min/1 hr

  • Using Exadata features HCC, smart scans, storage indexes, flash cache and flash logging


Recommended VM: E16ds v4, (the extra memory will come in handy as the SGA and PGA grow post-migration)



  • 1 Premium P10- OS Disk

  • Storage Option A: Silk with Data Guard, thin cloning, and volume snapshots, along with its compression/dedupe. This lessens the post-migration optimization that will need to be done.

  • Storage Option B: ANF with Data Guard, volume snapshots for backups to eliminate some of the IO overhead from RMAN, and the addition of Oracle Advanced Compression along with a partitioning strategy to assist with the increased IO from the loss of HCC.


With the examples above, I stuck to the E-series ds v4 type VMs, as again, these are some of our favorite SKUs for Oracle on Azure in IaaS. Realize that we do have options for each type of workload, but that depending on the IO, different solutions will meet the customer's requirements, and it's important to have the right data.

Near real-time monitoring of SQL Server Linux/containers using Telegraf-InfluxDB and Grafana



Introduction: 


In this blog, we will look at how to configure near real-time monitoring of SQL Server on Linux and containers with the Telegraf-InfluxDB and Grafana stack. This is built along similar lines to the Azure SQL DB and Managed Instance solutions already published by my colleague Denzil Ribeiro. You can refer to those blogs to learn more about Telegraf, InfluxDB, and Grafana. 


 


A quick rundown of all the tasks we’ll be carrying out to complete the setup:



  1. We will first install the Telegraf, InfluxDB, and Grafana containers on the monitoring host machine. You may be wondering why containers are used: because they are simple to set up and also provide isolation. 

  2. Then, we will prepare the target SQL Server instances by creating, on every target instance (SQL Server on Linux/containers/Windows), the login that Telegraf will use to connect for data collection.

  3. As this is a demo, I am running all three containers on a single host machine, but depending on the instances you monitor and data that is collected, you may decide to run the containers on different nodes.

  4. The data retention policies of InfluxDB will then be configured. The retention policy ensures that InfluxDB does not grow out of bounds. 

  5. Finally, we will configure and set up Grafana to create our dashboard with graphs and charts.


 


Let’s Build:


For this demonstration, the host on which I deploy the containers is an Azure VM running Ubuntu 20.04. I’m collecting data from the four SQL Server instances listed below:



  1. A SQL Server instance running on RHEL.

  2. Two SQL Server container instances, one deployed using the Ubuntu image and the other using the RHEL image.

  3. A SQL Server running on Windows.


Let’s start deploying containers:



  1. Install docker on the Ubuntu 20.04 host, which is our monitoring VM. To install Docker on Ubuntu 20.04 VM, refer to this article.

  2. Run the command below to create a docker network. This is the common network on which all three containers (Telegraf, InfluxDB, and Grafana) will be deployed.

    docker network create --driver bridge influxdb-telegraf-net 
    #You can change the name of the network from “influxdb-telegraf-net” to whatever you want.​

    You can list the network using the command:

    docker network ls​




  3. We will now create the SQL Server login that Telegraf will use to connect to the target SQL Server instances. This login must be created on all target SQL Server instances that you intend to monitor. You can change the login name from telegraf to any other name of your choice, but the same change also needs to be made in the telegraf.conf file.

    USE master; 
    CREATE LOGIN telegraf WITH PASSWORD = N'StrongPassword1!', CHECK_POLICY = ON; 
    GO 
    GRANT VIEW SERVER STATE TO telegraf; 
    GO 
    GRANT VIEW ANY DEFINITION TO telegraf; 
    GO 



  4. Run the following command to deploy the telegraf container

    docker run -d --name=telegraf -v /home/amvin/monitor/sqltelegraf/telegraf.conf:/etc/telegraf/telegraf.conf --net=influxdb-telegraf-net telegraf 
    # where:/home/amvin/monitor/sqltelegraf/telegraf.conf is a telegraf configuration file placed on my host machine, please update the path as per your environment.
    # please ensure that you change the IP addresses and port numbers to your target SQL Server instances in the telegraf.conf file that you create in your environment. 

    Note: You can download the sample telegraf.conf from here; a minimal sketch of its key sections is also shown right after this list. Please remember to change the IP addresses to your target SQL Server instance IP addresses.



  5.  Run the following command to deploy the InfluxDB container

    docker run --detach --net=influxdb-telegraf-net -v /home/amvin/monitor/data/influx:/var/lib/influxdb:rw --hostname influxdb --restart=always -p 8086:8086 --name influxdb influxdb:1.8 
    
    # where: /home/amvin/monitor/data/influx is a folder on the host that I am mounting inside the container, you can create this folder in any location.
    # please ensure you set the right permissions so files can be written inside this folder by the container.  ​


  6. Deploy the Grafana container using the following command

    docker run --detach -p 3001:3000 --net=influxdb-telegraf-net --restart=always -v /home/amvin/monitor/data/grafana:/var/lib/grafana -e "GF_INSTALL_PLUGINS=grafana-azure-monitor-datasource,grafana-piechart-panel,savantly-heatmap-panel" --name grafana grafana/grafana
    
    # where: home/amvin/monitor/data/grafana is a folder on the host that I am mounting inside the container, you can create this folder in any location.
    # please ensure you set the right permissions so files can be written inside this folder. 
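As promised in step 4, here is a minimal sketch of the relevant telegraf.conf sections, written out with a heredoc. The IP addresses are placeholders, and the exact plugin option names (for example database_type) can differ between Telegraf versions, so compare this against the sample file before using it.

# Minimal telegraf.conf sketch - placeholder IPs; verify plugin options for your Telegraf version.
cat > /home/amvin/monitor/sqltelegraf/telegraf.conf <<'EOF'
[agent]
  interval = "10s"

[[inputs.sqlserver]]
  servers = [
    "Server=10.0.0.4;Port=1433;User Id=telegraf;Password=StrongPassword1!;app name=telegraf;log=1;",
    "Server=10.0.0.5;Port=1433;User Id=telegraf;Password=StrongPassword1!;app name=telegraf;log=1;"
  ]
  database_type = "SQLServer"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]   # the InfluxDB container name on influxdb-telegraf-net
  database = "telegraf"
EOF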



With the containers now deployed, use "docker ps -a" to list them and confirm that all three are up and running.




 


Note: Please ensure that you open the host ports that the Grafana and InfluxDB containers are mapped to; with the commands above, those are 3001 and 8086 respectively. 


 


Let's now set up a retention policy on InfluxDB to ensure that there is limited growth of the database. I am setting this to 30 days; you can configure it as per your requirement.


 


 

sudo docker exec -it influxdb bash
# then run the below commands inside the container
influx
use telegraf; 
show retention policies; 
create retention policy retain30days on telegraf duration 30d replication 1 default; 
quit
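The same retention policy can also be created non-interactively, which is handy for scripting the setup; the influx 1.x CLI accepts an -execute flag. This assumes the telegraf database already exists because Telegraf has written at least one batch of metrics.

sudo docker exec influxdb influx -execute "CREATE RETENTION POLICY retain30days ON telegraf DURATION 30d REPLICATION 1 DEFAULT"
sudo docker exec influxdb influx -execute "SHOW RETENTION POLICIES ON telegraf"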

 


 


 


Setting up Grafana: 


We are now ready to create the dashboard, but before that we need to set up Grafana. To do that, follow the steps below: 



  • Browse to your Grafana instance – http://[GRAFANA_IP_ADDRESS_OR_SERVERNAME]:3001 (3001 is the host port the Grafana container was mapped to above). 

  • The first time you log in to Grafana, the login and password are both set to: admin. Also take a look at the Getting Started Grafana documentation. 

  • Add a data source for InfluxDB. Detailed instructions are in the Grafana data source docs; a scripted alternative is also sketched after this list. 

    • Type: InfluxDB

    • Name: InfluxDB (this is also the default) 

    • URL: http://[INFLUXDB_HOSTNAME_OR_IP_ADDRESS]:8086. (With the container setup above, use the InfluxDB container name, i.e. http://influxdb:8086; http://localhost:8086 does not work from inside the Grafana container. Make sure to explicitly enter this URL in the field.) 

    • Database: telegraf 

    • Click “Save & Test”. You should see the message “Data source is working”. 



  • Download the Grafana dashboard JSON definitions from the repo and then import them into Grafana. 
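As mentioned above, the data source can also be added with a script instead of the UI. The sketch below uses Grafana's HTTP API with the default admin credentials and the 3001 host port from the container commands earlier in this post; the JSON field names reflect the classic InfluxDB data source and may differ slightly in newer Grafana releases.

# Scripted data source setup - adjust credentials, port, and field names to your Grafana version.
curl -s -X POST http://admin:admin@localhost:3001/api/datasources \
  -H "Content-Type: application/json" \
  -d '{
        "name": "InfluxDB",
        "type": "influxdb",
        "access": "proxy",
        "url": "http://influxdb:8086",
        "database": "telegraf",
        "isDefault": true
      }'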


You are now ready: the imported dashboards should start showing data from your SQL Server instances. Feel free to modify the graphs as per your requirements.


 




 


 


 


 


 


 

Customer review: Abnormal Security helps protect our environment with next-gen email security


Abnormal Security, an app available in Azure Marketplace, uses advanced artificial intelligence detection techniques to stop targeted phishing attacks. The cloud-native email security platform protects enterprises by detecting anomalous behavior and developing a deep understanding of people, relationships, and business context. Abnormal Security is a member of the Microsoft Intelligent Security Association.



Azure Marketplace interviewed Ben S., an IT director in the manufacturing sector, to learn what he had to say about the product.


 


What do you like best about Abnormal Security?
Abnormal Security stood out to us as a nuanced and unique way to approach the idea of business email compromise. Through their behavioral engine, they would build out personas for what is normal and expected interaction for your employee base, and through that identification, they would classify what is abnormal activity. And they carry that forward from your internal personnel to the vendor base that you contact and interact with.


 


It does a really great job of providing reporting both at a high level and then down to the granular details. So there’s a handful of dashboards that help to show attack trends and attack types, whether it be credential phishing, malware scam, or social engineering. Any of those types of categories it’s able to represent both in percentage and count. It’s also able to show attacker origin. And then the other piece that I think is incredibly helpful is that, for the emails it does remediate or take action on, it doesn’t just do that blindly. It actually takes that email message and is able to highlight the pieces that caused its threat score to be elevated so that you, as a security analyst or a support individual, can go through and understand what it is you’re looking at and know why something would be considered a threat or malicious.


 


How has the product helped your organization?
We saw a lot of banking impersonation and, in some cases, internal invoice impersonation taking place. We were receiving pretty legitimate-looking invoices from known vendors. But they were coming from different email servers. There were also instances where the external contact had been compromised and the invoice had banking information changes to it, trying to get us to wire funds to an attacker’s bank account. Abnormal had a great proof of concept that they were able to walk us through. From the time we turned it on, we saw immediate results from that. The solution integrates with our Exchange Online environment and doesn’t sit in line like a traditional secure email gateway type of solution. It sits next to it and maintains that same visibility. So if an attack is identified after the fact, it’s still connected to the point where it’s able to then do post-remediation and pull those delivered messages out from mailboxes.


 


Another useful feature is the abuse mailbox. It’s a function that allows us in IT support to leverage some email client toolbar applications for employees to be able to submit suspect messages. Previously that was a manual effort by our security team, where that would become a helpdesk ticket item that then would require hands-on analysis by someone on my team.


 


How are customer service and support?
Customer service has been great. When we reached out and started to engage with them on the proof of concept, they were tremendous in helping to get the platform configured. And then that carried forward to when we were customers as we were getting more and more familiar with the platform and asking questions, primarily around why certain emails were classified the way they were. Those were all easy-to-open cases where we got connected with dedicated support personnel. They configured this solution for us so that we have some flexibility in some different classifications, most notably the ability for us to maintain our VIP list of people that potentially are at higher risk, or that we want additional scrutiny around because of approval power.


 


Any recommendations to other users considering this product?
I think the biggest thing in the security space is there are a ton of different solutions and platforms trying to address similar issues. It’s important, when you’re looking for a solution, to understand what you’re looking to address. Financial loss, for us, was one of the biggest drivers, and in the evaluations we did, Abnormal showed the best capabilities to help address that risk.


 


What is your overall rating for this product?
5 out of 5 stars.


 


Cloud marketplaces are transforming the way businesses find, try, and deploy applications to help their digital transformation. We hope these improvements make your experience in Azure Marketplace intuitive and simple. Learn more about Azure Marketplace and find ways to discover the right application for your cloud solution needs.