This article is contributed. See the original author and article here.
Do you remember the Command Prompt? Are you still using it?
Command Prompt
There were so many customization options in there like font size, font type, and colors (all eight of them)!
Command Prompt Options
If like me, you need a bit more customization, I do have something for you.
Windows Terminal is the re-imagination of what a first-class command prompt experience should be, and gosh, does it deliver. Let me show you what I got when I first launched the Command Prompt from the terminal.
Windows Terminal First Look around
Introducing the Amazing Windows Terminal
So the first thing I noticed? Transparency. I know it may sound superficial, but it’s catchy. The second thing? The tab with two buttons beside it. The + lets me open more Command Prompt tabs without changing windows (yay!).
The down arrow had me wondering for a second, so I clicked it.
So many options…
Your options may vary, but I have the Windows Subsystem for Linux installed on my machine along with a few other options, so Ubuntu shows up. I’m amazed that I can open any of those prompts from a single menu. Mind blown!
Back to Command Prompt
We can open up any shell/prompt/command line from Windows Terminal, which is nice, but how does it make Command Prompt better?
See that image above with the Settings option? Let’s click on it, and it opens up this file in Visual Studio Code.
Making the old feel new again
Settings.json of Windows Terminal
That’s a ton of JSON, but it will become quite easy quite fast.
The first link at Line 3 will bring you to the Official Docs, which is perfect: they cover every setting in detail.
I want to make this even more straightforward for you.
Do you see Line 6? Visual Studio Code will read the schema definition and enable code completion within your JSON file straight away.
If I wanted to modify something, I would create a new line and type a `"`, and suddenly all the available options appear.
Let me give you a new cmd.exe profile that you can paste over your existing one to have something that feels brand new right now.
{
    // Make changes here to the cmd.exe profile
    "guid": "{0caa0dad-35be-5f56-a8ff-afceeeaa6101}",
    "name": "cmd",
    "commandline": "cmd.exe",
    "useAcrylic": true,
    "acrylicOpacity": 0.7,
    "backgroundImageOpacity": 0.7,
    "backgroundImageStretchMode": "fill",
    "backgroundImage": "https://wallpapercave.com/wp/wp2053618.jpg",
    "startingDirectory": "C:\\git_ws",
    "fontFace": "Cascadia Code",
    "fontSize": 12,
    "hidden": false
}
End Result
Refreshed Command Prompt
That doesn’t look anything like our classic Command Prompt.
There are many more options that we can cover in another article. What we did for Command Prompt, we can do for every terminal/shell in our previous list.
Next Steps
Want to have a terminal that stands out? Do you want a terminal that is suited just for you and no one else?
Jigar Dani, Principal PM Manager, Microsoft
Sriram Srinivasan, Principal Software Engineering Manager, Microsoft
Over a decade ago, Skype invented the Silk audio codec to transmit speech over the internet and catalyzed the voice over internet protocol (VoIP) industry. The primary codec used in VoIP at the time was G.722, which required 64 kbps to transmit wideband (16 kHz) speech; Silk, on the other hand, offered wideband quality starting at 14 kbps. Additionally, Silk was an adaptive variable-bitrate codec that seamlessly switched from delivering narrowband (8 kHz) speech at an ultra-low bandwidth of 6 kbps to near-transparent-quality speech at higher bitrates. This was critical for the dial-up and limited broadband internet that existed at the time, and it has served us well as the default codec for Skype and Microsoft Teams. It is also the basis of the voice mode of the Opus codec, which has been predominantly used in VoIP solutions over the last decade.
As we enter a new decade, users can choose from several high-end connectivity alternatives such as high-speed broadband, optical fiber, and 5G. Yet large segments of our user base are still limited to low cable internet speeds or 3G/4G cellular networks. They encounter constrained network situations with over 50% packet loss and sporadic loss of coverage when moving between cell towers on a commute or when switching between network types. Network availability becomes unpredictable even when sharing internet at home with family members who stream video, game, work remotely, and attend online schooling. Meanwhile, user expectations and essential needs, especially in the pandemic, sometimes outpace the improvements in network connectivity. We need to communicate and collaborate on the go – on every device, every network, and in every environment. Thus, efficient utilization of the available bitrate is every bit as important today as it was in the dial-up world. Bitrate savings can be used to provide additional resiliency and/or improve experiences in other workloads like video and content sharing. We have considered these aspects to holistically address the challenges and deliver a virtual voice experience that is as good as talking in person, even in ultra-low-bandwidth and highly constrained network conditions.
Today we share details on our new AI-powered audio codec, Satin, which can deliver super wideband speech starting at a bitrate of 6 kbps and full-band stereo music starting at a bitrate of 17 kbps, with progressively higher quality at higher bitrates. Satin has been designed to provide great audio quality even under high packet loss. Here is the net effect of our improved resiliency algorithms and the new Satin codec (use your favorite headset to hear the audio files):
Silk at 6 kbps, burst packet loss:
Satin at 6 kbps with improved resilience, burst packet loss:
We have built this codec with multiple decades of algorithmic experience combined with advanced machine learning techniques and in this blog we provide a deeper look at getting this codec ready for our users.
What’s narrowband, wideband, and super wideband voice? Our ears can generally perceive sounds ranging in frequency from 20 Hz to 20 kHz. When dealing with discrete-time signals, we need to sample the audio waveform at a minimum of twice the highest frequency we wish to reproduce. This is why CD-quality music is sampled at 44.1 kHz (44,100 samples per second) or 48 kHz. Early telephony systems used a sampling rate of 8 kHz and could reproduce frequencies up to 4 kHz (in practice up to 3.4 kHz), which was considered sufficient at the time for speech communication. While a lower sampling rate means fewer bits per second to transmit over the wire, it resulted in the all-too-familiar tinny voice quality over the phone, since the higher vocal frequencies present in natural speech could not be reproduced. VoIP solutions, which were no longer limited by the narrowband telephony infrastructure, introduced us to the magic of wideband speech (reproducing up to 8 kHz, sampled at 16 kHz), and users were immediately able to appreciate the crisper, more natural, and more intelligible sound.
Codecs such as Silk and Opus (the default audio codec in WebRTC) took this a step further with the introduction of super wideband voice, capturing frequencies up to 12 kHz, sampled at 24 kHz (energy drops off rapidly at frequencies above 12 kHz for human voice). As mentioned earlier, higher sampling rates imply a higher bitrate. Satin re-defines super wideband to cover frequencies up to 16 kHz (sampled at 32 kHz) for greater clarity and sibilance, and its efficient compression enables super wideband voice at 6 kbps.
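The sampling-rate arithmetic in the two paragraphs above can be sketched in a few lines. The band names and rates come from the article; the function itself is just the Nyquist relationship (maximum reproducible frequency equals half the sampling rate).

```python
def max_reproducible_frequency_hz(sampling_rate_hz):
    """Highest frequency a signal sampled at this rate can represent (Nyquist)."""
    return sampling_rate_hz / 2

# Sampling rates mentioned in the article, from telephony to full band.
bands = [
    ("narrowband telephony", 8_000),
    ("wideband (Silk)", 16_000),
    ("super wideband (Silk/Opus)", 24_000),
    ("super wideband (Satin)", 32_000),
    ("full band (48 kHz)", 48_000),
]

for name, rate in bands:
    limit_khz = max_reproducible_frequency_hz(rate) / 1000
    print(f"{name}: sampled at {rate} Hz -> frequencies up to {limit_khz:g} kHz")
```

Running this reproduces the figures in the text, e.g. an 8 kHz sampling rate caps telephony speech at 4 kHz, while Satin’s 32 kHz sampling covers frequencies up to 16 kHz.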
Frequency components of the sound /t/ in the word “suit.” There is a significant amount of energy well beyond the narrowband cut-off of 4kHz and even the wideband cutoff of 8 kHz. Preserving energy in the higher spectral components results in more natural sounding speech.
Listen to the two samples below in your favorite headphones. The Satin super wideband speech sample sounds a lot more natural and intelligible, much like what you will hear when you are talking to someone in person.
Silk narrowband at 6 kbps:
Satin super wideband at 6 kbps:
How do you get super wideband at 6 kbps? To achieve super wideband quality at 6 kbps, Satin uses a deep understanding of speech production, modelling and psychoacoustics to extract and encode a sparse representation of the signal. To further reduce the required bitrate, Satin only encodes and transmits certain parameters in the lower frequency bands. At the decoder, Satin uses deep neural networks to estimate the high band parameters from the received low band parameters, and a minimal amount of side information sent over the wire. This approach solved the primary challenge of reproducing super wideband voice at ultra-low bitrates but introduced a new challenge of computational complexity. The analysis of the input speech signal to extract a low dimensional representation is computationally intensive. Real-time inference on deep neural networks adds to the complexity. The team then focused on reducing the complexity through both algorithmic optimizations as well as techniques such as loop vectorization beyond what the compiler could achieve. This resulted in close to a 40% reduction in computational complexity and allowed us to run on all our users’ devices.
As with all features, we A/B tested Satin before rolling it out widely – both to ensure there were no regressions and to quantify the positive impact for our users. The A/B tests showed a statistically significant increase in call duration for Satin compared to Silk at these low bitrates. Offline crowdsourced subjective tests to evaluate codec quality at 6 kbps rated Satin 1.7 MOS (mean opinion score) higher than Silk.
How resilient is Satin to packet loss? The majority of our calls are on Wi-Fi and mobile networks, where packet loss is common and can adversely affect call quality. Satin is uniquely positioned to compensate for packet loss. Unlike most other voice codecs, Satin encodes each packet independently, so losing one packet does not affect the quality of subsequent packets. The codec is also designed to facilitate high-quality packet loss concealment in an internal parametric domain. These features help Satin gracefully handle random losses where one or two packets are lost at a time.
Another type of packet loss, which is even more detrimental to perceived quality, is where several packets are lost in a burst. Here, Satin’s ability to deliver great audio at a low rate of 6 kbps provides the flexibility to use some of the available bitrate for adding redundancy and forward error correction that helps us recover from burst packet loss. Satin allows us to do this without having to compromise overall audio quality.
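A back-of-envelope sketch of the trade-off just described: if the base voice stream needs only 6 kbps, the rest of the available bitrate can carry redundant copies of earlier packets to recover from bursts. The 6 and 36 kbps figures come from the article; the idea of spending the leftover budget entirely on full redundant copies is a simplification for illustration.

```python
def redundant_copies(available_kbps, base_kbps):
    """How many full redundant copies of the base stream fit in the leftover bitrate."""
    return int((available_kbps - base_kbps) // base_kbps)

# At the top of Satin's current 6-36 kbps range, a 6 kbps base stream
# leaves room for several redundant copies of recent packets.
print(redundant_copies(36, 6))  # -> 5
```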
Satin is already used for all Teams and Skype two-party calls, and we are rolling it out for meetings soon. Satin currently operates in wideband voice mode within a bitrate range of 6–36 kbps and will soon be extended to support full-band stereo music at a maximum sampling rate of 48 kHz. We are very excited for you to try this new codec; let us know what you think.
Overview
Containerd is the default container runtime in AKS clusters on Kubernetes version 1.19 onwards. With containerd-based nodes and node pools, the kubelet talks directly to containerd via the CRI (container runtime interface) plugin instead of going through the dockershim, removing extra hops compared to the Docker CRI implementation. As a result, you’ll see better pod startup latency and lower resource (CPU and memory) usage.
This change prevents containers from accessing the Docker engine, /var/run/docker.sock, or using Docker-in-Docker (DinD).
Docker-in-Docker is a common technique for building Docker images with Azure DevOps pipelines running on self-hosted agents. With containerd, pipelines that build Docker images this way no longer work, and we need to consider other techniques. This article outlines the steps to modify such pipelines to perform image builds on containerd-enabled Kubernetes clusters.
Azure VM scale set agents are an option for scaling self-hosted agents outside Kubernetes. To continue running the agents on Kubernetes, we will look at two options: one performs image builds outside the cluster using ACR Tasks, and the other uses the kaniko executor image, which builds an image from a Dockerfile and pushes it to a registry.
For the ACR Tasks option, modify the existing pipelines or create a new pipeline to add an Azure CLI task running the command below.
az acr build --registry <<registryName>> --image <<imageName:tagName>> .
The command will:
Run in the current workspace
Package the code and upload it to a temporary volume attached to ACR Tasks
Build the container image
Push the container image to the registry
The pipeline should look as illustrated below:
Though this approach is simple, it has a dependency on ACR. The next option performs in-cluster builds and does not require ACR.
Building images using Kaniko
To build images with Kaniko, we need a build context and an executor instance to perform the build and push to the registry. Unlike the Docker-in-Docker scenario, Kaniko builds are executed in a separate pod. We will use Azure Storage to exchange the context (the source code to build) between the agent and the kaniko executor. Below are the steps in the pipeline.
Package the build context as a tar file
Upload the tar file to Azure Storage
Create a pod deployment to execute the build
Wait for the Pod completion to continue
The script to perform the build is as below:
# package the source code
tar -czvf /azp/agent/_work/$(Build.BuildId).tar.gz .
#Upload the tar file to Azure Storage
az storage blob upload --account-name codelesslab --account-key $SKEY --container-name kaniko --file /azp/agent/_work/$(Build.BuildId).tar.gz --name $(Build.BuildId).tar.gz
#Create a deployment yaml to create the Kaniko Pod
cat > deploy.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-$(Build.BuildId)
  namespace: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--dockerfile=Dockerfile"
    - "--context=https://<<storageAccountName>>.blob.core.windows.net/<<blobContainerName>>/$(Build.BuildId).tar.gz"
    - "--destination=<<registryName>>/<<imageName>>:k$(Build.BuildId)"
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
    env:
    - name: AZURE_STORAGE_ACCESS_KEY
      value: $SKEY
  restartPolicy: Never
  volumes:
  - name: docker-config
    configMap:
      name: docker-config
EOF
The storage access key can be added as an encrypted pipeline variable. Since the encrypted variables are not passed on to the tasks directly, we need to map them to an environment variable.
As the build is executed outside the pipeline, we need to monitor the status of the pod to decide on the next steps within the pipeline. Below is a sample bash script to monitor the pod:
# Monitor for success or failure
while true; do
  PHASE=$(kubectl get pod kaniko-$(Build.BuildId) -n kaniko -o jsonpath='{..status.phase}')
  if [ "$PHASE" == "Succeeded" ]; then
    break
  elif [ "$PHASE" == "Failed" ]; then
    # Exit the script with an error if the build failed
    exit 1
  fi
  echo "waiting for pod" && sleep 1
done
The complete pipeline should look similar to below:
Task 1: [Optional] Get the kubeconfig (if not supplied through secrets)
Task 2: [Optional] Install the latest kubectl (if not installed with the agent image)
Task 3: Package Context and Prepare YAML
Note how the pipeline variable is mapped to the Task Environment variable
Task 4: Create the Executor Pod
Note: Alternatively, the kubectl apply -f deploy.yaml command can be included in the previous script.
Task 5: Monitor for Status
Summary
These build techniques are more secure than the Docker-in-Docker scenario, as no special permissions, privileges, or mounts are required to perform a container image build.
TLDR; Azure Space Mystery is an interactive experience teaching you about space, women scientists, and how you can interact with LEARN to solve mysteries in the game. Blast off for the Azure Space Mystery
For Educators and Students:
This experience points to LEARN (aka.ms/learn) content on Neural Networks, APIs, and getting started with C# in an interactive environment (no IDE needed)
Location: 400 km above Earth, traveling at 27,600 km/h. The space crew is in a good mood. They are looking forward to today’s return to Earth. We have made groundbreaking discoveries that will change the way we understand the …
Suddenly, a voice emanates from your communication console. “Captain, we have received an SOS message from the International Space Station. Their solar array wing was knocked off by debris. They are quickly running out of power! They need our help to collect the four missing pieces and deliver them back to the ISS as soon as possible.”
When that SOS call comes from the International Space Station, you know that you’re the person for the job!
Push the buttons and start your adventure! Will you embark on either the Rosetta, SOHO, Magnet, or Cluster Missions?
Azure Advocates and Community Advocacy PMs are excited to offer our third mystery experience, the Azure Space Mystery! Following on the Azure Mystery Mansion and the Azure Maya Mystery, this adventure sends you on missions in space to collect the four missing pieces of the wing. Store each piece in your space ship’s Collection Bay and find your way to the ISS to save the day.
During your mission, you will have to solve code challenges and unlock elements of the ship to collect the items. Can you figure out the circumference of the spool around which you must wrap the missing wire? Can you find the keyword on Microsoft Learn to unlock the door so you can complete your space walk? What if you fly into the tail of a comet?
Curious how we built this game? It uses the same architecture as the Azure Maya Mystery: TailwindCSS, VuePress, and an Azure Static Web App.
Every great explorer finds helpers along the way, and the Space Mystery is no different. You will be helped at strategic moments by four famous women who, in history, helped advance scientific inquiry. You’ll have to play the game to discover who they are, but get ready to meet a mathematician, a pilot, a scientist, and an astronomer whose work spanned 14 centuries.
You’ll not only meet these inspiring scientific women in the game, but you can also meet them in Minecraft!
The connection between the Space Mystery game and Minecraft is made by your acquisition of a final badge, available when you complete your missions. Collect your Space Learner badger badge and use it in the MyMetaverse Minecraft server.
In the server, this badger token will give you exclusive access to a Heroes Hangout dedicated to Azure Heroes users. The server is accessible in Minecraft Java edition at mc.mymetaverse.io. To use the badge in the server, link an Enjin Wallet, which is the app where Azure Heroes tokens are stored.
The Azure Space Mystery was brought to you in honor of the 6th International Day for Women and Girls in Science. This day was founded by the “Space Princess”: H.R.H. Princess Dr. Nisreen El-Hashemite. It is our hope that our game will teach a little about other famous and impactful women in Science.
But wait, there’s more! Download free wallpapers of our bespoke cosmic art for your video call backgrounds!
We would like to acknowledge the folks who contributed to the content of the game: Marc Duiker, who did the pixel art, and Dr. Mark Looper, who provided space-focused technical expertise. Many thanks to the ‘mystery team’ of Cloud Advocates, Chris Noring and myself (Jen), and to our mysterious PM team, in particular Lucie Simeckova, Floor Drees, Adam Jackson, Eva Amezua de Casado, Jan Schenk, Adi Stein Ben-Nun, and Cynthia Zanoni, for overseeing production.
Initial Update: Monday, 15 February 2021 13:10 UTC
We are aware of issues within the Azure Monitor Service and are actively investigating. Some customers may see errors, but data access, ingestion, and alerting are not impacted. The errors are caused by some config changes and do not impact any scenario. We are working to fix the cause of the errors and will provide an update in 2 hours.
Work Around: None.
Next Update: Before 02/15 17:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Mohini
Overview
Service Bus Explorer is an open-source tool, created with Microsoft-supported .NET SDKs and available on any computer with the .NET Framework. Service Bus Explorer allows users to connect to a messaging namespace (Service Bus, Event Hub, Notification Hub, and Relay) and administer their messaging entities through a GUI. The tool provides advanced features like import/export functionality and the ability to test topics, queues, subscriptions, relay services, notification hubs, and event hubs.
Download the latest version from the link below. It is standalone, so no installation is required; just unzip and run the ServiceBusExplorer.exe executable from the folder.
Go to the corresponding Service Bus or Event Hub namespace in the Azure Portal. Click on Settings-> “Shared Access Policies” and click on RootManageSharedAccessKey. Copy the Primary Connection String and keep this value handy. NOTE: You need to use a connection string with Manage Claims with Service Bus Explorer
Start Service Bus Explorer, go to “File”, click “Connect”, and in the popup window choose “Enter connection string…” from the Service Bus Namespace dropdown. Then, in the Connection Settings pane, under Connection String, enter the Primary Connection String from the previous step and click “OK.”
Service Bus – Sending messages to Queues or Topics
To send a message to a Queue or a Topic, right click on the queue or topic name on the Service Bus Explorer navigation pane and select “Send Messages”.
In the “Send Messages” popup dialog box, enter the message in the text box, select the message format and click “Start” to send the message:
** You can specify message properties in the “Senders” tab and attach files to the message from the “Files” tab.
Service Bus – Receiving messages from Queues or Topic Subscriptions
1. To peek or receive a message from a queue or a topic subscription, right click on the queue or subscription name on the Service Bus Explorer navigation pane and select “Receive Messages”.
2. In the popup dialog box, it is possible to peek or receive a configurable number of messages from a queue or topic subscription:
3. When you click the Ok button in the Retrieve messages from queue (or topic subscription) dialog, messages are retrieved and shown in the following tab:
Service Bus – Check the deadletter reason for specific messages
1. Once you have successfully connected to your Service Bus namespace, select one of your queues as shown below. You can also do this for a topic subscription.
2. Just like receiving active messages as shown in the previous section, you can choose to peek or receive deadlettered messages in the popup dialog box. Select “Peek” to peek some messages from the deadletter queue and click “Ok”.
3. You can then review the deadletter reason for each individual message as shown below
Event Hub – Sending event messages to an Event Hub
1. To send an event message to an event hub, right click on the event hub name on the Service Bus Explorer navigation pane and select “Send Events”.
2. In the “Send Events” popup dialog box, enter the message in the text box, select the message format and click “Start” to send the message:
** You can specify message properties in the “Sender” tab, and attach files to the message from “Files” tab.
Event Hub – View messages through a consumer group
Once you have successfully connected to your Event Hub namespace, drill down to one of your consumer groups, right click on it, and select “Create Consumer Group Listener.” NOTE: If your Event Hub is actively being read from, you will want to use a different consumer group from the one(s) actively being used
In the listener dialog popup, click “Start” to start the consumer group listener:
NOTE: You can specify the “Starting Date Time UTC” to selectively fetch event data enqueued after this point, and check “Verbose” option to enable verbose logs.
Check the log at the bottom of the screen to see the message-receiving logs. You can also select the Events tab and check the message body to see the actual events that are in your Event Hub.
Today, I saw this error message when our customer was trying to obtain the properties of a database. This issue sometimes happens when SQL Server Management Studio tries to obtain the date of the last backup taken in Azure SQL Managed Instance.
For example, using SQL Server Profiler or the Extended Events profiler, I was able to find the T-SQL that SQL Server Management Studio runs, obtaining the same error message.
exec sp_executesql N'
create table #tempbackup (database_name nvarchar(128), [type] char(1), backup_finish_date datetime)
insert into #tempbackup select database_name, [type], max(backup_finish_date) from msdb..backupset where [type] = ''D'' or [type] = ''L'' or [type]=''I'' group by database_name, [type]
SELECT
(select backup_finish_date from #tempbackup where type = @_msparam_0 and db_id(database_name) = dtb.database_id) AS [LastBackupDate]
FROM
master.sys.databases AS dtb
WHERE
(dtb.name=@_msparam_1)
drop table #tempbackup
',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)',@_msparam_0=N'D',@_msparam_1=N'database name'
In this situation, as the backup system of Azure SQL Managed Instance is different from other SQL Server deployments, in order to fix this issue you basically need to run the following command:
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a heap-based buffer overflow vulnerability. An unauthenticated attacker could leverage this vulnerability to achieve arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a Use After Free vulnerability. An unauthenticated attacker could leverage this vulnerability to achieve arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by an Integer Overflow vulnerability. An unauthenticated attacker could leverage this vulnerability to achieve arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a Path Traversal vulnerability. An unauthenticated attacker could leverage this vulnerability to achieve arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by an Out-of-bounds Write vulnerability when parsing a crafted jpeg file. An unauthenticated attacker could leverage this vulnerability to achieve arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a use-after-free vulnerability. An unauthenticated attacker could leverage this vulnerability to achieve arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by an Out-of-bounds Read vulnerability. An unauthenticated attacker could leverage this vulnerability to locally elevate privileges in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Pro DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a Use-after-free vulnerability when parsing a specially crafted PDF file. An unauthenticated attacker could leverage this vulnerability to disclose sensitive information in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Adobe Acrobat Pro DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by an improper input validation vulnerability. An unauthenticated attacker could leverage this vulnerability to disclose sensitive information in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a null pointer dereference vulnerability when parsing a specially crafted PDF file. An unauthenticated attacker could leverage this vulnerability to achieve denial of service in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by a memory corruption vulnerability. An unauthenticated attacker could leverage this vulnerability to cause an application denial-of-service. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
Acrobat Reader DC versions 2020.013.20074 (and earlier), 2020.001.30018 (and earlier) and 2017.011.30188 (and earlier) are affected by an Out-of-bounds Read vulnerability. An unauthenticated attacker could leverage this vulnerability to locally escalate privileges in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
A cross-site scripting vulnerability is present in the web-based administration console on the message.jsp page of Apache ActiveMQ versions 5.15.12 through 5.16.0.
Apostrophe Technologies sanitize-html before 2.3.1 does not properly handle internationalized domain names (IDNs), which could allow an attacker to bypass the hostname whitelist validation set by the “allowedIframeHostnames” option.
Apostrophe Technologies sanitize-html before 2.3.2 does not properly validate the hostnames set by the “allowedIframeHostnames” option when “allowIframeRelativeUrls” is set to true, which allows attackers to bypass the hostname whitelist for the iframe element by using a src value that starts with “/example.com”.
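The underlying hazard is that URLs which "look relative" because they begin with a slash can still carry an attacker-controlled host. As a hedged illustration (the check below is an assumption for demonstration, not sanitize-html's actual code; browsers additionally treat a backslash after the leading slash like a second slash, which naive string checks miss), Python's urlparse shows the protocol-relative case:

```python
from urllib.parse import urlparse

# A value that "looks relative" because it starts with a slash…
u = urlparse("//attacker.example/payload")

# …is actually protocol-relative: a browser resolves it against the
# current scheme and navigates to attacker.example, not the same origin.
print(u.netloc)  # attacker.example
print(u.path)    # /payload
```

Any allowlist that treats every leading-slash value as "relative, therefore safe" is bypassable by exactly this class of input.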
Open redirect vulnerability in b2evolution CMS versions prior to 6.11.6 allows an attacker to perform malicious open redirects to an attacker-controlled resource via the redirect_to parameter in email_passthrough.php.
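A minimal sketch of the redirect-target validation that blocks this class of open redirect (the function name and allowlist are illustrative assumptions, not b2evolution's actual fix):

```python
from urllib.parse import urlparse

def safe_redirect(target, allowed_hosts):
    """Accept only same-site relative paths or explicitly allowed hosts."""
    u = urlparse(target)
    # Plain relative path: no scheme, no host, single leading slash
    # (a double slash would be protocol-relative and attacker-controlled).
    if u.scheme == "" and u.netloc == "" and target.startswith("/") and not target.startswith("//"):
        return True
    return u.hostname in allowed_hosts

print(safe_redirect("/home", set()))                                 # True
print(safe_redirect("//evil.example/", set()))                       # False
print(safe_redirect("https://evil.example/x", {"shop.example"}))     # False
```

The key design choice is to deny by default and allow only two narrow shapes of input, rather than trying to enumerate malicious patterns.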
Reflected cross-site scripting (XSS) vulnerability in the evoadm.php file in b2evolution CMS version 6.11.6-stable allows remote attackers to inject arbitrary web script or HTML code via the tab3 parameter.
CarrierWave is an open-source RubyGem which provides a simple and flexible way to upload files from Ruby applications. In CarrierWave before versions 1.3.2 and 2.1.1, the download feature has an SSRF vulnerability, allowing attackers to provide DNS entries or IP addresses that are intended for internal use and to gather information about the intranet infrastructure of the platform. This is fixed in versions 1.3.2 and 2.1.1.
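The standard mitigation for download-feature SSRF is to resolve the requested hostname and reject any address in internal ranges. A hedged sketch of that check (function names are assumptions; CarrierWave's actual Ruby fix differs):

```python
import ipaddress
import socket

def ip_is_internal(ip_str):
    """True if the address belongs to a range not meant to be reached from outside."""
    ip = ipaddress.ip_address(ip_str)
    return ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved

def host_is_internal(hostname):
    """Resolve and check every returned address; fail closed on resolution errors."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return True
    return any(ip_is_internal(info[4][0]) for info in infos)

print(ip_is_internal("10.1.2.3"))      # True  (RFC 1918)
print(ip_is_internal("93.184.216.34")) # False (public)
```

Note that checking the hostname string alone is insufficient: an attacker-controlled DNS entry can point a harmless-looking name at 127.0.0.1, so the check must run on the resolved addresses.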
The mg_http_serve_file function in Cesanta Mongoose HTTP server 7.0 is vulnerable to a remote OOB write attack via a connection request after exhausting the memory pool.
The mg_tls_init function in Cesanta Mongoose HTTPS server 7.0 and 6.7-6.18 (compiled with mbedTLS support) is vulnerable to a remote OOB write attack via a connection request after exhausting the memory pool.
The mg_tls_init function in Cesanta Mongoose HTTPS server 7.0 (compiled with OpenSSL support) is vulnerable to a remote OOB write attack via a connection request after exhausting the memory pool.
Cosmos Network Ethermint <= v0.4.0 is affected by cache lifecycle inconsistency in the EVM module. Due to the inconsistency between the Storage caching cycle and the Tx processing cycle, Storage changes caused by a failed transaction are improperly reserved in memory. Although the bad storage cache data will be discarded at EndBlock, it is still valid in the current block, which enables many possible attacks, such as an “arbitrary mint token” attack.
Cosmos Network Ethermint <= v0.4.0 is affected by a cross-chain transaction replay vulnerability in the EVM module. Since Ethermint uses the same chainIDEpoch and signature schemes as Ethereum for compatibility, a signature verified on Ethereum is still valid in Ethermint for the same msg content and chainIDEpoch, which enables a “cross-chain transaction replay” attack.
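The standard defense against cross-chain replay is domain separation: bind a unique chain identifier into the signed digest (as Ethereum does with EIP-155), so a signature produced for one chain cannot verify on another. A hedged sketch of the idea (the hash construction below is purely illustrative, not Ethermint's actual signing scheme):

```python
import hashlib

def signing_digest(chain_id: int, tx_bytes: bytes) -> str:
    """Bind the chain ID into the digest so signatures are chain-specific."""
    return hashlib.sha256(chain_id.to_bytes(8, "big") + tx_bytes).hexdigest()

# Identical transaction bytes yield different digests on different chains,
# so a signature over one digest is useless on the other chain.
print(signing_digest(1, b"tx") == signing_digest(2, b"tx"))  # False
```

The vulnerability arises precisely because the two chains share the same chain identifier and signature scheme, collapsing this separation.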
Cosmos Network Ethermint <= v0.4.0 is affected by a transaction replay vulnerability in the EVM module. If the victim sends a very large nonce transaction, the attacker can replay the transaction through the application.
Cosmos Network Ethermint <= v0.4.0 is affected by cache lifecycle inconsistency in the EVM module. The bytecode set in a FAILED transaction wrongfully remains in memory (stateObject.code) and is further written to the persistent store at the EndBlock stage, which may be utilized to build honeypot contracts.
In the cryptography package before 3.3.2 for Python, certain sequences of update calls to symmetrically encrypt multi-GB values could result in an integer overflow and buffer overflow, as demonstrated by the Fernet class.
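The overflow class described here can be sketched by emulating 32-bit signed arithmetic in Python (the counter and helper names are assumptions for illustration; see the project's advisory for the actual fix shipped in 3.3.2):

```python
def to_int32(n: int) -> int:
    """Emulate C signed 32-bit wraparound."""
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

# A hypothetical byte counter accumulated across update() calls
# on multi-GB input crosses 2 GiB and wraps negative, which can
# translate into an undersized buffer and an overflow on write.
processed = 2**31  # 2 GiB
print(to_int32(processed))  # -2147483648
```

This is why "very large input" bugs often only surface at the 2 GiB / 4 GiB boundaries: nothing is wrong until an internal length counter silently wraps.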
Dell EMC PowerScale OneFS versions 8.2.0 – 9.1.0 contain a privilege escalation vulnerability. A non-admin user with either ISI_PRIV_LOGIN_CONSOLE or ISI_PRIV_LOGIN_SSH may potentially exploit this vulnerability to read arbitrary data, tamper with system software or deny service to users. Note: no non-admin users or roles have these privileges by default.
Dell EMC PowerScale OneFS versions 8.1.2 – 9.1.0 contain an issue where the OneFS SMB directory auto-create may erroneously create a directory for a user. A remote unauthenticated attacker may take advantage of this issue to slow down the system.
Dell EMC PowerScale OneFS versions 8.1.2 and 8.2.2 contain an Incorrect Permission Assignment for a Critical Resource vulnerability. This may allow a non-admin user with either ISI_PRIV_LOGIN_CONSOLE or ISI_PRIV_LOGIN_SSH privileges to exploit the vulnerability, leading to compromised cryptographic operations. Note: no non-admin users or roles have these privileges by default.
Dell EMC PowerScale OneFS versions 8.1.0 – 9.1.0 contain a privilege escalation vulnerability. A user with ISI_PRIV_JOB_ENGINE may use the PermissionRepair job to grant themselves the highest level of RBAC privileges, thus being able to read arbitrary data, tamper with system software, or deny service to users.
Cross-site request forgery (CSRF) vulnerability in ELECOM WRC-300FEBK-A allows remote attackers to hijack the authentication of administrators and execute an arbitrary request via an unspecified vector. As a result, the device settings may be altered and/or the telnet daemon may be started.
Cross-site request forgery (CSRF) vulnerability in ELECOM WRC-300FEBK-S allows remote attackers to hijack the authentication of administrators and execute an arbitrary request via an unspecified vector. As a result, the device settings may be altered and/or the telnet daemon may be started.
ELECOM WRC-300FEBK-S contains an improper certificate validation vulnerability. Via a man-in-the-middle attack, an attacker may alter the communication response. As a result, an arbitrary OS command may be executed on the affected device.
In Electric Coin Company Zcashd before 2.1.1-1, the time offset between messages could be leveraged to obtain sensitive information about the relationship between a suspected victim’s address and an IP address, aka a timing side channel.
Electric Coin Company Zcashd before 2.1.1-1 allows attackers to trigger consensus failure and double spending. A valid chain could be incorrectly rejected because timestamp requirements on block headers were not properly enforced.
An issue was discovered in Epikur before 20.1.1. A Glassfish 4.1 server with a default configuration is running on TCP port 4848. No password is required to access it with the administrator account.
A flaw was found in the default configuration of dnsmasq, as shipped with Fedora versions prior to 31 and in all versions of Red Hat Enterprise Linux, where it listens on any interface and accepts queries from addresses outside of its local subnet. In particular, the option `local-service` is not enabled. Running dnsmasq in this manner may inadvertently make it an open resolver accessible from any address on the internet. This flaw allows an attacker to conduct a Distributed Denial of Service (DDoS) against other systems.
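A minimal dnsmasq.conf sketch of the hardening the entry describes, restricting answers to directly attached subnets (the interface name is an assumption; `local-service`, `interface` and `bind-interfaces` are documented dnsmasq options):

```
# /etc/dnsmasq.conf
# Only answer DNS queries from subnets a local interface is directly
# attached to, instead of acting as an open resolver.
local-service
# Optionally bind only to the intended LAN interface (name assumed).
interface=eth0
bind-interfaces
```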
An issue was discovered on FiberHome HG6245D devices through RP2613. By default, there are no firewall rules for IPv6 connectivity, exposing the internal management interfaces to the Internet.
An issue was discovered on FiberHome HG6245D devices through RP2613. A hardcoded GEPON password for root is defined inside /etc/init.d/system-config.sh.
An issue was discovered on FiberHome HG6245D devices through RP2613. There is a telnet?enable=0&key=calculated(BR0_MAC) backdoor API, without authentication, provided by the HTTP server. This will remove firewall rules and allow an attacker to reach the telnet server (used for the CLI).
An issue was discovered on FiberHome HG6245D devices through RP2613. There is a password of four hexadecimal characters for the admin account. These characters are generated in init_3bb_password in libci_adaptation_layer.so.
An issue was discovered on FiberHome HG6245D devices through RP2613. The web daemon contains the hardcoded f~i!b@e#r$h%o^m*esuperadmin / s(f)u_h+g|u credentials for an ISP.
An issue was discovered on FiberHome HG6245D devices through RP2613. The web management is done over HTTPS, using a hardcoded private key that has 0777 permissions.
An issue was discovered on FiberHome HG6245D devices through RP2613. Credentials in /fhconf/umconfig.txt are obfuscated via XOR with the hardcoded *j7a(L#yZ98sSd5HfSgGjMj8;Ss;d)(*&^#@$a2s0i3g key. (The webs binary has details on how XOR is used.)
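Repeating-key XOR of this kind is obfuscation, not encryption: with the key published, recovery is one line. A hedged sketch (the key is the one quoted in the entry; the sample plaintext is invented):

```python
# Hardcoded key quoted from the advisory text
KEY = b"*j7a(L#yZ98sSd5HfSgGjMj8;Ss;d)(*&^#@$a2s0i3g"

def xor_with_key(data: bytes, key: bytes = KEY) -> bytes:
    """XOR each byte against the repeating key; XOR is its own inverse,
    so the same call both obfuscates and deobfuscates."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"admin:password123"  # illustrative sample, not from the device
assert xor_with_key(xor_with_key(secret)) == secret
```

Because the key ships identically on every device, any attacker who can read umconfig.txt can recover the stored credentials.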
An issue was discovered on FiberHome HG6245D devices through RP2613. It is possible to find passwords and authentication cookies stored in cleartext in the web.log HTTP logs.
An issue was discovered on FiberHome HG6245D devices through RP2613. It is possible to extract information from the device without authentication by disabling JavaScript and visiting /info.asp.
An issue was discovered on FiberHome HG6245D devices through RP2613. It is possible to crash the telnet daemon by sending a certain 0a 65 6e 61 62 6c 65 0a 02 0a 1a 0a string.
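Decoded, the crash string is mostly printable; a quick check of what those hex bytes spell out (purely illustrative):

```python
# The byte sequence quoted in the entry, as a hex string
payload = bytes.fromhex("0a656e61626c650a020a1a0a")

# b'\nenable\n\x02\n\x1a\n': the word "enable" framed by newlines
# and two control bytes (STX, SUB), which the daemon mishandles.
print(payload)
```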
An improper neutralization of input during web page generation in the FortiWeb GUI interface 6.3.0 through 6.3.7 and versions before 6.2.4 may allow an unauthenticated, remote attacker to perform a reflected cross-site scripting (XSS) attack by injecting a malicious payload into various vulnerable API endpoints.
In Foxit Reader 10.1.0.37527, a specially crafted PDF document can trigger reuse of previously freed memory, which can lead to arbitrary code execution. An attacker needs to trick the user into opening the malicious file to trigger this vulnerability. If the browser plugin extension is enabled, visiting a malicious site can also trigger the vulnerability.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of NEF files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11192.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the processing of NEF files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11334.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of CR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a memory corruption condition. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11230.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of EPS files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11259.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of EZI files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11247.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of ARW files. The issue results from the lack of proper validation of the length of user-supplied data prior to copying it to a heap-based buffer. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11196.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of NEF files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11488.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11434.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of SR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11433.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CMP files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11432.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of EZIX files. A crafted id in a channel element can trigger a write past the end of an allocated buffer. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11197.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11332.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of CMP files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11337.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of NEF files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11194.
This vulnerability allows remote attackers to execute arbitrary code on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a write past the end of an allocated structure. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-11333.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11358.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of CMP files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11336.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CMP files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11356.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of CR2 files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11335.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of ARW files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11357.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of EPS files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11195.
This vulnerability allows remote attackers to disclose sensitive information on affected installations of Foxit Studio Photo 3.6.6.922. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of NEF files. The issue results from the lack of proper validation of user-supplied data, which can result in a read past the end of an allocated structure. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the current process. Was ZDI-CAN-11193.
A denial-of-service vulnerability exists in the WS-Security plugin functionality of Genivia gSOAP 2.8.107. A specially crafted SOAP request can lead to denial of service. An attacker can send an HTTP request to trigger this vulnerability.
A denial-of-service vulnerability exists in the WS-Security plugin functionality of Genivia gSOAP 2.8.107. A specially crafted SOAP request can lead to denial of service. An attacker can send an HTTP request to trigger this vulnerability.
A denial-of-service vulnerability exists in the WS-Addressing plugin functionality of Genivia gSOAP 2.8.107. A specially crafted SOAP request can lead to denial of service. An attacker can send an HTTP request to trigger this vulnerability.
A denial-of-service vulnerability exists in the WS-Security plugin functionality of Genivia gSOAP 2.8.107. A specially crafted SOAP request can lead to denial of service. An attacker can send an HTTP request to trigger this vulnerability.
Stack buffer overflow vulnerability in gitea 1.9.0 through 1.13.1 allows remote attackers to cause a denial of service (crash) via vectors related to a file path.
An integer overflow issue exists in Godot Engine up to v3.2 that can be triggered when loading specially crafted .TGA image files. The vulnerability exists in the ImageLoaderTGA::load_image() function at the line: const size_t buffer_size = (tga_header.image_width * tga_header.image_height) * pixel_size; The bug leads to a dynamic stack buffer overflow. Depending on the context of the application, the attack vector can be local or remote, and can lead to code execution and/or a system crash.
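On a platform with a 32-bit size_t, the quoted multiplication can wrap. Emulating that arithmetic in Python (the image dimensions are an illustrative assumption, not taken from a real exploit):

```python
MASK32 = 0xFFFFFFFF

def alloc_size_32(width, height, pixel_size):
    """Emulate the 32-bit size_t product from ImageLoaderTGA::load_image()."""
    return (width * height * pixel_size) & MASK32

# A 65536 x 65536 RGBA header wraps the product (2**34) all the way to
# zero: a tiny buffer is allocated, then written with far more pixel data.
print(alloc_size_32(65536, 65536, 4))  # 0
```

The general fix is to validate each dimension against a sane maximum (or use checked/wide multiplication) before computing the allocation size.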
A stack overflow issue exists in Godot Engine up to v3.2, caused by improper boundary checks when loading .TGA image files. Depending on the context of the application, the attack vector can be local or remote, and can lead to code execution and/or a system crash.
In onCreate of BluetoothPermissionActivity.java, there is a possible permissions bypass due to a tapjacking overlay that obscures the phonebook permissions dialog when a Bluetooth device is connecting. This could lead to local escalation of privilege with User execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-8.1, Android-9, Android-10, Android-11. Android ID: A-168504491.
In SystemSettingsValidators, there is a possible permanent denial of service due to missing bounds checks on UI settings. This could lead to local denial of service with User execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-10, Android-11. Android ID: A-156260178.
In onCreate of NotificationAccessConfirmationActivity.java, there is a possible overlay attack due to an insecure default value. This could lead to local escalation of privilege and notification access with User execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-8.1, Android-9, Android-10, Android-11. Android ID: A-170731783.
In process of C2SoftHevcDec.cpp, there is a possible out of bounds write due to a use after free. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-11. Android ID: A-160346309.
In onCreate of UninstallerActivity, there is a possible way to uninstall an app without informed user consent due to a tapjacking/overlay attack. This could lead to local escalation of privilege with User execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-8.1, Android-9, Android-10, Android-11. Android ID: A-171221302.
In verifyHostName of OkHostnameVerifier.java, there is a possible way to accept a certificate for the wrong domain due to improperly used crypto. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-8.1, Android-9, Android-10, Android-11. Android ID: A-171980069.
Heap buffer overflow in Blink in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page.
Insufficient policy enforcement in extensions in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass content security policy via a crafted Chrome Extension.
Insufficient data validation in V8 in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to potentially perform out of bounds memory access via a crafted HTML page.
Use after free in Media in Google Chrome prior to 88.0.4324.96 allowed a remote attacker who had compromised the renderer process to potentially exploit heap corruption via a crafted HTML page.
Use after free in WebSQL in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page.
Use after free in Omnibox in Google Chrome on Linux prior to 88.0.4324.96 allowed a remote attacker to potentially perform a sandbox escape via a crafted HTML page.
Use after free in Blink in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page.
Potential use after free in Speech Recognizer in Google Chrome on Android prior to 88.0.4324.96 allowed a remote attacker to potentially perform a sandbox escape via a crafted HTML page.
Inappropriate implementation in DevTools in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to potentially perform a sandbox escape via a crafted Chrome Extension.
Use after free in WebRTC in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to potentially exploit heap corruption via a crafted SCTP packet.
Uninitialized use in USB in Google Chrome prior to 88.0.4324.96 allowed a local attacker to potentially perform out of bounds memory access via a USB device.
Heap buffer overflow in V8 in Google Chrome prior to 88.0.4324.150 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page.
Use after free in Payments in Google Chrome on Mac prior to 88.0.4324.146 allowed a remote attacker to potentially perform a sandbox escape via a crafted HTML page.
Heap buffer overflow in Extensions in Google Chrome prior to 88.0.4324.146 allowed an attacker who convinced a user to install a malicious extension to potentially exploit heap corruption via a crafted Chrome Extension.
Use after free in Navigation in Google Chrome prior to 88.0.4324.146 allowed a remote attacker who had compromised the renderer process to potentially perform a sandbox escape via a crafted HTML page.
Insufficient policy enforcement in File System API in Google Chrome on Windows prior to 88.0.4324.96 allowed a remote attacker to bypass filesystem restrictions via a crafted HTML page.
Heap buffer overflow in Tab Groups in Google Chrome prior to 88.0.4324.146 allowed an attacker who convinced a user to install a malicious extension to potentially exploit heap corruption via a crafted Chrome Extension.
Use after free in Fonts in Google Chrome prior to 88.0.4324.146 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page.
Insufficient policy enforcement in Cryptohome in Google Chrome prior to 88.0.4324.96 allowed a local attacker to perform OS-level privilege escalation via a crafted file.
Inappropriate implementation in Skia in Google Chrome prior to 88.0.4324.146 allowed a local attacker to spoof the contents of the Omnibox (URL bar) via a crafted HTML page.
Insufficient policy enforcement in File System API in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass file extension policy via a crafted HTML page.
Inappropriate implementation in iframe sandbox in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass navigation restrictions via a crafted HTML page.
Inappropriate implementation in DevTools in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to obtain potentially sensitive information from disk via a crafted HTML page.
Insufficient policy enforcement in WebView in Google Chrome on Android prior to 88.0.4324.96 allowed a remote attacker to leak cross-origin data via a crafted HTML page.
Inappropriate implementation in Performance API in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to leak cross-origin data via a crafted HTML page.
Incorrect security UI in Page Info in Google Chrome on iOS prior to 88.0.4324.96 allowed a remote attacker to spoof security UI via a crafted HTML page.
Insufficient policy enforcement in Downloads in Google Chrome prior to 88.0.4324.96 allowed an attacker who convinced a user to download files to bypass navigation restrictions via a crafted HTML page.
Insufficient policy enforcement in File System API in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass filesystem restrictions via a crafted HTML page.
Insufficient policy enforcement in File System API in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass filesystem restrictions via a crafted HTML page.
Insufficient policy enforcement in File System API in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass filesystem restrictions via a crafted HTML page.
Insufficient policy enforcement in extensions in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass site isolation via a crafted Chrome Extension.
Insufficient data validation in File System API in Google Chrome prior to 88.0.4324.96 allowed a remote attacker to bypass filesystem restrictions via a crafted HTML page.
A directory traversal issue was discovered in Gradle gradle-enterprise-test-distribution-agent before 1.3.2, test-distribution-gradle-plugin before 1.3.2, and gradle-enterprise-maven-extension before 1.8.2. A malicious actor (with certain credentials) can perform a registration step such that crafted TAR archives lead to extraction of files into arbitrary filesystem locations.
Helm is open-source software which is essentially “The Kubernetes Package Manager”. Helm is a tool for managing Charts. Charts are packages of pre-configured Kubernetes resources. In Helm from version 3.0 and before version 3.5.2, there are a few cases where data loaded from potentially untrusted sources was not properly sanitized. When a SemVer in the `version` field of a chart is invalid, in some cases Helm allows the string to be used “as is” without sanitizing. Helm fails to properly sanitize some fields present in Helm repository `index.yaml` files. Helm does not properly sanitize some fields in the `plugin.yaml` file for plugins. In some cases, Helm does not properly sanitize the fields in the `Chart.yaml` file. By exploiting these attack vectors, core maintainers were able to send deceptive information to a terminal screen running the `helm` command, as well as obscure or alter information on the screen. In some cases, we could send codes that terminals use to execute higher-order logic, like clearing a terminal screen. Further, during evaluation, the Helm maintainers discovered a few other fields that were not properly sanitized when read out of repository index files. This fix remedies all such cases, and once again enforces SemVer2 policies on version fields. All users of Helm 3 should upgrade to the fixed version 3.5.2 or later. Those who use Helm as a library should verify that they either sanitize this data on their own, or use the proper Helm API calls to sanitize the data.
httplib2 is a comprehensive HTTP client library for Python. In httplib2 before version 0.19.0, a malicious server which responds with a long series of “\xa0” characters in the “www-authenticate” header may cause a Denial of Service (CPU burn while parsing the header) in the httplib2 client accessing said server. This is fixed in version 0.19.0, which contains a new implementation of auth header parsing using the pyparsing library.
There is an insufficient integrity check vulnerability in Huawei Sound X Product. The system does not check certain software package’s integrity sufficiently. Successful exploit could allow an attacker to load a crafted software package to the device. Affected product versions include:AIS-BW80H-00 versions 9.0.3.1(H100SP13C00),9.0.3.1(H100SP18C00),9.0.3.1(H100SP3C00),9.0.3.1(H100SP9C00),9.0.3.2(H100SP1C00),9.0.3.2(H100SP2C00),9.0.3.2(H100SP5C00),9.0.3.2(H100SP8C00),9.0.3.3(H100SP1C00).
Some Huawei products have an inconsistent interpretation of HTTP requests vulnerability. Attackers can exploit this vulnerability to cause an information leak. Affected product versions include: CampusInsight versions V100R019C10; ManageOne versions 6.5.1.1, 6.5.1.SPC100, 6.5.1.SPC200, 6.5.1RC1, 6.5.1RC2, 8.0.RC2; Taurus-AL00A versions 10.0.0.1(C00E1R1P1).
There is a local privilege escalation vulnerability in some Huawei products. A local, authenticated attacker could craft specific commands to exploit this vulnerability. Successful exploitation may cause the attacker to obtain a higher privilege. Affected product versions include: ManageOne versions 6.5.0,6.5.0.SPC100.B210,6.5.1.1.B010,6.5.1.1.B020,6.5.1.1.B030,6.5.1.1.B040,6.5.1.SPC100.B050,6.5.1.SPC101.B010,6.5.1.SPC101.B040,6.5.1.SPC200,6.5.1.SPC200.B010,6.5.1.SPC200.B030,6.5.1.SPC200.B040,6.5.1.SPC200.B050,6.5.1.SPC200.B060,6.5.1.SPC200.B070,6.5.1RC1.B060,6.5.1RC2.B020,6.5.1RC2.B030,6.5.1RC2.B040,6.5.1RC2.B050,6.5.1RC2.B060,6.5.1RC2.B070,6.5.1RC2.B080,6.5.1RC2.B090,6.5.RC2.B050,8.0.0,8.0.0-LCND81,8.0.0.SPC100,8.0.1,8.0.RC2,8.0.RC3,8.0.RC3.B041,8.0.RC3.SPC100; NFV_FusionSphere versions 6.5.1.SPC23,8.0.0.SPC12; SMC2.0 versions V600R019C00,V600R019C10; iMaster MAE-M versions MAE-TOOL(FusionSphereBasicTemplate_Euler_X86)V100R020C10SPC220.
There is a logic vulnerability in the Huawei Gauss100 OLTP Product. An attacker with certain permissions could execute specific SQL statements to exploit this vulnerability. Due to insufficient security design, a successful exploit can cause service abnormality. Affected product versions include: ManageOne versions 6.5.1.1.B020, 6.5.1.1.B030, 6.5.1.1.B040, 6.5.1.SPC100.B050, 6.5.1.SPC101.B010, 6.5.1.SPC101.B040, 6.5.1.SPC200, 6.5.1.SPC200.B010, 6.5.1.SPC200.B030, 6.5.1.SPC200.B040, 6.5.1.SPC200.B050, 6.5.1.SPC200.B060, 6.5.1.SPC200.B070, 6.5.1RC1.B070, 6.5.1RC1.B080, 6.5.1RC2.B040, 6.5.1RC2.B050, 6.5.1RC2.B060, 6.5.1RC2.B070, 6.5.1RC2.B080, 6.5.1RC2.B090.
There is a CSV injection vulnerability in ManageOne 8.0.1. An attacker with common privileges may exploit this vulnerability through certain operations to inject CSV files. Due to insufficient input validation of some parameters, the attacker can exploit this vulnerability to inject CSV files into the target device.
Mate 30 10.0.0.203(C00E201R7P2) have a buffer overflow vulnerability. After obtaining the root permission, an attacker can exploit the vulnerability to cause buffer overflow.
There is a pointer double free vulnerability in Taurus-AL00A 10.0.0.1(C00E1R1P1). There is a lack of multi-thread protection when a function is called. Attackers can exploit this vulnerability by performing a malicious operation to cause a pointer double free. This may lead to a module crash, compromising normal service.
IBM Cloud Pak for Automation 20.0.3, 20.0.2-IF002 – Business Automation Application Designer Component stores potentially sensitive information in log files that could be obtained by an unauthorized user. IBM X-Force ID: 194966.
IBM Cloud Pak for Automation 20.0.3, 20.0.2-IF002 stores potentially sensitive information in clear text in API connection log files. This information could be obtained by a user with permissions to read log files. IBM X-Force ID: 194965.
ibm — security_identity_governance_and_intelligence
IBM Security Identity Governance and Intelligence 5.2.6 could disclose sensitive information to an unauthorized user using a specially crafted HTTP request. IBM X-Force ID: 189446.
IBM Security Identity Governance and Intelligence 5.2.6 does not invalidate session after logout which could allow a user to obtain sensitive information from another users’ session. IBM X-Force ID: 192912.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 could allow a user on the network to cause a denial of service due to an invalid cookie value that could prevent future logins. IBM X-Force ID: 196078.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 could allow a user to perform unauthorized activities due to improper encoding of output. IBM X-Force ID: 196183.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 discloses sensitive information in source code that could be used in further attacks against the system. IBM X-Force ID: 198185.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 could allow a remote attacker to obtain sensitive information, caused by the failure to properly enable HTTP Strict Transport Security. An attacker could exploit this vulnerability to obtain sensitive information using man in the middle techniques. IBM X-Force ID: 198188.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 is vulnerable to cross-site request forgery which could allow an attacker to execute malicious and unauthorized actions transmitted from a user that the website trusts.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 uses weaker than expected cryptographic algorithms that could allow an attacker to decrypt highly sensitive information. IBM X-Force ID: 198184.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 could allow a remote attacker to obtain sensitive information when a detailed technical error message is returned in the browser. This information could be used in further attacks against the system. IBM X-Force ID: 196076.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 contains hard-coded credentials, such as a password or cryptographic key, which it uses for its own inbound authentication, outbound communication to external components, or encryption of internal data. IBM X-Force ID: 198192.
IBM Security Verify Information Queue 1.0.6 and 1.0.7 could allow a user to impersonate another user on the system due to incorrectly updating the session identifier. IBM X-Force ID: 198191.
IBM Spectrum Protect Plus 10.1.0 through 10.1.7 could allow a remote user to inject arbitrary data which could cause the service to crash due to excess resource consumption. IBM X-Force ID: 193659.
IBM WebSphere Application Server 7.0, 8.0, 8.5, and 9.0 is vulnerable to an XML External Entity Injection (XXE) attack when processing XML data. A remote attacker could exploit this vulnerability to expose sensitive information or consume memory resources. IBM X-Force ID: 194882.
A Cross-Site Request Forgery (CSRF) issue in the NextGEN Gallery plugin before 3.5.0 for WordPress allows File Upload. (It is possible to bypass CSRF protection by simply not including a nonce parameter.)
A Cross-Site Request Forgery (CSRF) issue in the NextGEN Gallery plugin before 3.5.0 for WordPress allows File Upload and Local File Inclusion via settings modification, leading to Remote Code Execution and XSS. (It is possible to bypass CSRF protection by simply not including a nonce parameter.)
A flaw was found in ImageMagick in MagickCore/gem.c. An attacker who submits a crafted file that is processed by ImageMagick could trigger undefined behavior in the form of math division by zero. This would most likely lead to an impact to application availability, but could potentially cause other problems related to undefined behavior. This flaw affects ImageMagick versions prior to 7.0.10-56.
The AscRegistryFilter.sys kernel driver in IObit Advanced SystemCare 13.2 allows an unprivileged user to send an IOCTL to the device driver. If the user provides a NULL entry for the dwIoControlCode parameter, a kernel panic (aka BSOD) follows. The IOCTL codes can be found in the dispatch function: 0x8001E000, 0x8001E004, 0x8001E008, 0x8001E00C, 0x8001E010, 0x8001E014, 0x8001E020, 0x8001E024, 0x8001E040, 0x8001E044, and 0x8001E048. DosDevicesAscRegistryFilter and DeviceAscRegistryFilter are affected.
A second-order SQL injection issue in Widgets/TopDevicesController.php (aka the Top Devices dashboard widget) of LibreNMS before 21.1.0 allows remote authenticated attackers to execute arbitrary SQL commands via the sort_order parameter against the /ajax/form/widget-settings endpoint.
A local privilege escalation was discovered in the Linux kernel before 5.10.13. Multiple race conditions in the AF_VSOCK implementation are caused by wrong locking in net/vmw_vsock/af_vsock.c. The race conditions were implicitly introduced in the commits that added VSOCK multi-transport support.
Marked is an open-source markdown parser and compiler (npm package “marked”). In marked from version 1.1.1 and before version 2.0.0, there is a Regular expression Denial of Service vulnerability. This vulnerability can affect anyone who runs user generated code through marked. This vulnerability is fixed in version 2.0.0.
In Max Secure Max Spyware Detector 1.0.0.044, the driver file (MaxProc64.sys) allows local users to cause a denial of service (BSOD) or possibly have unspecified other impact because of not validating input values from IOCtl 0x2200019. (This also extends to the various other products from Max Secure that include MaxProc64.sys.)
A Null Pointer Dereference vulnerability in McAfee Endpoint Security (ENS) for Windows prior to 10.7.0 February 2021 Update allows a local administrator to cause Windows to crash via a specific system call which is not handled correctly. This varies by machine and had partial protection prior to this update.
Arbitrary Process Execution vulnerability in McAfee Total Protection (MTP) prior to 16.0.30 allows a local user to gain elevated privileges and execute arbitrary code bypassing MTP self-defense.
Cross Site Request Forgery vulnerability in Micro Focus Application Performance Management product, affecting versions 9.40, 9.50 and 9.51. The vulnerability could be exploited by attacker to trick the users into executing actions of the attacker’s choosing.
Millennium Millewin (also known as “Cartella clinica”) 13.39.028, 13.39.28.3342, and 13.39.146.1 has insecure folder permissions, allowing a malicious user to perform a local privilege escalation.
An issue was discovered in the ms3d crate before 0.1.3 for Rust. It might allow attackers to obtain sensitive information from uninitialized memory locations via IoReader::read.
Cross-site request forgery (CSRF) vulnerability in Name Directory 1.17.4 and earlier allows remote attackers to hijack the authentication of administrators via unspecified vectors.
NeDi 1.9C allows an authenticated user to inject PHP code in the System Files function on the endpoint /System-Files.php via the txt HTTP POST parameter. This allows an attacker to obtain access to the operating system where NeDi is installed and to all application data.
NeDi 1.9C allows an authenticated user to perform a SQL Injection in the Monitoring History function on the endpoint /Monitoring-History.php via the det HTTP GET parameter. This allows an attacker to access all the data in the database and obtain access to the NeDi application.
NeDi 1.9C allows an authenticated user to execute operating system commands in the Nodes Traffic function on the endpoint /Nodes-Traffic.php via the md or ag HTTP GET parameter. This allows an attacker to obtain access to the operating system where NeDi is installed and to all application data.
In nopCommerce 4.30, a Reflected XSS issue in the Discount Coupon component allows remote attackers to inject arbitrary web script or HTML through the Filters/CheckDiscountCouponAttribute.cs discountcode parameter.
An issue was discovered in October through build 471. It reactivates an old session ID (which had been invalid after a logout) once a new login occurs. NOTE: this violates the intended Auth/Manager.php authentication behavior but, admittedly, is only relevant if an old session ID is known to an attacker.
The Omron CX-One Version 4.60 and prior is vulnerable to a stack-based buffer overflow, which may allow an attacker to remotely execute arbitrary code.
The Omron CX-One Version 4.60 and prior may allow an attacker to supply a pointer to arbitrary memory locations, which may allow an attacker to remotely execute arbitrary code.
This vulnerability allows local attackers to execute arbitrary code due to the lack of proper validation of user-supplied data, which can result in a type-confusion condition in the Omron CX-One Version 4.60 and prior devices.
Opmantek Open-AudIT 4.0.1 is affected by cross-site scripting (XSS). When outputting SQL statements for debugging, a maliciously crafted query can trigger an XSS attack. This attack only succeeds if the user is already logged in to Open-AudIT before they click the malicious link.
Agents are able to see and link Config Items without permissions, which are defined in General Catalog. This issue affects: OTRS AG OTRSCIsInCustomerFrontend 7.0.x version 7.0.14 and prior versions.
Article Bcc fields and agent personal information are shown when customer prints the ticket (PDF) via external interface. This issue affects: OTRS AG OTRS 7.0.x version 7.0.23 and prior versions; 8.0.x version 8.0.10 and prior versions.
When dynamic templates are used (OTRSTicketForms), admin can use OTRS tags which are not masked properly and can reveal sensitive information. This issue affects: OTRS AG OTRSTicketForms 6.0.x version 6.0.40 and prior versions; 7.0.x version 7.0.29 and prior versions; 8.0.x version 8.0.3 and prior versions.
Multiple SQL Injection vulnerabilities in PHPSHE 1.7 in phpshe/admin.php via the (1) ad_id, (2) menu_id, and (3) cashout_id parameters, which could let a remote malicious user execute arbitrary code.
picoquic (before 3rd of July 2020) allows attackers to cause a denial of service (infinite loop) via a crafted QUIC frame, related to the picoquic_decode_frames and picoquic_decode_stream_frame functions and epoch==3.
An issue was discovered in Psyprax before 3.2.2. Passwords used to encrypt the data are stored in the database in an obfuscated format, which can be easily reverted. For example, the password AAAAAAAA is stored in the database as MMMMMMMM.
An issue was discovered in Psyprax before 3.2.2. The Firebird database is accessible with the default user sysdba and password masterke after installation. This allows any user to access it and read and modify the contents, including passwords. Local database files can be accessed directly as well.
A cross-site scripting (XSS) issue in the login panel in Redwood Report2Web 4.3.4.5 and 4.5.3 allows remote attackers to inject JavaScript via the signIn.do urll parameter.
A frame-injection issue in the online help in Redwood Report2Web 4.3.4.5 allows remote attackers to render an external resource inside a frame via the help/Online_Help/NetHelp/default.htm turl parameter.
Cscape (All versions prior to 9.90 SP3.5) lacks proper validation of user-supplied data when parsing project files. This could lead to an out-of-bounds read. An attacker could leverage this vulnerability to execute code in the context of the current process.
A vulnerability has been identified in JT2Go (All versions < V13.1.0.1), Teamcenter Visualization (All versions < V13.1.0.1). Affected applications lack proper validation of user-supplied data when parsing BMP files. This can result in a memory corruption condition. An attacker could leverage this vulnerability to execute code in the context of the current process. (ZDI-CAN-12018)
A vulnerability has been identified in JT2Go (All versions < V13.1.0.1), Teamcenter Visualization (All versions < V13.1.0.1). Affected applications lack proper validation of user-supplied data when parsing of PAR files. This could result in a stack based buffer overflow. An attacker could leverage this vulnerability to execute code in the context of the current process. (ZDI-CAN-12041)
A vulnerability has been identified in JT2Go (All versions < V13.1.0.1), Teamcenter Visualization (All versions < V13.1.0.1). Affected applications lack proper validation of user-supplied data when parsing TIFF files. This could lead to pointer dereferences of a value obtained from untrusted source. An attacker could leverage this vulnerability to execute code in the context of the current process. (ZDI-CAN-12158)
A vulnerability has been identified in JT2Go (All versions < V13.1.0.1), Teamcenter Visualization (All versions < V13.1.0.1). Affected applications lack proper validation of user-supplied data when parsing of TGA files. This could result in an out of bounds write past the end of an allocated structure. An attacker could leverage this vulnerability to execute code in the context of the current process. (ZDI-CAN-12178)
A vulnerability has been identified in JT2Go (All versions < V13.1.0.1), Teamcenter Visualization (All versions < V13.1.0.1). Affected applications lack proper validation of user-supplied data when parsing of PCT files. This could result in a memory corruption condition. An attacker could leverage this vulnerability to execute code in the context of the current process. (ZDI-CAN-12182)
A vulnerability has been identified in Nucleus NET (All versions < V5.2), Nucleus ReadyStart for ARM, MIPS, and PPC (All versions < V2012.12). Initial Sequence Numbers (ISNs) for TCP connections are derived from an insufficiently random source. As a result, the ISN of current and future TCP connections could be predictable. An attacker could hijack existing sessions or spoof future ones.
A vulnerability has been identified in SIMARIS configuration (All versions). During installation to default target folder, incorrect permissions are configured for the application folder and subfolders which could allow an attacker to gain persistence or potentially escalate privileges should a user with elevated credentials log onto the machine.
An issue was discovered in sthttpd through 2.27.1. On systems where the strcpy function is implemented with memcpy, the de_dotdot function may cause a Denial-of-Service (daemon crash) due to overlapping memory ranges being passed to memcpy. This can be triggered with an HTTP GET request for a crafted filename. NOTE: this is similar to CVE-2017-10671, but occurs in a different part of the de_dotdot function.
An issue was discovered in Svakom Siime Eye 14.1.00000001.3.330.0.0.3.14. By sending a set_params.cgi?telnetd=1&save=1&reboot=1 request to the webserver, it is possible to enable the telnet interface on the device. The telnet interface can then be used to obtain access to the device with root privileges via a reecam4debug default password. This default telnet password is the same across all Siime Eye devices. In order for the attack to be exploited, an attacker must be physically close in order to connect to the device’s Wi-Fi access point.
Incorrect handling of input data in verifyAttribute function in the libmysofa library 0.5 – 1.1 will lead to NULL pointer dereference and segmentation fault error in case of restrictive memory protection or near NULL pointer overwrite in case of no memory restrictions (e.g. in embedded environments).
Incorrect handling of input data in changeAttribute function in the libmysofa library 0.5 – 1.1 will lead to NULL pointer dereference and segmentation fault error in case of restrictive memory protection or near NULL pointer overwrite in case of no memory restrictions (e.g. in embedded environments).
Incorrect handling of input data in loudness function in the libmysofa library 0.5 – 1.1 will lead to heap buffer overflow and access to unallocated memory block.
Incorrect handling of input data in mysofa_resampler_reset_mem function in the libmysofa library 0.5 – 1.1 will lead to heap buffer overflow and overwriting large memory block.
Nessus AMI versions 8.12.0 and earlier were found to either not validate, or incorrectly validate, a certificate which could allow an attacker to spoof a trusted entity by using a man-in-the-middle (MITM) attack.
Cross-site scripting (XSS) vulnerability in admin/wp-security-blacklist-menu.php in the Tips and Tricks HQ All In One WP Security & Firewall (all-in-one-wp-security-and-firewall) plugin before 4.4.6 for WordPress.
Tufin SecureTrack < R20-2 GA contains reflected and stored XSS (the value is reflected back to the user, but is also stored in the DB and can later be triggered again by the same victim, or later by different users). Both stored and reflected payloads are triggerable by an admin, so a malicious non-authenticated user could get admin-level access. Even a malicious low-privileged user can inject XSS, which can be executed by an admin, potentially elevating privileges and obtaining admin access. (issue 1 of 3)
Tufin SecureTrack < R20-2 GA contains reflected and stored XSS (the value is reflected back to the user, but is also stored in the DB and can later be triggered again by the same victim, or later by different users). Both stored and reflected payloads are triggerable by an admin, so a malicious non-authenticated user could get admin-level access. Even a malicious low-privileged user can inject XSS, which can be executed by an admin, potentially elevating privileges and obtaining admin access. (issue 2 of 3)
Tufin SecureTrack < R20-2 GA contains reflected and stored XSS (the value is reflected back to the user, but is also stored in the DB and can later be triggered again by the same victim, or later by different users). Both stored and reflected payloads are triggerable by an admin, so a malicious non-authenticated user could get admin-level access. Even a malicious low-privileged user can inject XSS, which can be executed by an admin, potentially elevating privileges and obtaining admin access. (issue 3 of 3)
doFilter in com.adventnet.appmanager.filter.UriCollector in Zoho ManageEngine Applications Manager through 14930 allows an authenticated SQL Injection via the resourceid parameter to showresource.do.
This article is contributed. See the original author and article here.
A customer asked me the following question about a system they are building.
Query
“We are building a system and using paired regions to secure its redundancy. The database instances (primary and replica) are located in both regions, and these instances belong to a failover group. When we tested database failover, the applications in our environment could not access the primary database instance. We don’t think the network is unreachable, since global peering is configured between the virtual networks in each region. What is the root cause of this issue, and how do we fix it?”
Background
As the background was not clear, I asked the customer to share details about their system and the issue they were facing.
Their system is deployed to paired regions to secure the redundancy of their system.
Traffic Manager works in front of their system to load-balance incoming traffic. They use priority-based traffic routing for load balancing. If a failure occurs in the active region, Traffic Manager reroutes incoming traffic to the other region.
Global peering between virtual networks in both regions is configured.
They use App Service to host their applications. The App Service instances are integrated with virtual networks, and service endpoints for the SQL Database instances are configured on the subnets where these App Services are integrated. Also, service endpoints for the App Service instances are configured so that the App Service instances can interact with each other.
They use SQL Database in this system, and the instances in both regions belong to an automatic failover group. As the read-write/read-only listeners are geo-independent, they don’t have to modify the database connection string used in the applications whenever a database failover occurs.
As of now, they do not require the primary database region to match the region where Traffic Manager routes incoming traffic. In other words, they consider cross-region connections acceptable.
The following diagram reflects their comments and the details we gathered.
Root cause
If you are familiar with Azure, you can easily spot the root cause of this issue: a service endpoint limitation. For Azure SQL, a service endpoint applies only to Azure service traffic within a virtual network’s region.
The following case works fine.
However, the following case does not work even if global peering is configured.
Solutions
In this case, we can choose two options listed below.
Using private link
Modifying traffic routing rule
1. Using private link
If a cross-region connection is still acceptable, they can fix this issue by using Private Link instead of service endpoints.
Azure Private Link for Azure SQL Database and Azure Synapse Analytics
Note that network latency increases for cross-region connections.
2. Modifying traffic routing rule
In some cases, Private Link does not meet the requirements. In that case, we should configure Traffic Manager so that the region where it routes incoming traffic matches the database’s primary region. The diagram looks like this.
To achieve this, the following configuration is required.
First of all, the priority for the active region is set to a small value (e.g. 50), and the priority for the other region is set to a much bigger value (e.g. 1000). This configuration causes incoming traffic to be routed to the active region. For more details, see the following document.
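Priority-based routing can be pictured with a tiny sketch: Traffic Manager always sends traffic to the healthy endpoint with the lowest priority value. The following is an illustration of that rule only; the endpoint names and priority values are assumptions, not real configuration.

```python
# Illustration only: how priority-based traffic routing chooses an endpoint.
# Traffic Manager routes to the healthy endpoint with the lowest priority
# value; the endpoint names and priorities below are assumptions.

def pick_endpoint(endpoints):
    """Return the healthy endpoint with the lowest priority value, or None."""
    healthy = [e for e in endpoints if e["healthy"]]
    return min(healthy, key=lambda e: e["priority"]) if healthy else None

endpoints = [
    {"name": "app-active-region",  "priority": 50,   "healthy": True},
    {"name": "app-standby-region", "priority": 1000, "healthy": True},
]

print(pick_endpoint(endpoints)["name"])  # prints "app-active-region"

# When the active region starts failing its health probes,
# traffic moves over to the standby region:
endpoints[0]["healthy"] = False
print(pick_endpoint(endpoints)["name"])  # prints "app-standby-region"
```

This is why the priority gap (50 vs. 1000) is harmless: only the relative order matters, and the large gap leaves room to add endpoints in between later.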
Then, a healthcheck API should be configured. The API checks whether access between the applications and the databases is healthy. If healthy, the API returns HTTP 200; otherwise, it returns HTTP 503.
Following the document, Traffic Manager is configured to use this API to monitor the endpoint. If the healthcheck API returns 503, Traffic Manager changes the routing.
Needless to say, healthcheck API should be created.
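As a rough illustration, such a healthcheck endpoint could look like the sketch below, built only on the Python standard library. The database probe passed in is a placeholder assumption; a real implementation would run a lightweight query against the primary database listener.

```python
# A minimal sketch of the healthcheck API described above (assumption:
# your real service would probe the database, e.g. with a trivial query).
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_status(probe):
    """Map a database reachability probe to the HTTP status code
    Traffic Manager endpoint monitoring expects (200 or 503)."""
    try:
        return 200 if probe() else 503
    except Exception:
        return 503

def make_handler(probe):
    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            status = health_status(probe)
            self.send_response(status)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"healthy" if status == 200 else b"unhealthy")

        def log_message(self, fmt, *args):  # keep the example quiet
            pass
    return HealthHandler

# To run it for real, supply an actual database probe and start serving:
# server = HTTPServer(("", 8080), make_handler(my_database_probe))
# server.serve_forever()
```

The key design point is that any probe failure, including an exception such as a connection timeout, maps to 503, so Traffic Manager treats "cannot reach the database" and "database says it is down" the same way.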
It takes some time for the routing to switch regions. Precisely, the failover time depends on the tolerated number of endpoint monitoring failures (configurable from 0 to 9) and the probing interval (the default is a 30-second interval; a 10-second interval is also available at additional cost). For more details, see the following document.
In this case, I proposed both options and asked the customer to make their decision. Last but not least, Traffic Manager is used in this case, but the solution is also applicable when using Azure Front Door.
This article is contributed. See the original author and article here.
Overview
We are happy to share with the SAP on Azure community a solution to automate your SAP system start / stop in Azure.
This is a ready-to-use, flexible, end-to-end solution (including a PaaS Azure Automation runtime environment, scripts and runbooks, a tagging process, etc.) that enables you to automatically:
Start / Stop your SAP systems, DBMSs, and VMs.
SAP application servers
If you use managed disks (Premium and Standard), you can decide to convert them to Standard during the stop procedure, and to Premium during the start procedure.
This way you achieve cost savings both on the compute and on the storage side!
The SAP system stop and the SAP application server stop are specially designed for a graceful shutdown, allowing SAP users and batch jobs to finish. This way you minimize the downtime impact on the SAP system or the SAP application server. The approach is similar on the DBMS side.
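The ordering behind such a graceful shutdown can be sketched as follows. This is an illustration of the idea only, not the actual runbook code; the step names are invented for the example.

```python
# A sketch of the graceful stop ordering described above; the step
# names are illustrative assumptions, not the real runbook functions.

def stop_plan(app_servers, has_central_instance=True, has_dbms=True):
    """Build the ordered list of stop steps for one SAP system."""
    steps = []
    # 1. Soft-stop the application servers first, so logged-on SAP
    #    users and running batch jobs are allowed to finish.
    steps += [f"soft-stop application server {s}" for s in app_servers]
    # 2. Then stop the central instance (ASCS/SCS or DVEBMGS).
    if has_central_instance:
        steps.append("stop central instance")
    # 3. Then stop the DBMS, and finally deallocate the VMs so that
    #    compute billing stops.
    if has_dbms:
        steps.append("stop DBMS")
    steps.append("deallocate VMs")
    return steps

for step in stop_plan(["app1", "app2"]):
    print(step)
```

Reversing this order gives the start sequence: VMs first, then the DBMS, then the central instance, then the application servers.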
This solution is the product of a joint effort of the SAP on Azure CAT Team (Cloud Advisory Team) and the SAP on Azure FastTrack Team (Robert Biro and Martin Pankraz).
Solution Capabilities
In details, with this solution you can do the following:
Start / Stop a complete SAP NetWeaver Central System on Linux or Windows, and the VM. The typical scenario here is non-production SAP systems.
Start / Stop a complete SAP NetWeaver Distributed System on Linux or Windows, and the VMs. The typical scenario here is non-production SAP systems.
Here you have:
One DBMS VM (HA is currently not implemented)
One ASCS/SCS or DVEBMGS VM (HA is currently not implemented)
One or more SAP application servers
It is assumed that the SAPMNT file share is located on the SAP ASCS or DVEBMGS VM.
In a distributed SAP landscape, you can deploy your SAP instances across multiple VMs, and those VMs can be placed in different Azure resources groups.
In a distributed SAP landscape, the SAP application instances (application servers and the SAP ASCS/SCS instance) can run on Windows while the DBMS runs on Linux (a so-called heterogeneous landscape), for DBMSs that support such a scenario, for example SAP HANA, Oracle, IBM DB2, SAP Sybase, and SAP MaxDB.
On the DBMS side, starting, stopping, getting the status of DBMS itself is implemented for:
SAP HANA DB
Microsoft SQL Server DB
Currently, starting, stopping and getting the status of DBMS is NOT implemented for Oracle, IBM DB2, Sybase and MaxDB DBMSs.
You can use the solution with these DBMSs, but you need to make sure that:
The DBMS is configured to start automatically with the OS.
The SAP system startup procedure starts the DBMS first (which is the default SAP start order in the SAP instance profile).
Although we did not test with all these DBMSs, the expectation is that this approach will work.
Start / Stop Standalone HANA system and VM
Start / Stop SAP Application Server(s) and VM(s). This functionality can be used for the SAP application server scale-out / scale-in process.
One meaningful scenario for production SAP systems: as an SAP system administrator, you identify upcoming peaks in the system load (for example, Black Friday or year-end close), where you know in advance how many SAPS / application servers you need to meet the additional load, and for how long. You can then either schedule or manually trigger the start / stop of a number of already prepared SAP application servers that will cover the load peak.
Converting the disks from Premium to Standard managed disks during the stop process, and the other way around during the start process to reduce the storage costs.
Start / Stop actions can be executed manually, or can be scheduled.
PaaS Solution Architecture
The solution uses an Azure Automation account (PaaS) as the automation platform to execute the SAP shutdown / startup jobs.
Runbooks are written in PowerShell. A PowerShell module, SAPAzurePowerShellModules, is used by all runbooks. These runbooks and the module are stored in the PowerShell Gallery and are easy to import.
Information about the SAP landscape and instances is stored in VM tags. VM tagging can be done either manually or, even better, using the prepared tagging runbooks.
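As a rough illustration of the tagging approach, here is a minimal Python sketch that filters a VM inventory by the SAP system ID stored in tags. Only the SAPSID tag name is taken from this article; the other tag names and the helper function are hypothetical, invented for the example — the real solution defines its own tag schema (see the GitHub documentation).

```python
# Minimal sketch: locate the VMs belonging to one SAP system by tag.
# Only the SAPSID tag name comes from the article; the other tag
# names (SAPInstanceType values below) are hypothetical.

def vms_for_sid(inventory, sid):
    """Return the names of all VMs tagged with the given SAP system ID."""
    return [name for name, tags in inventory.items()
            if tags.get("SAPSID") == sid]

inventory = {
    "vm-pr1-db":   {"SAPSID": "PR1", "SAPInstanceType": "DBMS"},
    "vm-pr1-ascs": {"SAPSID": "PR1", "SAPInstanceType": "ASCS"},
    "vm-pr1-app1": {"SAPSID": "PR1", "SAPInstanceType": "D"},
    "vm-te1-cs":   {"SAPSID": "TE1", "SAPInstanceType": "Central"},
}

print(sorted(vms_for_sid(inventory, "PR1")))
# → ['vm-pr1-app1', 'vm-pr1-ascs', 'vm-pr1-db']
```

The same lookup is what allows the start / stop runbooks (and the Power App) to discover all VMs of a system from the SAPSID tag alone.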
The <sid>adm password is needed on Windows OS and is stored securely in the Credentials area of the Azure Automation account.
Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are encrypted and stored in Azure Automation using a unique key that is generated for each Automation account. Azure Automation stores the key in the system-managed Key Vault.
The Starting and Stopping of:
An SAP system and an SAP Application server is implemented using scripts (calling SAP sapcontrol executable).
An SAP HANA DB is implemented using scripts (calling SAP sapcontrol executable).
SQL Server DB start / stop / monitoring is implemented using scripts (calling SAP Host Agent executable).
All scripts are executed at OS level, in a secure way via Azure VM agent.
All SAP system and DBMS start / stop / monitor scripts are generated on the fly at runtime, so there is no need to store them anywhere.
Soft / Graceful Shutdown of SAP System, Application Servers, and DBMS
SAP System and SAP Application Servers
The SAP system or SAP application server is stopped using the SAP soft (graceful) shutdown procedure, within a specified timeout. The SAP soft shutdown gracefully handles SAP processes, users, etc. within the specified timeout while stopping the whole SAP system or a single SAP application server.
Users will get a popup to log off, SAP application server(s) will be removed from different logon groups (users, batch, RFC, etc.), the procedure will wait for SAP batch jobs to complete (until the specified timeout is reached). This functionality is implemented in the SAP kernel.
INFO: You can specify the SAP soft shutdown timeout as a parameter. The default value is 300 seconds.
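To make this concrete, here is a small sketch of how such a soft-shutdown command could be generated on the fly, in the spirit of the runbooks' generated OS-level scripts. The exact sapcontrol argument list shown here is an assumption — verify it against the sapcontrol documentation for your kernel release.

```python
# Sketch: build a sapcontrol soft-shutdown command string on the fly.
# The StopSystem argument order is an assumption - check the sapcontrol
# documentation for your SAP kernel release before relying on it.

def build_stop_command(instance_nr: int, soft_timeout: int = 300) -> str:
    """Return a sapcontrol call that stops the whole SAP system,
    giving users and batch jobs soft_timeout seconds to finish."""
    return (f"sapcontrol -nr {instance_nr:02d} "
            f"-function StopSystem ALL {soft_timeout}")

print(build_stop_command(0))
# → sapcontrol -nr 00 -function StopSystem ALL 300
```

Because the string is assembled at runtime from the instance number and timeout parameters, nothing needs to be stored on the VM beforehand, matching the "generated on the fly" design described above.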
DBMS Shutdown
For SAP HANA and SQL Server, a DB soft shutdown is also implemented, which gracefully stops these DBMSs, so the DB has time to flush all content consistently from memory to storage and stop all DB processes.
User Interface Possibilities
A user can trigger start / stop in two ways:
using the Azure Automation account portal UI,
or via a modern SAP Azure Power App, which can be consumed in a browser, on smartphones, or in Teams:
The SAP Power App is fully integrated with the backend start / stop functionality of the Azure Automation account. It automatically collects information on all available SAP SIDs (via the SAPSID tag) and offers start / stop / SAP system status functionality!
Cost Optimization Potential
Cost Savings on Compute for non-Productive SAP Systems
Non-production SAP systems, like dev, test, demo, training, etc., typically do not need to run 24/7. Let's assume you run them 8 hours per day, Monday through Friday. This means you run and pay for each VM 8 hours x 5 days = 40 hours per week. For the remaining 128 hours per week you don't pay, which translates into approximately 76 % savings on compute!
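The arithmetic behind that figure is straightforward; a quick sketch:

```python
# Compute savings for a VM running 8 h/day on weekdays instead of 24/7.
hours_per_week = 24 * 7                          # 168
running_hours = 8 * 5                            # 40
stopped_hours = hours_per_week - running_hours   # 128

savings = stopped_hours / hours_per_week
print(f"{savings:.0%} compute savings")  # → 76% compute savings
```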
Cost Savings on Compute for Productive SAP Systems
Productive SAP systems typically run 24/7 and you never shut them down completely. Still, there is huge savings potential in the SAP application layer, which constitutes the largest portion of SAPS in an SAP system.
INFO: SAP Application Performance Standard (SAPS) is a hardware-independent unit of measurement that describes the performance of a system configuration in the SAP environment. It is derived from the Sales and Distribution (SD) benchmark, where 100 SAPS is defined as 2,000 fully business-processed order line items per hour. SAPS is an SAP measure of compute processing power.
In an on-premises environment, an SAP system is often oversized so that it can process the required peak loads. But in reality, these peaks are rare (maybe a few days every 3 months); most of the time such systems are underutilized. I’ve seen production systems running at 5–10 % of total CPU utilization most of the time.
In the cloud we have the possibility to run only what we need and pay only for what we use — hence, the SAP application server layer is a perfect candidate for bringing down the cost of productive SAP systems!
Here, this solution offers jobs to start / stop SAP application servers and VMs. It uses a soft shutdown, allowing SAP users and processes enough time to complete.
Cost Savings on Storage
If you use Premium storage, there is an opportunity for cost savings by converting the managed disks to Standard while the system is not running.
Let’s say you need Premium storage (especially for the DBMS layer) to get good SAP performance at runtime. Once the SAP system (and its VMs) is stopped, if you choose to convert the disks from Premium to Standard, you pay much less for storage during the time the system and the VMs are stopped.
During the start procedure, you can convert the disks back to Premium, so you have good performance while the SAP system is running and pay for the more expensive Premium storage only during that time.
For example, if the SAP system runs 8 hours x 5 days = 40 hours and is stopped for 128 hours per week, then for those 128 hours per week you do not pay the Premium storage price but the reduced Standard storage price.
For example, the price of a 1 TB P30 Premium disk is roughly 3 times higher than that of a 1 TB S30 Standard HDD disk. For the above-mentioned scenario, the savings on a 1 TB managed disk would be approximately 54 %!
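A sketch of this calculation, assuming a P30-to-S30 price ratio of about 3.3 — an assumption based on public list prices at the time of writing; check the managed-disks pricing page for current numbers:

```python
# Storage savings when disks are downgraded to Standard while stopped.
# The P30/S30 price ratio of ~3.3 is an assumption from public list
# prices - check the managed-disks pricing page for current values.
price_ratio = 3.3                  # Premium price / Standard price
running_hours, stopped_hours = 40, 128
total_hours = running_hours + stopped_hours      # 168 h per week

# Cost relative to keeping the disk on Premium all week:
blended = (running_hours + stopped_hours / price_ratio) / total_hours
savings = 1 - blended
print(f"~{savings:.0%} storage savings")  # → ~53% storage savings
```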
For the exact updated pricing for managed disks, check here.
Cost of Azure Automation Account PaaS Service
The cost of using the Azure Automation service is extremely low. Billing for jobs is based on the number of job runtime minutes used in the month; for watchers, billing is based on the number of hours used in a month. Charges for process automation are incurred whenever a job or watcher runs. You are billed only for the minutes / hours that exceed the free included units (500 minutes per month). Beyond the monthly free limit of 500 minutes, you pay €0.002 per minute.
Let’s say one SAP system start or stop takes 15 minutes on average, and you schedule your SAP system to start every morning and stop every evening, Monday to Friday. That is 10 executions per week, and 40 per month, for one SAP system.
This means that within the 500 free minutes you can execute 33 start or stop procedures for free.
Everything beyond that you pay for. One start or stop (of 15 min) costs 15 min x €0.002 = €0.03, and even if you paid for all 40 starts / stops of ONE SAP system, it would be only €1.20 per month!
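The same numbers as a quick sketch, applying the free tier first:

```python
# Azure Automation job cost for one SAP system, using the article's
# figures: 15 min per start/stop, 40 executions per month,
# 500 free minutes, then EUR 0.002 per minute.
minutes_per_run, runs_per_month = 15, 40
free_minutes, price_per_minute = 500, 0.002

free_runs = free_minutes // minutes_per_run        # 33 runs covered
total_minutes = minutes_per_run * runs_per_month   # 600 minutes
billable = max(0, total_minutes - free_minutes)    # 100 minutes
cost = billable * price_per_minute

print(f"{free_runs} free runs, EUR {cost:.2f} per month")
# → 33 free runs, EUR 0.20 per month
```

So with the free tier applied, the actual monthly cost is even lower than the €1.20 worst case.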
Management Cost of the Azure Automation Account
Often, when you use a solution which offers you specific functionality, you need to manage it as well, learn it, etc. All this generates additional costs, which can be quite high.
As the Azure Automation account is a PaaS service, you have ZERO management costs here!
Plus, it is easy to set it up and use it.
Documentation and Scripts
All documentation and scripts can be found here on GitHub. All job scripts and the SAP on Azure PowerShell module are available on the PowerShell Gallery.