Mozilla Releases Security Updates

This article is contributed. See the original author and article here.

Mozilla has released security updates to address vulnerabilities in Firefox, Firefox ESR, and Thunderbird. An attacker could exploit some of these vulnerabilities to take control of an affected system. 

CISA encourages users and administrators to review the Mozilla security advisories for Firefox 98, Firefox ESR 91.7, and Thunderbird 91.7 and apply the necessary updates.

Viva Learning news and feature update



As we celebrate the first year of Microsoft Viva, it’s exciting to see so many companies seeking to foster human connection, align on a sense of mission, ensure employee wellbeing, and retain their best people. These are areas of critical importance for businesses of any size as we collectively navigate the new world of Hybrid Work.


 


Since Microsoft Viva Learning’s general availability in November 2021, we’ve seen how employee upskilling, growth, and learning have become a top priority for organizational leaders. If you missed it, make sure to watch our announcement at Ignite.


 


But as we look back with pride, we also look forward with excitement. And there’s plenty on the horizon to be excited about with Viva Learning! Below we have a set of partner announcements, early adopter evidence, and new resources to share.


 


Let’s get to the good stuff!


 


Partner Announcements


 


In addition to the Viva Learning partner integrations we’ve previously announced, we’re thrilled to add another big name to the list – Workday. With the planned integration of Viva Learning and Workday, we’ll deliver a personalized learning experience right in the flow of work with Viva and Microsoft 365. We expect this integration to be live in the coming months.


 


“Together, Workday and Microsoft will empower employees with a simpler, more connected learning experience in their natural flow of work for greater engagement and productivity.” 


          -Stuart Bowness, Vice President, Software Development, Experience Design & Development, Workday


 


We’re also pleased to announce that our existing Learning Management System (LMS) integrations have reached a new milestone with learner record and assignment syncing. This means assignments and completion records from Cornerstone OnDemand, Saba Cloud, and SAP SuccessFactors now surface directly in Viva Learning. This builds on our existing content catalog integration, which gives users the ability to search, discover, recommend, and share content hosted on their LMS right within Teams.


 


For detailed setup instructions to integrate your LMS with Viva Learning, visit the Viva Learning docs page.


 




 


Finally, we’re excited to share that our integrations with Edcast and OpenSesame are live in Viva Learning! This means you can now import content libraries from Edcast and OpenSesame into Viva Learning so users can search for, discover, share, and recommend their content throughout the platform.  


 


In addition to the dedicated integrations mentioned above, we’re also hard at work building our APIs to deliver an open and extensible Employee Experience platform with Viva. Expect more details on Viva Learning APIs in the coming months.  


 




 


Early adopter spotlight – Music Tribe 


 


Music Tribe is a multi-national leader in professional audio products and musical instruments with operations across the globe. Recognizing the importance of learning in attracting and retaining talent, Music Tribe decided to prioritize the growth and development of their Tribers (employees) by seeking a personalized, social, and easy-to-use learning system.


 


Music Tribe already uses Teams, so bringing learning to their users within their existing platform was a key decision point to deliver an engaging, inclusive, and inspiring learning experience aligned to their core values.  


 


By deploying Viva Learning and Go1 together, Music Tribe employees are now able to define their own learning journeys in a social and collaborative experience – seamlessly integrated into their flow of work.  


 


“As part of our Vision “We Empower. You Create”, we wanted to empower our Tribers (employees) by means of learning and self-improvement. We looked for tools that would aggregate content from different providers and deliver it to our Tribers in an easily accessible way. The Go1 + Viva Learning solution does exactly that, with Go1 bringing the content together and Viva Learning delivering it as a natural part of the workday. This tremendously helped our Tribers sourcing content while giving more time developing their learning pathways and making the most of their professional development on our Employee Experience platform,” 


          -Uli Behringer, Founder and CEO, Music Tribe 




 


“We are thrilled to deliver the world’s largest workplace online learning library to Music Tribe employees in the flow of their work, via Microsoft Viva Learning. With Go1 and Viva Learning together, Music Tribe’s employees are able to access relevant learning content from hundreds of different learning providers, all from the Microsoft tools like Teams that they use every day.” 


          -Andrew Barnes, CEO, Go1 




 


Data from Go1 shows that less than a month after launch, over 40% of Music Tribe employees had engaged with learning content – well above the industry standard for employee engagement with learning content. Read more about Music Tribe’s Viva Learning journey in Go1’s blog.


 


New Viva Learning resources 


 


For a comprehensive product walkthrough, see our guide at aka.ms/VivaLearningDemo 


For setup and admin documentation, see our docs pages at aka.ms/VivaLearningDocs 


For the list of 125 free LinkedIn Learning courses included with Viva Learning, see this page 


For adoption guidance and best practices, see our new page at aka.ms/VivaLearningAdoption 


For an overview, licensing and pricing, or to start a free trial, see aka.ms/VivaLearning 


 


Please add your questions and thoughts in the comments section and as always we will see you soon – learning in the flow of work! 


 

FBI Releases Indicators of Compromise for RagnarLocker Ransomware


The Federal Bureau of Investigation (FBI) has released a Flash report detailing indicators of compromise (IOCs) associated with ransomware attacks by RagnarLocker, a group of ransomware actors targeting critical infrastructure sectors.

CISA encourages users and administrators to review the IOCs and technical details in FBI Flash CU-000163-MW and apply the recommended mitigations.

CISA Releases Security Advisory on PTC Axeda Agent and Desktop Server


CISA has released an Industrial Control Systems Advisory (ICSA) detailing vulnerabilities in the PTC Axeda agent and Axeda Desktop Server. Successful exploitation of these vulnerabilities—collectively known as “Access:7”—could result in full system access, remote code execution, read/change configuration, file system read access, log information access, or a denial-of-service condition.

CISA encourages users and administrators to review ICS Advisory ICSA-22-067-01 PTC Axeda Agent and Axeda Desktop Server for technical details and mitigations and the Food and Drug Administration statement for additional information.

LGBTQ+ community: The FTC wants to hear from you


This article was originally posted by the FTC. See the original article here.

National Consumer Protection Week March 6 - 12 #NCPW2022; LGBTQ+ community: Have you had any consumer issues or spotted any scams recently? The FTC wants to hear from you. ReportFraud.ftc.gov ReporteFraude.ftc.gov

This National Consumer Protection Week (NCPW), we’re focused on how scams affect every community — including the LGBTQ+ community. Scammers often like to impersonate familiar people, organizations, and companies that we know and trust. For the LGBTQ+ community, that can include “safe spaces” — or places where the LGBTQ+ community is free to proudly be ourselves.

Scammers can take advantage of those safe spaces — especially online. For example:

  • Scammers target people on LGBTQ+ dating apps. Here, a scammer poses as a potential romantic partner on an LGBTQ+ dating app, chats with you, quickly sends explicit photos, and asks for similar photos in return. If you send photos, the blackmail begins.
  • Job scams are everywhere. There are job boards particularly for LGBTQ+ applicants to find jobs with welcoming employers. But even if a job says it’s LGBTQ+ friendly, it might not be legit. Unfortunately, job scams can show up anywhere — including on those community boards.

And those are just two examples; we know there are more. A lot more. So we’re asking for your help: if you see a scam, or a friend or family member tells you about one, report it at ReportFraud.ftc.gov or, in Spanish, at ReporteFraude.ftc.gov. Every report helps the FTC fulfill its mission to protect every community from scammers. When you tell your story to the FTC, it’s shared with more than 3,000 law enforcers — and it helps the FTC and other law enforcement spread the word so others can avoid scams.

Check out what’s going on during #NCPW2022 at ftc.gov/ncpw. We hope to see you at some of the events.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Securing the backend with Azure API Management



Securing the backend with Azure API Management


 


Azure API Management is an excellent option for projects that deal with APIs. Centralization, monitoring, management, and documentation are capabilities that APIM helps you deliver. Learn more.


 


However, we often forget that our backends need to be protected from external access. With that in mind, we will show a very simple way to protect your backend using private endpoints, VNETs, and subnets. This lets us block public calls from the internet to your backend while still allowing APIM to access it simply and transparently.


 


The first step is to understand the VNET, the virtual representation of a computer network. It allows Azure resources to communicate securely with each other, with the internet, and with on-premises networks. As with any network, it can be segmented into smaller parts called subnets. This segmentation helps us define the number of addresses available in each subnet, avoiding conflicts between these networks and reducing their traffic. See the diagram below:


 


Diagram of a VNET with CIDR 10.0.0.0/16 and two subnets with CIDRs 10.0.0.0/24 and 10.0.1.0/24.
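As an illustrative aside (not part of the original article), the subnet layout in the diagram can be checked with Python's standard ipaddress module:

```python
import ipaddress

# The VNET and subnets from the diagram above.
vnet = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network("10.0.0.0/24"), ipaddress.ip_network("10.0.1.0/24")]

# Each subnet must fall inside the VNET's address space.
for s in subnets:
    assert s.subnet_of(vnet)

# The two subnets must not overlap with each other.
assert not subnets[0].overlaps(subnets[1])

# A /24 offers 256 addresses (note that Azure reserves 5 per subnet).
print(subnets[0].num_addresses)
```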


 


It is important to understand the VNET connection options (modes) that APIM offers:



  1. Off: This is the default, with no virtual network and the endpoints open to the internet.

  2. External: The developer portal and the gateway can be accessed from the public internet, and the gateway can access resources inside the virtual network and on the internet.

  3. Internal: The developer portal and the gateway can only be accessed from the internal network, and the gateway can access resources inside the virtual network and on the internet.


 


In a production environment this architecture would include an edge firewall, such as an Application Gateway or a Front Door. These tools increase the security of the circuit by offering automatic protection against common attacks such as SQL injection and XSS (cross-site scripting). For the sake of simplicity, however, we will go without this protection.


 


APIM’s network configuration can be done from the Virtual Networking item in the side menu of the Azure management portal.


APIM with external configuration on the VNET vnet_apim and subnet SUBNET_APIM.


 


After configuring APIM, we must configure the App Service. In its Networking menu we configure a private endpoint; this is the resource that associates an App Service with a VNET and a subnet. During this configuration it is important to select the option that integrates with a private DNS zone, to guarantee name resolution inside the private network.


 


 


Azure portal, App Service private endpoint configuration.


 


Now we can verify that a private DNS zone was created for the domain azurewebsites.net pointing to the App Service’s private IP. This allows APIM to access the App Service transparently, just as it did before the VNET was implemented.


 


To be really sure that our App Service is protected, we can try to access its address from a browser; something like the image below should happen. We will not be able to resolve its DNS name.


 


 


Accessing the App Service URL in the browser and getting a DNS resolution error.
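The browser check can also be scripted. Below is an illustrative sketch; the hostname uses the reserved .invalid TLD as a stand-in for the protected App Service address, since that address is guaranteed never to resolve publicly:

```python
import socket

def resolves_publicly(hostname: str) -> bool:
    """Return True if the hostname can be resolved from the current network."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Placeholder name on the reserved .invalid TLD, standing in for the
# App Service hostname that no longer resolves from the public internet.
print(resolves_publicly("myprotectedapp.example.invalid"))  # False
```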


 


Meanwhile, the APIs in APIM keep working at the same address, since APIM is in the same VNET as the App Service and the private DNS resolves names to addresses within that VNET.


 


APIM screen with the same backend that cannot be reached from the browser.


 


 


Using APIM’s Test tool, we can confirm that APIM can reach the backend.


 


Exploring APIM’s Test option.


 


Conclusion


 


Deploying an Azure Virtual Network provides enhanced security and isolation, and lets you place your API Management service in a network protected from the internet. All access, such as ports and services, can be controlled through the VNET’s NSG; this way you guarantee that your APIs will only be accessed through the APIM gateway endpoint.


References


 



  1. https://docs.microsoft.com/pt-br/azure/api-management/api-management-using-with-vnet

  2. https://docs.microsoft.com/en-us/azure/api-management/api-management-using-with-internal-vnet

  3. https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

This NCPW, help reach every community


This article was originally posted by the FTC. See the original article here.

National Consumer Protection Week March 6 - 12: Help fight fraud and other consumer problems in Black communities. Tell the FTC your story at ReportFraud.ftc.gov / ReporteFraude.ftc.gov #NCPW2022

National Consumer Protection Week March 6 - 12: Latino community: The FTC wants to hear from you. ReportFraud.ftc.gov / ReporteFraude.ftc.gov #NCPW2022

We know scammers target people everywhere. So this National Consumer Protection Week, we’re focusing on how fraud affects every community. In blog posts and events this week, we’ll highlight scams that affect some of those communities, including older adults, college students, servicemembers, and LGBTQ+ communities. Since scammers target every community, including yours, you can make a difference this NCPW: recruit your friends, family, and neighbors across all communities to report the scams they’re seeing to us.

The way fraud affects every community can look different across different demographic groups. For instance, the FTC’s Serving Communities of Color Report highlights some of the unique ways that people experience fraud in Black and Latino communities.

Here are just two examples of the differences we saw:

  • Fraud and bad business practices play out differently in different communities. The FTC’s reporting data showed that the top percentage of reports by people living in majority White and majority Latino communities were about impersonator scams. In majority Black communities, the top percentage of reports were about credit bureaus.
  • Scammers tell people to pay in different ways. Reports from majority Black and Latino communities show that people are more likely to end up paying scammers in ways that have few, if any, fraud protections ― so: cash, cryptocurrency, money orders, and debit cards. In contrast, reports from majority White communities show that people are more likely to pay scammers with credit cards.

Throughout the week, we’ll talk more about how fraud looks different across different communities. But today is about your community. Please remind your friends, family, and neighbors: if they see a scam, tell the FTC at ReportFraud.ftc.gov. And tune in for the rest of the week’s posts, and check out #NCPW2022 events at ftc.gov/ncpw. We hope to see you at some of them.


Build high confidence migration plans using Azure Migrate



Build high confidence migration plans using Azure Migrate’s software inventory and agentless dependency analysis


 


Authored by Vikram Bansal, Senior PM, Azure Migrate


 


Migrating a large and complex IT environment from on-premises to the cloud can be quite daunting. As they start planning their migration to the cloud, customers are often challenged by unknowns: they may not have complete visibility into the applications running on their servers or into the dependencies between them. This not only results in dependent servers being left behind, breaking the application, but also adds to the migration cost that customers want to reduce. Azure Migrate aims to help customers build a high-confidence migration plan with features like software inventory and agentless dependency analysis.


 


Software inventory provides the list of applications, roles, and features running on the Windows and Linux servers discovered using Azure Migrate. Agentless dependency analysis helps you analyze the dependencies between the discovered servers; these can be easily visualized in a map view in the Azure Migrate project and used to group related servers for migration to Azure.


 


Today, we are announcing the public preview of at-scale software inventory and agentless dependency analysis for Hyper-V virtual machines and bare-metal servers.


 


How to get started?



  • To get started, create a new Azure Migrate project or use an existing one.

  • Deploy and configure the Azure Migrate appliance for Hyper-V or for bare-metal servers.

  • Enable software inventory by providing server credentials on the appliance and start discovery. For Hyper-V virtual machines, the appliance lets you enter multiple credentials and will automatically map each server to the appropriate credential.


 




 


The credentials provided on the appliance are encrypted and stored on the appliance server locally and are never sent to Microsoft.



  • As servers start getting discovered, you can view them in the Azure Portal.




 


 


Software inventory



  • Using the credentials provided, the appliance gathers information on the installed applications and the enabled roles and features on the on-premises Windows and Linux servers.

  • Software inventory is completely agentless and does not require installing any agents on the servers.

  • Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. The appliance gathers the information about the software inventory from Windows servers using PS remoting and from Linux servers using SSH connectivity.

  • Azure Migrate directly connects to the servers to execute a list of queries and pull the required data once every 12 hours.

  • A single Azure Migrate appliance can discover up to 5000 Hyper-V VMs or 1000 physical servers and perform software inventory across all of them.


 




 


 


Agentless dependency analysis



  • The agentless dependency analysis feature helps you visualize the dependencies between your servers and can be used to determine which servers should be migrated together.

  • The dependency analysis is completely agentless and does not require installing any agents on the servers.

  • You can enable dependency analysis on those servers where the prerequisite validation checks succeed during software inventory.

  • Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. The appliance gathers the dependency information from Windows servers using PS remoting and from Linux servers using SSH connection.

  • Azure Migrate directly connects to the servers to execute a list of ‘ls’ and ‘netstat’ queries and pull the required data every 5 minutes. The appliance aggregates the 5-minute data points and sends them to Azure every 6 hours.

  • Using the built-in dependency map view, you can easily visualize dependencies between servers. You can also download the dependency data including process, application, and port information in a CSV format for offline analysis.

  • Dependency analysis can be performed concurrently on up to 1000 servers discovered from one appliance in a project. To analyze dependencies on more than 1000 servers from the same appliance, you can sequence the analysis in multiple batches of 1000.
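The batching described in the last point can be sketched in a few lines. This is an illustrative helper (the server names are made up), not part of Azure Migrate itself:

```python
def batch(servers, size=1000):
    """Split a discovered-server list into batches no larger than `size`,
    so dependency analysis can run on at most 1000 servers at a time."""
    return [servers[i:i + size] for i in range(0, len(servers), size)]

# Hypothetical inventory of 2500 discovered servers.
servers = [f"server-{n}" for n in range(2500)]
batches = batch(servers)
print([len(b) for b in batches])  # [1000, 1000, 500]
```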


 




 


Workflow and architecture


The architecture diagram below shows how software inventory and agentless dependency analysis work. The appliance:



  1. discovers the Windows and Linux servers using the source details provided in the configuration manager

  2. collects software inventory (installed applications, roles and features) information from discovered servers

  3. performs a validation of all prerequisites required to enable dependency analysis on a server. The validation is done while the appliance performs software inventory. Users can enable dependency analysis only on servers where the validation succeeds, so they are less likely to hit errors after enabling it.

  4. collects the dependency data from servers where dependency analysis was enabled from the portal.

  5. periodically sends collected information to the Azure Migrate project via HTTPS port 443 over a secure encrypted connection.


 




 


Resources to get started



  1. Tutorial on how to perform software inventory using Azure Migrate: Discovery and assessment.

  2. Tutorial on how to perform agentless dependency analysis using Azure Migrate: Discovery and assessment.

Using secretless Azure Functions from within AKS



I recently implemented a change in KEDA (currently being evaluated as a potential pull request), consisting of leveraging managed identities in a more granular way in order to adhere to the least-privilege principle. While I was testing my changes, I wanted to use managed identities not only for KEDA itself but also for the Azure Functions I was using in my tests. I found out that although there are quite a few docs on the topic, none targets AKS:


 


https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=csharp#identity-based-connections


https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference?tabs=blob#connecting-to-host-storage-with-an-identity-preview


 


You can find many articles showing how to grab a token from an HTTP-triggered function, or how to use identity-based triggers, but always in the context of a function hosted in Azure itself. It’s not rocket science to make this work in AKS, but I thought it was a good idea to recap it here since I couldn’t find anything on the subject.


 


Quick intro to managed identities


Here is a quick reminder for those who may not yet know about managed identities (MI). The value proposition of MI is: no passwords in code (or config). MI is considered a best practice because the credentials used by identities are entirely managed by Azure itself. Workloads can refer to identities without the need to store credentials anywhere. On top of this, you can manage authorization with Azure AD (a single pane of glass), unlike with shared access signatures and alternate authorization methods.


AKS & MI


For MI to work in AKS, you need to enable it. You can find a comprehensive explanation of how to do this here. In a nutshell, MI works the following way in AKS:


 




 


 


An AzureIdentity and an AzureIdentityBinding resource must be defined. They target a user-assigned identity, which is attached to the cluster’s VM scale set. The identity can be referred to by deployments through the aadpodidbinding label. The function container (or anything else) makes a call to the MI system endpoint http://169…, which is intercepted by the NMI pod; the NMI pod, in turn, calls Azure Active Directory to get an access token for the calling container. The calling container can then present the returned token to the Azure resource to gain access.
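For illustration, the call the container makes is a plain HTTP request to the instance-metadata-style token endpoint. The sketch below only builds the request rather than sending it; the endpoint and parameters follow Azure's public instance metadata token API, and `https://storage.azure.com/` is just an example resource, not something prescribed by the article:

```python
from urllib.parse import urlencode

# Azure instance-metadata-service (IMDS) style endpoint that aad-pod-identity's
# NMI pod intercepts inside the cluster. Illustrative sketch, not KEDA/Functions code.
IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource: str, api_version: str = "2018-02-01"):
    """Return (url, headers) for a managed-identity token request."""
    query = urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_ENDPOINT}?{query}", {"Metadata": "true"}

url, headers = build_token_request("https://storage.azure.com/")
print(url)
```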


 


Using the right packages for the function


The packages you have to use depend on the Azure resource you interact with. In my example, I used storage account queues as well as Service Bus queues. To leverage MI from within the function, you must:



  • use the Microsoft.Azure.WebJobs.Extensions.Storage >= 5.0.0

  • use the Microsoft.Azure.WebJobs.Extensions.ServiceBus >= 5.0.0

  • use the Microsoft.NET.Sdk.Functions >= 4.1.0


Note that the storage package is not really optional, because Azure Functions needs an Azure Storage account in most cases.


Passing the right settings to the function


Azure Functions take their configuration from the local settings and from their host’s configuration. When using Azure Functions hosted in Azure, we can simply use the function app settings. In AKS this is slightly different, as we have to pass the settings through a ConfigMap or a Secret. To target both the Azure Storage account and the Service Bus, you’ll have to define a secret like the following:


 



apiVersion: v1
kind: Secret
metadata:
  name: <secret name>
data:
  AzureWebJobsStorage__accountName: <base64 value of storage account name>
  ServiceBusConnection__fullyQualifiedNamespace: <base64 value of the service bus FQDN>
  FUNCTIONS_WORKER_RUNTIME: <base64 value of the function language>
---


In the above example, I use the same storage account for the storage-queue trigger and as the storage account required by Functions itself. If I were using a different storage account for the queue trigger, I’d declare an extra setting with that account name. The service bus queue-triggered function relies on the __fullyQualifiedNamespace setting to start listening to the Service Bus. Paradoxically, although I create a K8s secret, there is no secret information here, thanks to MI.
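The base64 values in the Secret's data section can be produced as follows. The account and namespace names below are made-up examples, not values from the article:

```python
import base64

def k8s_b64(value: str) -> str:
    """Base64-encode a string the way Kubernetes expects in a Secret's data section."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Hypothetical values for the settings used by the function.
settings = {
    "AzureWebJobsStorage__accountName": "mystorageaccount",
    "ServiceBusConnection__fullyQualifiedNamespace": "mybus.servicebus.windows.net",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
}
encoded = {k: k8s_b64(v) for k, v in settings.items()}

# Round-trip check: decoding gives back the original value.
assert base64.b64decode(encoded["FUNCTIONS_WORKER_RUNTIME"]).decode() == "dotnet"
print(encoded["FUNCTIONS_WORKER_RUNTIME"])  # ZG90bmV0
```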

 

For your reference, I’m pasting the entire YAML here:

 

apiVersion: v1
kind: Secret
metadata:
  name: misecret
data:
  AzureWebJobsStorage__accountName: <base64 value of the storage account name>
  ServiceBusConnection__fullyQualifiedNamespace: <base64 value of the service bus FQDN>
  FUNCTIONS_WORKER_RUNTIME: <base64 value of the function language>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: storageandbushandler
  annotations:
    aadpodidentity.k8s.io/Behavior: namespaced
spec:
  type: 0
  resourceID: /subscriptions/.../resourceGroups/.../providers/Microsoft.ManagedIdentity/userAssignedIdentities/storageandbushandler
  clientID: <client ID of the user-assigned identity>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: storageandbushandler-binding  
spec:
  azureIdentity: storageandbushandler
  selector: storageandbushandler
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busandstoragemessagehandlers
  labels:
    app: busandstoragemessagehandlers    
spec:
  selector:
    matchLabels:
      app: busandstoragemessagehandlers
  template:
    metadata:
      labels:
        app: busandstoragemessagehandlers
        aadpodidbinding: storageandbushandler
    spec:
      containers:
      - name: secretlessfunc
        image: stephaneey/secretlessfunc:dev
        imagePullPolicy: Always
        envFrom:
        - secretRef:
            name: misecret
---


You can see that the secret is passed to the function through the envFrom attribute. If you want to give it a try, you can use the Docker image I pushed to Docker Hub.

 

And here is the code of both functions, embedded in the above Docker image (nothing special):

[FunctionName("StorageQueue")]
public void StorageQueue([QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem, ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}

[FunctionName("ServiceBusQueue")]
public void ServiceBusQueue([ServiceBusTrigger("myqueue-items", Connection = "ServiceBusConnection")] string myQueueItem, ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}


 

You just need to make sure the connection-string names you reference in the triggers correspond to the settings you specify in the K8s secret.

Kurbo by WW charged with collecting kids’ personal info without parents’ permission


This article was originally posted by the FTC. See the original article here.

Advertised as a weight management service for kids, teens, and families, the Kurbo by WW app and website let kids as young as 8 track their weight, food intake, activity, and more. The problem? Many parents didn’t know their kids were using it, while the app and website were collecting and keeping information about kids without their parents’ permission.

Today the Department of Justice and FTC announced that Kurbo and its parent company WW International (formerly Weight Watchers) have agreed to settle charges they collected personal information from kids under 13 without notifying parents or getting their permission — something the Children’s Online Privacy Protection Rule (COPPA Rule) requires. That personal information included name, phone number, birth date, and persistent identifiers, including device IDs corresponding to specific accounts.

To settle the charges, the companies have agreed to pay a $1.5 million civil penalty, delete all personal information collected from kids under 13 without parental permission, and destroy any algorithms that used this illegally collected information. In the future, they must destroy any information they collect from kids under 13 if it’s been more than a year since the kid used their app.

Read How To Protect Your Privacy on Apps or visit ftc.gov/YourPrivacy to learn more about protecting your family’s privacy online.
