Enable simulation scenarios by ingesting large graphs in Azure Digital Twins through the Jobs API

This article is contributed. See the original author and article here.

As more and more industries digitize their operations, simulation becomes a key enabler of these digital transformations. Simulation helps industries meet their business and operational goals by changing environment variables and predicting the outcomes.

Azure Digital Twins (ADT) is a powerful way of simulating changes in the real world to reduce costs and operational overhead. For example, a manufacturing factory can have a representation in Azure Digital Twins, and customers can use this digital representation to observe its operations with the existing setup. If customers want to simulate changes and compare the cost of operation, the quality of a product, or the time taken to build a product, they can use ADT to tweak their digital representations’ models and properties and observe the impact of these changes on the simulation.

Azure Digital Twins already supports APIs to create new models, twins, and relationships. Now, with the public preview release of the Jobs API, you can ingest large twin graphs into Azure Digital Twins with enriched logging and higher throughput. This in turn enables simulation scenarios, faster setup of new instances, and automated model and twin import workflows. It eliminates the need for multiple API requests to ingest a large twin graph, along with the error handling and retries across those requests.

What’s new with the Jobs API?

  • Quickly populate an Azure Digital Twins instance: Import twins and relationships at a much faster rate than the existing APIs. Typically, the Jobs API allows import of:

    • 1M twins in about 10 minutes, and 1M relationships in about 15 minutes.

    • 12M entities consisting of 4M twins and 8M relationships in 90 to 120 minutes.

    • 12M entities consisting of 1M twins and 11M relationships in 135 to 180 minutes, where most twins have 10 relationships and 20 twins have 50k relationships.
      Note: The Jobs API for import currently scales out for performance based on the customer's usage pattern. The numbers above include the time for this autoscaling.

  • Ingestion Limits: Import up to 2M twins and 10M relationships in one import job.

  • Structured Output logs: The Jobs API produces structured and informative output logs indicating job state, progress, and more detailed error messages with line numbers. 

[Image: sample structured output log from an import job]

  • Metrics: Additional metrics for your ADT instance indicating the number of entities ingested through import jobs are now available in the Azure portal.

[Image: import job metrics shown in the Azure portal]

  • RBAC (role-based access control): Import jobs require create permissions for models, twins, and relationships. The built-in role that provides all of these permissions is Azure Digital Twins Data Owner. You can also use a custom role to grant granular access to only the data types that you need.

  • Same billing model for public preview: The billing model for the Jobs API matches the existing billing for models/twins APIs. The import of entities is equivalent to create operations in Azure Digital Twins.

Import Job Workflow

Here are the steps to execute an import job.



  1. The user creates a data file in the NDJSON (newline-delimited JSON) format containing models, twins, and relationships. We have a code sample that you can use to convert existing models, twins, and relationships into the NDJSON format. This code is written for .NET and can be downloaded or adapted to help you create your own import files.

  2. The user copies this data file to an Azure Blob Storage container.

  3. The user specifies permissions for the input storage container and output storage container.

  4. The user creates an import job, specifying the storage location of the input file as well as a storage location for error and log information (output), and provides the name of the output log file. The service automatically creates the output blob to store progress logs.

  5. Azure Digital Twins sends a new system event and changes the state of the job to Succeeded or Failed, based on how the job progressed.

  6. The user can review the output log information in the output folder for details on the job execution. 
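To make step 1 concrete, here is a minimal sketch in Python of what an import file can look like. The Section markers and field layout follow the NDJSON import format described in the Azure Digital Twins docs, but the model, twins, and relationship shown here are invented for illustration; verify the exact field names (especially for relationships) against the current documentation.

```python
import json

# Minimal sketch of an NDJSON import file for the Jobs API.
# Sections appear in order: Header, Models, Twins, Relationships.
lines = [
    {"Section": "Header"},
    {"fileVersion": "1.0.0", "author": "contoso", "organization": "contoso"},
    {"Section": "Models"},
    # One illustrative DTDL v2 model.
    {"@id": "dtmi:com:contoso:Factory;1", "@type": "Interface",
     "@context": "dtmi:dtdl:context;2", "displayName": "Factory",
     "contents": [{"@type": "Property", "name": "temperature",
                   "schema": "double"}]},
    {"Section": "Twins"},
    {"$dtId": "factory-1",
     "$metadata": {"$model": "dtmi:com:contoso:Factory;1"},
     "temperature": 21.5},
    {"$dtId": "factory-2",
     "$metadata": {"$model": "dtmi:com:contoso:Factory;1"},
     "temperature": 23.0},
    {"Section": "Relationships"},
    # "$dtId" here names the source twin; confirm this against the docs.
    {"$dtId": "factory-1", "$relationshipId": "rel-1",
     "$targetId": "factory-2", "$relationshipName": "adjacentTo"},
]

# One JSON object per line, as NDJSON requires; upload the result to the
# input blob container before creating the job.
ndjson = "\n".join(json.dumps(entry) for entry in lines)
```

Each line must be a complete JSON object on its own; the .NET sample mentioned in step 1 produces the same shape from an existing graph.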

Important points to note

Please keep the following points in mind while using the Jobs API.



  • Import is not an atomic operation; there is no rollback if a partial import has been executed or the API execution has been canceled.

  • You can cancel or delete the job itself, but you cannot use this API to update or delete twins today.

  • You can only run one job at a time per ADT instance.
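In the workflow described above, the job itself is created with a single REST call. The sketch below builds that request using only Python's standard library and does not submit it. The endpoint shape and the inputBlobUri/outputBlobUri body fields follow the import-jobs REST surface, but the host name, blob URLs, job ID, and API version are placeholders you should replace and verify against the current docs.

```python
import json
import urllib.request

ADT_HOST = "myinstance.api.weu.digitaltwins.azure.net"  # placeholder host
API_VERSION = "2023-06-30"  # assumption: check the docs for the right version

def import_job_request(job_id: str, input_blob_uri: str, output_blob_uri: str,
                       bearer_token: str) -> urllib.request.Request:
    """Build (but do not send) the PUT request that creates an import job."""
    url = f"https://{ADT_HOST}/jobs/imports/{job_id}?api-version={API_VERSION}"
    body = json.dumps({
        "inputBlobUri": input_blob_uri,    # NDJSON file in the input container
        "outputBlobUri": output_blob_uri,  # where the service writes the log
    }).encode()
    return urllib.request.Request(
        url, data=body, method="PUT",
        headers={"Authorization": f"Bearer {bearer_token}",
                 "Content-Type": "application/json"})

req = import_job_request(
    "job-1",
    "https://mystorage.blob.core.windows.net/input/graph.ndjson",
    "https://mystorage.blob.core.windows.net/output/job-1-log.ndjson",
    "<bearer-token>")
# urllib.request.urlopen(req) would submit the job; polling a GET on the same
# URL then reports the job's status until it reaches Succeeded or Failed.
```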


Learn more

CISA Adds Three Known Exploited Vulnerabilities to Catalog

CISA has added three new vulnerabilities to its Known Exploited Vulnerabilities Catalog, based on evidence of active exploitation.

These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise. Note: To view other newly added vulnerabilities in the catalog, click on the arrow in the “Date Added to Catalog” column—which will sort by descending dates.

Binding Operational Directive (BOD) 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities established the Known Exploited Vulnerabilities Catalog as a living list of known Common Vulnerabilities and Exposures (CVEs) that carry significant risk to the federal enterprise. BOD 22-01 requires Federal Civilian Executive Branch (FCEB) agencies to remediate identified vulnerabilities by the due date to protect FCEB networks against active threats. See the BOD 22-01 Fact Sheet for more information.

Although BOD 22-01 only applies to FCEB agencies, CISA strongly urges all organizations to reduce their exposure to cyberattacks by prioritizing timely remediation of Catalog vulnerabilities as part of their vulnerability management practice. CISA will continue to add vulnerabilities to the catalog that meet the specified criteria.

Join Brendan Burns, Donovan Brown and others for Azure Open Source Day on March 7th!

We hope you will join us on Tuesday, March 7th to learn how to build intelligent, scalable apps faster and easier at this deep dive into open source and Azure. See the latest open-source technology in action—while connecting with the community of industry leaders, innovators, and open-source enthusiasts.

Register now for this free digital event to:



  • See app-building demos using Azure and the latest in open-source technologies, cloud-native architectures, and microservices.

  • Get tips and best practices for open source from industry experts at companies like HashiCorp, GitHub, and Redis.

  • Learn to build cloud-native apps for relational and nonrelational data with Azure Cosmos DB, now supporting native PostgreSQL.

  • Discover new capabilities in IaaS, PaaS, containers, and serverless computing, including Azure Kubernetes Service (AKS).

  • Explore practical ways to optimize your open-source investments and gain more time for innovation.

  • Learn how to protect your data and business assets by building on a highly secure cloud platform designed to meet your open-source security and compliance needs.

Plus, ask your questions during the live chat Q&A.

Azure Open Source Day


Tuesday, March 7, 2023


9:00 AM–10:30 AM Pacific Time

CISA Releases Two Industrial Control Systems Advisories

CISA released two Industrial Control Systems (ICS) advisories on February 21, 2023. These advisories provide timely information about current security issues, vulnerabilities, and exploits surrounding ICS.

CISA encourages users and administrators to review the newly released ICS advisories for technical details and mitigations:

Educator, OpenAI is about to make you more efficient, here’s how

Have you heard about AI or specific terms like ChatGPT or OpenAI?

You might even have formed an opinion on whether to allow it for your students.

> But have you thought about, for your own part, what you could use it for?

In this article, we explore a specific use case: generating a summary for a course. That takes time, since you need to write the text, find proper links, and handle other details. Let’s be honest: you love teaching, and this is just boring “admin” work.

References:


– What is Azure OpenAI Service? – Azure Cognitive Services | Microsoft Learn
– Introduction to Azure OpenAI Service – Training | Microsoft Learn
– OpenAI API playground

Create the course summary


Let’s pick a subject to teach, “causation algorithms”, and decide on some reasonable prompts to ask OpenAI:


 


Prompt: describe why causation algorithms is so interesting compared to correlation-based ones


OpenAI output


Causation algorithms are gaining more attention due to their ability to identify the causal relationships between two or more variables. Unlike correlation-based algorithms, which merely measure the strength of the association between two variables, causation algorithms can determine the cause-and-effect relationship between those variables. This is a powerful tool for predictive analysis, as it allows for the identification of factors that lead to a certain outcome and the development of strategies to influence that outcome. In other words, causation algorithms can provide a much more sophisticated level of insight into a data set than correlation-based algorithms.


 


Prompt: write me a 500 word article explaining causation algorithms for beginners



OpenAI Output:
 


Causation algorithms are a type of machine learning algorithm that uses statistical techniques to evaluate causal relationships between two or more variables. They are becoming increasingly popular in the world of data science because of their ability to make predictions about the future based on past data. Causal algorithms are used to identify patterns in datasets and then use those patterns to make predictions about future events.  


   


In order to understand how causation algorithms work, it is important to understand the concept of causality. Causality is the relationship between two events, where one event causes the other to happen. For example, if the weather is hot, then the ice cream sales in the store will likely increase. In this case, the hot weather is the cause and the increased ice cream sales is the effect.  


   


Causation algorithms use advanced statistical techniques to identify these causal relationships in datasets. They can look at data from a variety of sources such as weather data, economic data, and customer data to identify patterns and then use those patterns to make predictions. For example, a causation algorithm could look at past sales data and weather data and then make a prediction about future sales based on the relationship between the two datasets.  


   


Causation algorithms can be used for a variety of purposes. They can be used to identify patterns in customer data to better understand consumer behavior. They can also be used to predict future trends in the stock market or in the economy. They can even be used to predict the success of a new product before it is released.  


   


In order for causation algorithms to make accurate predictions, they need to be trained with large amounts of data. This is where machine learning comes in. Machine learning is a type of artificial intelligence that uses algorithms to learn from data and then use that data to make predictions. By training causation algorithms with large datasets, they can become more accurate in their predictions.  


   


Causation algorithms are a powerful tool that can be used to make predictions about the future.  


 


Prompt: suggest me some good exercises on causation algorithms



OpenAI Output:



  1. Create a program that simulates a decision tree.  

  2. Create a program that implements a Naive Bayes classifier.  

  3. Create a program that implements a Markov Decision Process.  

  4. Develop a program that implements a logistic regression classifier.  

  5. Create a program that implements a reinforcement learning algorithm.  

  6. Develop a program that uses Bayesian networks to make predictions.  

  7. Create a program that implements a Hidden Markov Model.  

  8. Build a program that uses a Monte Carlo simulation to predict outcomes.  

  9. Develop a program that uses a Genetic Algorithm to optimize a solution.  

  10. Create a program that uses a neural network to solve a problem.  


Prompt: show me some references on causation algorithms either research papers or books



OpenAI Output:



  1. Causality: Models, Reasoning, and Inference, Judea Pearl (2000)  

  2. Algorithmic Learning Theory, Eyke Hüllermeier (2006)  

  3. Causal Inference in Statistics: A Primer, Judea Pearl (2009)  

  4. Causality: Philosophical Theory Meets Scientific Practice, Kevin Hoover (2009)  

  5. Causality and Prediction in Economics, Marc R. Reinganum (2011)  

  6. Understanding Causality: From Probabilistic Graphical Models to Causal Networks, Judith M. Rolfe and David C. Hogg (2013)  

  7. Causal Inference in Statistics: A Primer, Pearl, J. (2016)  

  8. Causality in Statistics, Thomas S. Richardson and Vincent Dorie (2017)  

  9. Causality: A Very Short Introduction, Stephen Mumford (2018) 


 


Great, we have some raw data at this point that we can assess. We still need to check it for correctness and make sure it sounds good, but this was a lot of help in just seconds, right?
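The four prompts above can also be scripted rather than typed into the playground. The sketch below uses the pre-1.0 `openai` Python package with its Azure configuration; the endpoint, key, API version, and deployment name are placeholders, and the import is deferred so the sketch loads even where the package isn't installed.

```python
# The four course-summary prompts used above.
prompts = [
    "describe why causation algorithms is so interesting compared to "
    "correlation-based ones",
    "write me a 500 word article explaining causation algorithms for beginners",
    "suggest me some good exercises on causation algorithms",
    "show me some references on causation algorithms either research papers "
    "or books",
]

def complete(prompt: str) -> str:
    """Send one prompt to an Azure OpenAI completions deployment (sketch)."""
    import openai  # deferred so this file loads without the package installed
    openai.api_type = "azure"
    openai.api_base = "https://<your-resource>.openai.azure.com/"
    openai.api_version = "2022-12-01"      # assumption: verify current version
    openai.api_key = "<your-key>"
    response = openai.Completion.create(
        engine="<your-deployment>",        # e.g. a text-davinci-003 deployment
        prompt=prompt,
        max_tokens=800)
    return response["choices"][0]["text"].strip()

# Calling complete(p) for each prompt yields the raw sections to edit.
sections = {p: None for p in prompts}
```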


 


Editorial


Next, let’s do some editing: assign some reasonable headers and see if we want to adjust the text somewhat.


 


Why causation algorithms are such an interesting area right now


Causation algorithms are gaining more attention due to their ability to identify the causal relationships between two or more variables. Unlike correlation-based algorithms, which merely measure the strength of the association between two variables, causation algorithms can determine the cause-and-effect relationship between those variables. This is a powerful tool for predictive analysis, as it allows for the identification of factors that lead to a certain outcome and the development of strategies to influence that outcome. In other words, causation algorithms can provide a much more sophisticated level of insight into a data set than correlation-based algorithms.


 


What are causation algorithms?


Causation algorithms are a type of machine learning algorithm that uses statistical techniques to evaluate causal relationships between two or more variables. They are becoming increasingly popular in the world of data science because of their ability to make predictions about the future based on past data. Causal algorithms are used to identify patterns in datasets and then use those patterns to make predictions about future events.  


   


In order to understand how causation algorithms work, it is important to understand the concept of causality. Causality is the relationship between two events, where one event causes the other to happen. For example, if the weather is hot, then the ice cream sales in the store will likely increase. In this case, the hot weather is the cause and the increased ice cream sales is the effect.  


   


Causation algorithms use advanced statistical techniques to identify these causal relationships in datasets. They can look at data from a variety of sources such as weather data, economic data, and customer data to identify patterns and then use those patterns to make predictions. For example, a causation algorithm could look at past sales data and weather data and then make a prediction about future sales based on the relationship between the two datasets.  


   


Causation algorithms can be used for a variety of purposes. They can be used to identify patterns in customer data to better understand consumer behavior. They can also be used to predict future trends in the stock market or in the economy. They can even be used to predict the success of a new product before it is released.  


   


In order for causation algorithms to make accurate predictions, they need to be trained with large amounts of data. This is where machine learning comes in. Machine learning is a type of artificial intelligence that uses algorithms to learn from data and then use that data to make predictions. By training causation algorithms with large datasets, they can become more accurate in their predictions.  


   


Causation algorithms are a powerful tool that can be used to make predictions about the future.  


 


Some problems we will study throughout the course



  1. Create a program that simulates a decision tree.  

  2. Create a program that implements a Naive Bayes classifier.  

  3. Create a program that implements a Markov Decision Process.  

  4. Develop a program that implements a logistic regression classifier.  

  5. Create a program that implements a reinforcement learning algorithm.  

  6. Develop a program that uses Bayesian networks to make predictions.  

  7. Create a program that implements a Hidden Markov Model.  

  8. Build a program that uses a Monte Carlo simulation to predict outcomes.  

  9. Develop a program that uses a Genetic Algorithm to optimize a solution.  

  10. Create a program that uses a neural network to solve a problem.  


Syllabus



  1. Causality: Models, Reasoning, and Inference, Judea Pearl (2000)  

  2. Algorithmic Learning Theory, Eyke Hüllermeier (2006)  

  3. Causal Inference in Statistics: A Primer, Judea Pearl (2009)  

  4. Causality: Philosophical Theory Meets Scientific Practice, Kevin Hoover (2009)  

  5. Causality and Prediction in Economics, Marc R. Reinganum (2011)  

  6. Understanding Causality: From Probabilistic Graphical Models to Causal Networks, Judith M. Rolfe and David C. Hogg (2013)  

  7. Causal Inference in Statistics: A Primer, Pearl, J. (2016)  

  8. Causality in Statistics, Thomas S. Richardson and Vincent Dorie (2017)  

  9. Causality: A Very Short Introduction, Stephen Mumford (2018) 


 


There you have it: a course summary. Depending on what you’re trying to create, you might need to verify the output and do some additional editing. Hopefully, you’ve seen the potential of this powerful tool and what you, as an educator, can use it for.