Extracting SAP data using OData – Part 7 – Delta extraction using SAP Extractors and CDS Views








Before implementing data extraction from SAP systems, please always verify your licensing agreement.

 


Seven weeks passed in the blink of an eye, and we are at the end of the summer with OData-based extraction using Synapse Pipelines. Each week I published a new episode revealing best practices for copying SAP data to the lake and making it available for further processing and analytics. Today's episode is a special one. Not only is it the last one in the series, but I'm also going to show you some cool features around data extraction that pushed me into writing the whole series. Ever since I started working on it, this was the main topic I wanted to describe. Initially, I planned to cover it as part of the Your SAP on Azure series that I've been running for the last couple of years, but as there are many intriguing concepts in OData-based extraction and I wanted to show you as much as I could, I decided to run a separate set of posts. I hope you enjoyed it and learnt something new.


 


Last week I described how you can design a pipeline to extract only new and changed data using the timestamps available in many OData services. By using filters, we select only a subset of information, which makes the processing much faster. But the solution I shared works well only for services where the timestamp is available as a single field. For others, you have to enhance the pipeline and make the already complex expressions even more complicated.


 


There is a much better approach. Instead of storing the watermark in the data store and then using it as a filter criterion, you can let the SAP system manage the delta changes for you. This way, without writing any expression to compare timestamps, you can extract only recently updated information.


 


The concept isn't new. SAP extractors have been available for as long as I can remember and are commonly used in SAP Business Warehouse. Nowadays, recent SAP releases even provide analytical CDS views that support data extraction scenarios, including delta management! Most importantly, you can expose both SAP extractors and CDS views as OData services, making them ideal data sources.


 


EXPOSE EXTRACTORS AND CDS VIEWS AS ODATA


 









There is a GitHub repository with source code for each episode. Learn more:


https://github.com/BJarkowski/synapse-pipelines-sap-odata-public



 


The process of exposing extractors and CDS views as OData is pretty straightforward. I think the bigger challenge is identifying the right source of data.


 


You can list the extractors available in your system in transaction RSA5. Some of them may require further processing before they can be used.


 


image001.png


 


When you double-click the extractor name, you can list the exposed fields, together with information on whether the data source supports delta extraction.


 


image003.png


 


In the previous episode, I mentioned that there is no timestamp information in the OData service API_SALES_ORDER_SRV for the entity A_SalesOrderItem. Therefore, each time we had to extract the full dataset, which was not ideal. The SAP extractor 2LIS_11_VAITM, which I'm going to use today, solves that problem.


 


I found it much more difficult to identify CDS views that support data extraction and delta management. There is a View Browser Fiori application that lists the CDS views available in the system, but it lacks some functionality to make full use of it – for example, you can't filter on annotations. The only workaround I found was to enter @Analytics.dataextraction.enabled:true in the search field. This way you can at least identify CDS views that can be used for data extraction, but to check whether they support delta management you have to inspect each view's properties manually.


 


image004.png


 


Some CDS views still use a timestamp column to identify new and changed information, but as my source system is SAP S/4HANA 1909, I can benefit from the enhanced Change Data Capture capabilities, which use the SLT framework and database triggers to identify delta changes. I think it's pretty cool. If you consider using CDS views to extract SAP data, please check the fantastic blog posts published by Simon Kranig. He nicely explains the mechanics of data extraction using CDS views.


https://blogs.sap.com/2019/12/13/cds-based-data-extraction-part-i-overview/


 


I'll be using the extractor 2LIS_11_VAITM to get sales order item details and the CDS view I_GLAccountLineItemRawData to read GL documents. To expose an object as an OData service, create a new project in transaction SEGW:


image006.png


 


Then select Data Model and open the context menu. Choose Redefine -> ODP Extraction.


 


image007.png


 


Select the object to expose. If you want to use an extractor, select DataSources / Extractors as the ODP context and provide the name in the ODP Name field:


 


image008.png


 


To expose a CDS view, we need to identify its SQL name. I found it easiest to use the View Browser and check the SQLViewName annotation:


image010.png


 


Then, in transaction SEGW, create a new project and follow exactly the same steps as for exposing extractors. The only difference is the Context, which should be set to ABAP Core Data Services.


 


image012.png


 


The further steps are the same whether you work with an extractor or a CDS view. Click Next. The wizard automatically creates the data model and OData service; you only have to provide a description.


 


image014.png


 


Click Next again to confirm. In the pop-up window select all artefacts and click Finish.


 


image016.png


 


The last step is to generate the runtime objects, which you can do from the menu: Project -> Generate. Confirm the model definition, and after a minute your OData service will be ready for registration.


 


image018.png


 


Open the Activate and Maintain Services report (/n/iwfnd/maint_service) to activate the created OData services. Click the Add button and provide the SEGW project name as the Technical Service Name:


 


image019.png


 


Click Add Selected Services and confirm your input. You should see a pop-up window saying the OData service was created successfully. Verify that the system alias is correctly assigned and the ICF node is active:


 


image021.png


 


The OData service is now published, and we can start testing it.


 


EXTRACTING DATA FROM DELTA-ENABLED ODATA SERVICES


 


Let's take a closer look at how data extraction works in a delta-enabled OData service.


You have probably noticed during service creation that extractors and CDS views give you two entities to use:



  • One representing the data source model, with a name starting with EntityOf<objectName>, FactsOf<objectName> or AttrOf<objectName>, depending on the type of extractor or view

  • One exposing information about current and past delta tokens, with a name starting with DeltaLinksOf<objectName>


By default, if you send a request to the first entity, you will retrieve the full dataset, just as with any other OData service we covered in previous episodes. The magic happens when you add a special request header:


 


 

Prefer: odata.track-changes

 


 


This header tells the system that you want it to keep track of delta changes for this OData source. As a result, the response contains, together with the initial full dataset, an additional field __delta with a link you can use to retrieve only new and changed information.
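To make this concrete, here is a sketch of what such a call and response can look like. The service name below is a placeholder, the entity name just follows the EntityOf<objectName> pattern mentioned above, and the token value is made up; the exact URLs depend on how you named your SEGW project:

GET /sap/opu/odata/sap/Z_VAITM_SRV/EntityOf2LIS_11_VAITM?$format=json
Prefer: odata.track-changes

{
  "d": {
    "results": [ ...the full initial dataset... ],
    "__delta": "https://<host>:<port>/sap/opu/odata/sap/Z_VAITM_SRV/EntityOf2LIS_11_VAITM?!deltatoken='D20211231120000_000001000'"
  }
}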


 


image023.png


The additional header subscribes you to the delta queue, which tracks data changes. If you follow the __delta link, which is basically the OData URL with the extra query parameter !deltatoken, you will retrieve only the updated information and not the full dataset.


image025.png


In the SAP system, there is a transaction ODQMON that lets you monitor and manage subscriptions to the delta queue.


image027.png


You can query the second entity, with the name starting with DeltaLinksOf<EntityName>, to receive a list of the current and past delta tokens.


image029.png


We will use both entities to implement a pipeline in Synapse. First, we will check whether there is already an open subscription. If not, we'll proceed with the initial full data extraction. Otherwise, we will use the latest delta token to retrieve only the changes made since the previous extraction.


 


IMPLEMENTATION


 


Open Synapse Studio and create a new pipeline. It will be triggered by the metadata pipeline based on the ExtractionType field. Previously, we used the keywords Delta and Full to decide which pipeline should be started. We will keep the same logic, but we'll define a new keyword, Deltatoken, to identify delta-enabled OData services.


 


I have added both exposed OData services to the metadata store, together with the entity names. We won't implement any additional selection or filtering here (and I'm sure you know how to do it if you need it), so you can leave the Select and Filter fields empty. Don't forget to enter the batch size, as it's going to be helpful with large datasets.


 


image031.png


 


Excellent. As I mentioned earlier, to subscribe to the delta queue we have to pass an additional request header. Unfortunately, we can't do that at the dataset level (as we could for a REST type connection), but there is a workaround we can use. When you define an OData linked service, you have the option of passing additional authentication headers. The main purpose of this functionality is to provide an API key for services that require that sort of authentication, but nothing stops us from reusing it to pass our own custom header.


 


There is just one small inconvenience you should know about. Because the field is meant to store an authentication key, its value is protected against unauthorized access. This means that every time you edit the linked service, you have to retype the header value, exactly as you would a password.


 


Let’s make changes to the Linked Service. We need to create a parameter that we will use to pass the header value:


 


 

"Header": {
	"type": "String"
}

 


 


Then, to define the authentication header, add the following code under typeProperties:


 


 

"authHeaders": {
    "Prefer": {
        "type": "SecureString",
        "value": "@{linkedService().Header}"
    }
},

 


 


For reference, below, you can find the full definition of my OData linked service.


 


 

{
    "name": "ls_odata_sap",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "type": "OData",
        "annotations": [],
        "parameters": {
            "ODataURL": {
                "type": "String"
            },
            "Header": {
                "type": "String"
            }
        },
        "typeProperties": {
            "url": "@{linkedService().ODataURL}",
            "authenticationType": "Basic",
            "userName": "bjarkowski",
            "authHeaders": {
                "Prefer": {
                    "type": "SecureString",
                    "value": "@{linkedService().Header}"
                }
            },
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "s4hana"
            }
        },
        "connectVia": {
            "referenceName": "SH-IR",
            "type": "IntegrationRuntimeReference"
        }
    }
}

 


 


The above change requires us to provide the header every time we use the linked service. Therefore, we need to create a new parameter in the OData dataset to pass the value. Then we can reference it using an expression:


image035.png


image037.png


 


In Synapse, every parameter is mandatory and can't be made optional. As we use the same dataset in every pipeline, we have to provide the parameter value in every activity that uses the dataset. I use the following expression to pass an empty value.


 


 


 

@coalesce(null)

 


 


 


Once we have enhanced the linked service and corrected all activities that use the affected dataset, it's time to add a Lookup activity to the new pipeline. We will use it to check whether there are any open subscriptions in the delta queue. The request should be sent to the DeltaLinksOf entity. Provide the following expressions:


 


 

ODataURL: @concat(pipeline().parameters.URL, pipeline().parameters.ODataService, '/')
Entity: @concat('DeltaLinksOf', pipeline().parameters.Entity)
Header: @coalesce(null)

 


 


 


image039.png


 


To get the name of the entity that exposes delta tokens, I concatenate 'DeltaLinksOf' with the entity name defined in the metadata store.


 


Ideally, to retrieve the latest delta token, we would pass the $orderby query parameter to sort the dataset by the CreatedAt field. Surprisingly, that is not supported by this OData service. Instead, we'll pull all records and use an expression to read the most recent delta token.


 


Create a new variable in the pipeline and add a Set Variable activity. The expression below checks whether any delta tokens are available and assigns the latest one to the variable.


 


image041.png
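The screenshot shows the exact expression I used. For reference, a variant along the following lines should work; it is only a sketch, assuming the Lookup activity is named l_deltatoken, returns all rows (First row only unchecked), and that the delta-link entity exposes the token in a field named DeltaToken; adjust the names to match your pipeline:

@if(greater(activity('l_deltatoken').output.count, 0), last(activity('l_deltatoken').output.value).DeltaToken, '')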


Add a Copy Data activity to the pipeline. The ODataURL and Entity parameters on the Source tab use the same expressions as in the other pipelines, so you can copy them and I won't repeat them here. As we want to enable the delta capabilities, provide the following value as the header:


 


 

odata.track-changes

 


 


Change the Use Query setting to Query. The following expression checks the content of the deltatoken variable. If it's not empty, its value is concatenated with the !deltatoken query parameter and passed to the SAP system. Simple and effective!


 


 

@if(empty(variables('deltatoken')), '', concat('!deltatoken=''', variables('deltatoken'), ''''))

 


 


image043.png


 


Don’t forget to configure the target datastore in the Sink tab. You can copy all settings from one of the other pipelines – they are all the same.


 


We're almost done! The last thing to do is add another case to the Switch activity in the metadata pipeline so that it triggers the newly created flow whenever it finds the Deltatoken value in the metadata store.


 


image045.png


 


We could finish here and start testing. But there is one more awesome thing I want to show you!


 


The fourth part of the series focused on paging. To deal with very large datasets, we implemented a special routine to split requests into smaller chunks. With SAP extractors and CDS views exposed as OData, we don't have to build a similar architecture. They support server-side pagination, and we just have to pass another header value to enable it.


 


Currently, in the Copy Data activity, we're sending odata.track-changes as the header value. To enable server-side paging, we have to extend it with odata.maxpagesize=<batch_size>.
Let's make the correction in the Copy Data activity and replace the Header parameter with the following expression:


 


 

@concat('odata.track-changes, odata.maxpagesize=', pipeline().parameters.Batch)

 


 


 


image047.png


Server-side pagination is a great improvement compared with the solution I described in episode four.


 


EXECUTION AND MONITORING


 


I will run two tests to verify that the solution works as expected. First, after ensuring there are no open subscriptions in the delta queue, I will extract all records and initialize the delta load. Then I'll change a couple of sales order line items and run the extraction process again.


 


Let’s check it!


 


image049.png


 


The first extraction went fine. Out of the six child OData services, two were processed by the pipeline supporting delta tokens, which matches what I defined in the metadata database. Let's take a closer look at the extraction details. I fetched 379 sales order line items and 23,316 general ledger line items, which seems to be the correct amount.


 


image051.png


 


In the ODQMON transaction, I can see two open delta queue subscriptions, one for each object, which proves the header was attached to the request. I then changed one sales order line item and added an extra one. Let's see if the pipeline picks them up.


 


image053.png


Wait! Three records? How is that possible if I only made two changes?


 


Some delta-enabled OData services not only track new and changed items but also record deleted information. That's especially useful in the case of sales orders. Unlike a posted accounting document, which can only be 'removed' by a reversal posting, a sales order stays open for changes much longer. Therefore, to have consistent data in the lake, we should also capture deletions.


 


But still, why did I extract three changes if I only made two? Because that's how this extractor works. Instead of sending only the updated row, it first marks the existing row for deletion and then creates a new one with the correct data.


 


image055.png


 


So the only thing left is to validate the server-side paging. I have to admit it was a struggle, as I couldn't find a place in Synapse Pipelines to verify the number of chunks. Eventually, I used the ICM Monitor to check the logs on the SAP application server. There I found an entry suggesting that paging actually took place – can you see the !skiptoken query parameter received by the SAP system?


 


image057.png


Do you remember that when you run a delta-enabled extraction, there is an additional field __delta with a link to the delta dataset? Server-side paging works in a very similar way. At the end of each response, there is an extra field __skip with a link to the next chunk of data. Both solutions use tokens passed as query parameters. As we can see, the URL contains the token, which proves Synapse used server-side pagination to read all the data.
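Based on the fields described above, the tail of a paged response looks something like this (host, service and token values are placeholders; only the shape matters here):

{
  "d": {
    "results": [ ...one page of data... ],
    "__skip": "https://<host>:<port>/sap/opu/odata/sap/Z_VAITM_SRV/EntityOf2LIS_11_VAITM?!skiptoken='000000000000000002'"
  }
}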


 


It seems everything is working fine! Great job!


 


EPILOGUE


 


There won't be another episode of the OData extraction series next week. Over the last seven weeks, I covered all the topics I consider essential to building a reliable data extraction process using OData services. Initially, we built a simple pipeline that could only process a single, small OData service per execution. It worked, but it was quite annoying: whenever we wanted to extract data from a couple of services, we had to modify the pipeline. Not an ideal solution.


 


But I would be lying if I said we didn't improve! Things got much better over time. In the second episode, we introduced pipeline parameters that eliminated the need for manual changes. Another episode brought a metadata store to manage all services from a single place. The next two episodes focused on performance: I introduced the concept of paging to deal with large datasets, and we also discussed selects and filters to reduce the amount of data to replicate. The last two parts were all about delta extraction. I especially wanted to cover delta processing using extractors and CDS views, as I think it's powerful yet not commonly known.


 


Of course, the series doesn't cover every aspect of data extraction, but I hope it gives you a strong foundation to find solutions and improvements on your own. I had a great time writing it, and I learnt a lot! Thank you!


 

Learning from Expertise #6: Where is my server storage taken – Azure MySQL?


Overview:


We sometimes see customers asking about a discrepancy between the reported server storage usage and their expectations based on the actual data size. In this blog we will go through what can cause this and how to get out of the situation.


 


Solution:


In this section, I list some insights and recommendations that help you break down the storage usage.


 


1) First and foremost, monitor the server storage usage using the available Azure MySQL Metrics:


  • Storage percentage (Percent): The percentage of storage used out of the server's maximum.

  • Storage used (Bytes): The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.

  • Server Log storage percent (Percent): The percentage of server log storage used out of the server's maximum server log storage.

  • Server Log storage used (Bytes): The amount of server log storage in use.

  • Server Log storage limit (Bytes): The maximum server log storage for this server.

  • Storage limit (Bytes): The maximum storage for this server.

 


Ahmed_S_Mahmoud_1-1640341878019.png


2) The following queries can help you gain insight into the database storage usage:



  • Run the query below to see each schema's usage of data and index space:


 


 

SELECT table_schema, SUM(data_length + index_length)/1024/1024 AS total_mb, SUM(data_length)/1024/1024 AS data_mb, SUM(index_length)/1024/1024 AS index_mb, COUNT(*) AS tables, CURDATE() AS today 
FROM information_schema.tables 
GROUP BY table_schema ORDER BY 2 DESC;

 


 




  • Use the query below to get insight into tablespace capacity:

SELECT FILE_NAME, TABLESPACE_NAME, TABLE_NAME, ENGINE, INDEX_LENGTH, TOTAL_EXTENTS, EXTENT_SIZE, (TOTAL_EXTENTS * EXTENT_SIZE)/1024/1024 AS "size in MB" 
from INFORMATION_SCHEMA.FILES
ORDER BY 8 DESC;




  • Filter the same view to get temporary tablespace information:


SELECT FILE_NAME, TABLESPACE_NAME, TABLE_NAME, ENGINE, INDEX_LENGTH, TOTAL_EXTENTS, EXTENT_SIZE, (TOTAL_EXTENTS * EXTENT_SIZE)/1024/1024 AS "size in MB" 
from INFORMATION_SCHEMA.FILES 
where file_name like '%ibtmp%';




  • To get the actual file size on disk, run the query below against INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES:

SELECT * FROM information_schema.INNODB_SYS_TABLESPACES order by file_size desc;


  • Look for the top 10 largest tables using the query below:




SELECT CONCAT(table_schema, '.', table_name),
        CONCAT(ROUND(table_rows / 1000000, 2), 'M')                                    rows,
        CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G')                    DATA,
        CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G')                   idx,
        CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
        ROUND(index_length / data_length, 2)                                           idxPct
 FROM   information_schema.TABLES
 ORDER  BY data_length + index_length DESC
 LIMIT  10;

 


3) Examine the following server parameters, which might contribute to storage usage growth



innodb_file_per_table: This setting tells InnoDB whether it should store data and indexes in the shared system tablespace or in a separate .ibd file for each table. Having a file per table enables the server to reclaim space when tables are dropped, truncated, or rebuilt. Databases containing a large number of tables should not use the file-per-table configuration. For more information, see MySQL :: MySQL 5.7 Reference Manual :: 14.6.3.2 File-Per-Table Tablespaces.
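As a quick check, you can see how your server is currently configured with the statement below (on Azure Database for MySQL the value itself is changed through the server parameters settings rather than from a client session):

SHOW VARIABLES LIKE 'innodb_file_per_table';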


 



binlog_expire_logs_seconds: If you set binlog_expire_logs_seconds to a high value, the binary logs will not be purged soon enough, which can lead to an increase in storage usage and billing. For more information, see the MySQL documentation. You can monitor binary log usage using the MySQL command:

show binary logs;
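To see the current retention setting mentioned above, you can also run:

SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';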


 


It is worth mentioning that when you configure the slow query log's log_output parameter to “File”, slow query logs are written to the local server storage as well as to Azure Monitor Diagnostic Logs. However, there is a 7 GB storage limit for the server logs, which is available free of cost and cannot be extended. More information is available in the Azure MySQL documentation: Slow query logs – Azure Database for MySQL | Microsoft Docs.

 

4) Leverage MySQL OPTIMIZE TABLE or Rebuild Tables/Indexes to reclaim the unused space.


Bloated tables can be cleaned up by running OPTIMIZE TABLE or rebuilding the tables/indexes to reclaim unused space.
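For illustration, with a hypothetical schema and table name, the statement looks like this (on InnoDB it is internally mapped to a rebuild followed by an analyze, so the warning "Table does not support optimize, doing recreate + analyze instead" is expected):

OPTIMIZE TABLE mydb.orders;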

 








Note: OPTIMIZE TABLE will trigger an exclusive table lock. It is recommended that you do NOT run it during peak hours.


 


5)  Enable Storage Auto-grow and set up an alert


Last but not least, we always recommend that you enable storage auto-grow or set up an alert to notify you when your server storage is approaching the threshold, so you can avoid getting into a read-only state. For more information, see the documentation on how to set up an alert.


 


Ahmed_S_Mahmoud_0-1640338983257.png

 








Note: Keep in mind that storage can only be scaled up, not down.

 


I hope you find this article helpful. If you have any feedback, please do not hesitate to provide it in the comment section below.


 


Ahmed S. Mazrouh


Festive Friday Five: C# 10, Teams Tips, More!


norm.jpg


When a List Isn’t Enough: A Case for Dataverse and Model-Driven Apps


Norm Young is an Office Apps & Services MVP working as the Director of Collaborative Analytics at UnlimitedViz, the makers of tyGraph. He is focused on SharePoint and the Power Platform and shares his passion through his blog and speaking at conferences. Norm is also an active community contributor and helps to organize the Citizen Developers User Group. Follow him on Twitter @stormin_30 and visit his blog.


image.png


How to use Azure proximity placement groups #Azure #SAP #Latency


Robert Smit is an EMEA Cloud Solution Architect at Insight.de and has been a Microsoft MVP for Cloud and Datacenter since 2009. Robert has over 20 years of IT experience in the educational, health-care and finance industries. His time in the trenches of IT gives him the knowledge and insight that allow him to communicate effectively with IT professionals. Follow him on Twitter at @clusterMVP


jaliya.jpg


C# 10.0: Nice Little Features


Jaliya Udagedara is a Developer Technologies MVP based in Auckland, New Zealand. Originally from Sri Lanka, Jaliya is currently working as a Technical Lead for a US-based software development company. He has found his most significant interest in the world of Microsoft Software Development Technologies and Azure Stack. He likes to write blog posts and maintains his personal blog. Follow him on Twitter @JaliyaUdagedara


lee-englestone-headshot.jpg


Building IsChristmasTree with CustomVision.ai


Lee Englestone is an innovative Dev Manager who likes to operate in the area where technology, product, people and business strategy converge. Lee, from the UK, is constantly working on side projects, building things and looking for ways to educate the .NET community in great technologies. He is the creator of Visual Studio Tips, Hackathon Tips and Xamarin Arkit. For more, see Lee’s blog and Twitter @LeeEnglestone.


ChrisH-1Edit.PNG


Teams Real Simple with Pictures: Aligning Teams Preview with the Office Current Channel Preview


Chris Hoard is a Microsoft Certified Trainer Regional Lead (MCT RL), Educator (MCEd) and Teams MVP. With over 10 years of cloud computing experience, he is currently building an education practice for Vuzion (Tier 2 UK CSP). His focus areas are Microsoft Teams, Microsoft 365 and entry-level Azure. Follow Chris on Twitter at @Microsoft365Pro and check out his blog here.

Microsoft 365 Developer Community Call recording – 23rd of December, 2021


recording-23rd-dec.png


 


Call Summary


It's the perfect time to visit the Microsoft 365 tenant – script samples gallery (123 scenarios and 168 scripts) today. Sign up for and attend an AMA and other events in January-February hosted by Sharing is Caring. At the same time, sign up for the PnP Recognition Program. PnP project version releases this week include PnP Framework v1.8.0 GA, PnP Core SDK v1.5.0 GA, PnP PowerShell v1.9.0 GA and yoteams-build-core Next: v1.6.0-next.1. To see current releases and the latest updates/nightly builds, access the repos via the links in the table below. 8 new/updated script samples, 1 Microsoft Teams sample and 5 Power Apps samples were delivered this week!


 


Open-source project status: (Bold indicates new this call)

  • PnP .NET Libraries – PnP Framework: v1.8.0 GA with .NET 6.0 support added

  • PnP .NET Libraries – PnP Core SDK: v1.5.0 GA with .NET 6.0 support added

  • PnP PowerShell: v1.9.0 GA. In progress: V2 POC, Prepping for v1.8, nightly releases

  • Yo teams – generator-teams: v3.5.0 GA; v4.0.0-next

  • Yo teams – yoteams-build-core: v1.5.0 GA; Next: v1.6.0-next.1

  • Yo teams – yoteams-deploy: v1.1.0 GA

  • Yo teams – msteams-react-base-component: v3.1.1

  • Microsoft Graph Toolkit (MGT): v2.3.1 GA. Preparing v2.3.1 release, working on v3.0.0 – aligning all Toolkit components to Fluent UI Web Components

 


* Note:  While version releases are periodic, nightly releases are nightly!  Subscribe to nightly releases for the latest capabilities. 


    


The host of this call was David Warner II (Catapult Systems) | @DavidWarnerII.   Q&A takes place in chat throughout the call.


 


 


Actions:  



 


Microsoft Teams Development Samples: (https://aka.ms/TeamsSampleBrowser)



 


Microsoft Power Platform Samples: (https://aka.ms/powerplatform-samples)



 


Script Samples: (https://aka.ms/script-samples)


5 new and 3 updated scenario samples contributed by


 



 


– Many thanks!


 


Together Mode!


 


PnP-Calls-TogetherMode-700W.gif


 


Together here during the holidays because – why not? Great seeing everyone today. Happy holidays and new year to you and your family.


 


Demos delivered in this session


Teams Meetings Apps: Emoji feedback with bot and Adaptive Card universal action model – enables meeting participants to provide feedback at the end of the meeting using a simple emoji. The action is triggered by the end-of-meeting event in the Activity Handler. The bot sends an adaptive card to the meeting's chat with 5 emoji buttons requesting feedback. Once they have voted, participants see the current sentiment of all voters. Uses the adaptive card universal action model (UAM).


Introduction to Microsoft 365 Universal Sample Gallery – opens with a clever Night Before Christmas story of expectations, deadlines, and miracles. A wish granted: a single curated access point for samples from GitHub repos encompassing the Microsoft 365 suite of products. Samples are vetted, metadata tagged and refinable by product, technology, author, compatibility…, and include supporting documents, an author profile, and a demo video if one exists. The site is launched, with instructions on how to request and/or deliver samples.


 


Thank you for your work. Samples are often showcased in Demos.    Request a Demo spot on the call https://aka.ms/m365pnp/request/demo


 


Topics covered in this call



  • PnP .NET library updates – Paolo Pialorsi (PiaSys.com) | @paolopia 6:08

  • PnP PowerShell updates – Paolo Pialorsi (PiaSys.com) | @paolopia – 7:46

  • yo Teams updates – David Warner II (Catapult Systems) | @DavidWarnerII – 8:40

  • Microsoft Graph Toolkit updates – David Warner II (Catapult Systems) | @DavidWarnerII – 9:14

  • Microsoft Script Samples – Paul Bullock (CaPa Creative Ltd) | @pkbullock 2:22

  • Microsoft Teams Samples – Bob German (Microsoft) | @Bob1German  9:55

  • Microsoft Power Platform Samples – April Dunnam (Microsoft) | @aprildunnam – 11:22

  • Demo 1:  Teams Meetings Apps: Emoji feedback with bot and Adaptive Card universal action model – Markus Möller (Avanade) | @Moeller2_0 – 13:36

  • Demo 2: Introduction to Microsoft 365 Universal Sample Gallery – Hugo Bernier | @bernierh & Bob German | @Bob1German – 25:09


 


Resources:


Additional resources around the covered topics and links from the slides.



 


General resources:



 


Upcoming Calls | Recurrent Invites:



 


General Microsoft 365 Dev Special Interest Group bi-weekly calls are targeted at anyone who's interested in general Microsoft 365 development topics. This includes Microsoft Teams, Bots, Microsoft Graph, CSOM, REST, site provisioning, PnP PowerShell, PnP Sites Core, Site Designs, Microsoft Flow, PowerApps, column formatting, list formatting, and related topics. More details on the Microsoft 365 community are available at http://aka.ms/m365pnp. We also welcome community demos if you are interested in doing a live demo in these calls!


 


You can download the recurring invite from http://aka.ms/m365-dev-sig. Welcome, and join in the discussion. If you have any questions, comments, or feedback, feel free to provide your input as comments to this post as well. More details on the Microsoft 365 community and options to get involved are available from http://aka.ms/m365pnp.




 


“Sharing is caring”


Microsoft 365 PnP team, Microsoft – 24th of December 2021

Get creation dates of all Azure resources under an Azure subscription


Overview:


One of our Azure customers raised a support ticket to find out the creation dates for all their resources on their Azure subscription.


This blog shows you one of the ways to achieve that.


 


Step 1:


Create an Azure service principal with the az ad sp create-for-rbac command. Make sure to copy the output, as it is required in the next steps.


 


Input


az ad sp create-for-rbac --name serviceprincipalname --role reader


 


Output


Creating ‘reader’ role assignment under scope ‘/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’


The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli


‘name’ property in the output is deprecated and will be removed in the future. Use ‘appId’ instead.


{
  "appId": "xxxxxxx",
  "displayName": "serviceprincipalname",
  "name": "xxxxxxx",
  "password": "xxxxxxx",
  "tenant": "xxxxxxx"
}


 


 


Step 2:


Generate the bearer token using Postman client – Postman API Platform


 


For a POST call, type in the URL below with your tenant ID:


https://login.microsoftonline.com/xxxxtenant-IDxxxxxx/oauth2/token


 


RoshnaNazir_0-1640261795209.png


 


Click on “Body” and type in the details from the output of Step 1 as follows.


Note: Client ID = App ID.


 


Content-Type: application/x-www-form-urlencoded


grant_type=client_credentials


client_id=xxxxxxxxxxxxxxxxxxxxxx


client_secret=xxxxxxxxxxxxxxxxxxxxxxxxxx


resource=https://management.azure.com/
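If you prefer the command line over Postman, the same token request can be made with curl, for example as sketched below; replace the placeholders with the values from Step 1:

curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=<app-id>" \
  -d "client_secret=<password>" \
  -d "resource=https://management.azure.com/"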


 


RoshnaNazir_1-1640261795214.png


 


Click on “Send” and you will see a JSON response like the one below, containing a bearer/access token.


RoshnaNazir_2-1640261795219.png


 


Copy the access token; it will be used in Step 3 for a GET call.


 


 


Step 3:


Make the GET call to retrieve the creation dates of the resources in your subscription. You can also do this for a single resource by filtering as needed in the URL.


Get URL – https://management.azure.com/subscriptions/XXXX-Your-Subscription-IDXXXX/resources?api-version=2020-06-01&$expand=createdTime&$select=name,createdTime
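The same GET request can also be issued with curl, passing the token from Step 2 in an Authorization header (a sketch; replace the placeholders):

curl -H "Authorization: Bearer <access-token>" \
  'https://management.azure.com/subscriptions/<subscription-id>/resources?api-version=2020-06-01&$expand=createdTime&$select=name,createdTime'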


 


Select “Bearer Token” in the Authorization tab and paste the access token copied from Step 2.


RoshnaNazir_3-1640261795222.png


 


Click on Send and enjoy the results you wanted!


RoshnaNazir_4-1640261795224.png


 


Credits to @P V SUHAS for the guidance.

Office 365 receives Multi-Tier Cloud Security (MTCS) SS584:2020 Level-3 Certification (2021)


Multi-Tier Cloud Security (MTCS) SS584:2020 Overview


 


MTCS, a cloud security standard, was developed by the Information Technology Standards Committee (ITSC) in Singapore, and its first version was published in November 2013. The ITSC promotes and facilitates national programs to standardize IT and communications, and Singapore's participation in international standardization activities. In 2014, Microsoft became one of the first cloud service providers to receive the MTCS certification, for both the Microsoft Azure cloud platform and Office 365 services.


 


In November 2021, Microsoft again successfully attained the Multi-Tier Cloud Security (MTCS) Standard for Singapore Level-3 High Impact certification for the Office 365 family of services, this time against the renewed version, SS 584:2020. The Office 365 services included in scope are:


 



  • Exchange Online

  • SharePoint

  • Information Protection

  • Microsoft Teams (including Azure Communication Services)

  • Skype for Business

  • Office Online

  • Office Services Infrastructure

  • Microsoft/Office 365 Suite user experience

  • Delve/Loki


MTCS certification blog image.png


 


The renewed SS 584:2020 standard was approved and published in October 2020. Compared with the previous SS 584:2015 standard, it introduces major updated requirements, including:


 



  1. List of applicability and compensatory controls with justifications.

  2. Detailed Risk Assessment Requirements that may apply to cloud services.

  3. Third-party providers must receive compliance or attestations to international standards and provide access to the evidence associated.

  4. Security hardening requirements and service availability for Edge Node services that are used for performance enhancement.


 


By providing the implementation details of the management and technical controls in place along with their supporting evidence, Office 365 was able to demonstrate how its information systems can support the Level 3 confidentiality, integrity, and availability requirements from the standard. This Level 3 certification means that in-scope Office 365 cloud services can host high-impact data for regulated organizations with much stricter security requirements. It’s required for certain cloud solution implementations by the Singapore government.


 


The certification is valid for three years, with a yearly surveillance audit conducted.



 


To whom does the standard apply?


 


It applies to businesses in Singapore that purchase cloud services requiring compliance with the MTCS standard.


 


What are the differences between MTCS security levels?


 


MTCS has a total of 535 controls that cover three levels of security:



  • Level 1 is low cost with a minimum number of required baseline security controls. It is suitable for website hosting, testing and development work, simulation, and non-critical business applications.

  • Level 2 addresses the needs of most organizations that are concerned about data security with a set of more stringent controls targeted at security risks and threats to data. Level 2 is applicable for most cloud usage, including mission-critical business applications.

  • Level 3 is designed for regulated organizations with specific requirements and those willing to pay for stricter security requirements. Level 3 adds a set of security controls to supplement those in Levels 1 and 2. They address security risks and threats in high-impact information systems using cloud services, such as hosting applications with sensitive information and in regulated systems.


 


How do I get started with my organization’s own compliance effort?


 


The MTCS Certification Scheme provides guidance on audit controls and security requirements.


 


Can I use Microsoft’s compliance in my organization’s certification process?


 


Yes. If you have a requirement to certify your services built on these Microsoft cloud services, you can use the MTCS certification to reduce the impact of auditing your IT infrastructure. However, you are responsible for engaging an assessor to evaluate your implementation for compliance, and for the controls and processes within your own organization.


 


Continue the conversation by joining us in the Microsoft 365 Tech Community! Whether you have product questions or just want to stay informed with the latest updates on new releases, tools, and blogs, the Microsoft 365 Tech Community is your go-to resource to stay connected.