This article is contributed. See the original author and article here.
As a non-developer (please read this as a disclaimer), I still try to make my life as easy as possible (yes, I am that lazy). PnP PowerShell is a big component of that goal. A customer had the requirement to create a page for each of the 86 folders in a document library, so they could add more information on those topics. That meant creating 86 pages, each with a document library web part on it that showed a specific folder. No chance I was going to do that manually!
Creating the page wasn’t really difficult. Showing the document library and just the items in the folder was the hard part that I couldn’t find any examples of. The idea of this blog post is to help future people like me to just copy/paste the code.
The goal
We started with a document library containing 86 folders, each having a few documents. The goal was to create 86 pages, with each page showing a block of text on the left and the document library webpart showing only the files from that folder.
How to do this in the user interface
Using the user interface, the following steps were required:
Create a new page (with the same name as the folder)
Add a section to the page with 2 columns
Add a text webpart to the left column
Add a document library webpart to the right column
As a sub-requirement, each page should only show the files from the relevant folder. This can be set up from the web part properties
Document library UI properties
That would definitely be a lot of work to do manually, so I decided that PnP PowerShell needed to come to the rescue.
The code
Let's dig into the code. I assume that you have already dabbled with PnP PowerShell, so I will not explain how to install and configure it.
First we need to connect to the site. Replace the URL with the correct URL of your site. I am using -UseWebLogin in this example because I am using two-factor authentication.
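A minimal sketch of the connection (the site URL below is a placeholder):

```powershell
# Replace with the URL of your own site
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/YourSite" -UseWebLogin
```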
Create the page
The first thing to do is to create the page, using the Add-PnPClientSidePage command. I am using the $name variable here to give it a name.
Disabling the comments section on a modern SharePoint page
I couldn't figure out how to disable the comments section on the modern client-side page. I tried setting it to false, or 0, but that didn't work.
The correct way to do it is to use:
-CommentsEnabled:$false
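A sketch of creating the page with comments disabled (the folder/page name in $name is a placeholder):

```powershell
# The page gets the same name as the folder
$name = "YourFolderName"
Add-PnPClientSidePage -Name $name -CommentsEnabled:$false
```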
Adding sections to the page
To add a new section to the page, I am using the Add-PnPClientSidePageSection command. I can just add a TwoColumn section to the page.
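A sketch of adding the section, plus the text web part for the left column (the placeholder text is an assumption):

```powershell
# Two-column section, then a text web part in the left column
Add-PnPClientSidePageSection -Page $name -SectionTemplate TwoColumn -Order 1
Add-PnPClientSideText -Page $name -Section 1 -Column 1 -Text "Some information about this topic"
```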
The hard part: adding an existing document library as a webpart to the page
This was not the easy bit, in my opinion. Adding a document library to a page is surprisingly hard in PnP PowerShell (unless I am missing something big... in that case, please call me out on this!)
What you need to do is use the Add-PnPClientSideWebPart command. With this command you can add all kinds of web parts to the page. Document library isn't one of them.
You need to add a List web part type, and in the WebPartProperties you need to specify that it is a document library AND what its Id is.
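A sketch of adding the List web part to the right column; the library Id is the example value from the next section, and the property names (isDocumentLibrary, selectedListId) are taken from the web part's property bag:

```powershell
# List web part pointed at the document library by its Id
Add-PnPClientSideWebPart -Page $name -DefaultWebPartType List -Section 1 -Column 2 `
    -WebPartProperties '{"isDocumentLibrary":true,"selectedListId":"4683b239-caf6-40a3-96c4-a02dedfa3418"}'
```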
Where can I find the SharePoint document library Id ?
I didn’t have a clue how to get this Id via code, so I resorted to the UI: If you go to the library settings, the document library Id is shown in the url:
SharePoint document library ID in the url of the library settings page
Just cut out the %7B in the front, and the %7D on the back. In this example, the document library Id is 4683b239-caf6-40a3-96c4-a02dedfa3418.
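As a side note, if you would rather fetch the Id in code, Get-PnPList can return it; a minimal sketch, assuming the library is called "Documents":

```powershell
# Returns the GUID of the document library
(Get-PnPList -Identity "Documents").Id
```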
Bonus: Only show a specific folder from the document library
I couldn't figure out how to show only documents from a specific folder. Doing this in the UI is super easy, but there wasn't any example code out there. So here it is:
In the WebPartProperties, add selectedFolderPath="/yourfoldername";
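Embedded in the web part properties JSON, that might look like this (the library Id and folder name are the example values from above; the property names are taken from the web part's property bag):

```powershell
# Property string limiting the web part to one folder
$props = '{"isDocumentLibrary":true,"selectedListId":"4683b239-caf6-40a3-96c4-a02dedfa3418","selectedFolderPath":"/yourfoldername"}'
```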
Bonus 2: hide the command bar on the SharePoint Document Library Webpart
In the UI, there is a way to simply hide the command bar. Because we are showing this information in a nice looking page, there is no need for all that extra fluff of “new”, “upload” and so on.
In the same way as showing just the files from a specific folder, you can use hideCommandBar=true in the WebPartProperties:
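Added to the same property string, that could look like the following (the property names are assumptions based on the web part's property bag):

```powershell
# hideCommandBar removes the "New", "Upload", etc. buttons from the web part
$props = '{"isDocumentLibrary":true,"selectedListId":"4683b239-caf6-40a3-96c4-a02dedfa3418","selectedFolderPath":"/yourfoldername","hideCommandBar":true}'
```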
All the parts we need are now on the page. The only thing now is to publish the page so it is visible to all visitors. For that, we need to grab the page again and publish it.
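A sketch of grabbing the page again and publishing it:

```powershell
# Re-fetch the page object and publish it so visitors can see it
$page = Get-PnPClientSidePage -Identity $name
$page.Publish()
```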
The last part of the code was to make this repeatable for all 86 folders. There is probably a really nice way to get all the folders from the document library in code and loop through them, but as stated a gazillion times... I am not a developer.
So I exported the document library to Excel and copied the folder names. I added some quotes and a comma (using an Excel formula: =CHAR(34) & A2 & CHAR(34) & ",") and added an array to store these.
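Putting all the pieces together, a sketch of the full script could look like this (the folder names, text, and library Id are placeholders; the web part property names are assumptions based on the web part's property bag):

```powershell
# Folder names pasted in from Excel - replace with your own
$folders = "Folder1", "Folder2", "Folder3"

foreach ($name in $folders) {
    # One page per folder, same name as the folder, comments off
    Add-PnPClientSidePage -Name $name -CommentsEnabled:$false
    Add-PnPClientSidePageSection -Page $name -SectionTemplate TwoColumn -Order 1
    Add-PnPClientSideText -Page $name -Section 1 -Column 1 -Text "About $name"

    # Document library web part scoped to this folder, command bar hidden
    Add-PnPClientSideWebPart -Page $name -DefaultWebPartType List -Section 1 -Column 2 `
        -WebPartProperties ('{"isDocumentLibrary":true,"selectedListId":"4683b239-caf6-40a3-96c4-a02dedfa3418","selectedFolderPath":"/' + $name + '","hideCommandBar":true}')

    # Grab the page again and publish it
    $page = Get-PnPClientSidePage -Identity $name
    $page.Publish()
}
```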
Introduction
This is John Barbare and I am a Sr. Customer Engineer at Microsoft focusing on all things in the Cybersecurity space. Many of my customers in the Microsoft Federal space have to comply with internal security baselines while moving to a cloud-centric platform to manage devices, so it is important to know whether those baselines/settings will carry over. In this article, I will explain and show how to import on-premises baseline Group Policy Objects (GPOs) into Microsoft Endpoint Manager (MEM), see which settings carry over directly, and create a policy for the ones that are not MDM compliant. With that said, let's import several baselines, see the correlation of the on-premises-to-MEM mapping, and see how we can make the move to the cloud that much easier.
What are Microsoft Security Baselines and/or STIGs?
Security baselines are a group of Microsoft-recommended configuration settings along with an explanation of their security impact. These settings are based on feedback from Microsoft security engineering teams, product groups, partners, and customers. Certain Federal agencies and other Department of Defense (DoD) entities have created their own internal and publicly available baselines, better known as Security Technical Implementation Guides (STIGs). At the end of this article, I will reference several publicly available Federal baselines/STIGs to download and implement in your organization if you are not already using a stringent baseline today. If you are a State/Federal/DoD agency and use MEM, feel free to follow along in your own tenant; this demo was performed in an IL5 environment before being re-created for this article in my private Microsoft tenant.
Importing STIGs in Microsoft Endpoint Manager
This article assumes you have enrolled, or are going to enroll, devices in MEM. Before continuing, check that your tenant status is green on the home page. Navigate to Microsoft Endpoint Manager and log in with your credentials. Once logged in, you will arrive at the home page.
Select “Devices” and then “Group Policy analytics” to land on the policy page to perform the import of the STIGs we are going to analyze. This feature will allow you or your enterprise to analyze your on-premises GPOs and determine the level of MEM support.
Next, I will go into the DoD Windows 10 V2R2 folder and locate and confirm the gpreport.xml file is present as we will be using this file for the import. Two GPOs exist in this folder and we will be importing both (User and Computer). I will also go into the DoD Microsoft Edge V1R1 folder and locate and confirm the gpreport.xml file is present as I will also use this file for the import in addition to the other STIGs.
If your enterprise has its own internal STIGs, you would just open GPMC.msc, right-click the STIGed GPO, select "Save Report", name it "gpreport", and choose "XML" (not HTML) as the output format. DISA is nice enough to provide the STIGed gpreport.xml file in each folder for what we want to accomplish, so it makes it that much easier.
Selecting the gpreport.xml
Next, we will import the three STIGs in the next several steps.
(Step 1) I will go back to the Group Policy Analytics page in MEM and (step 2) select the import icon at the top. (Step 3) This will bring out the flyout card and I will select the folder icon to import each gpreport.xml. (Step 4) I will locate and select each gpreport.xml in the three folders and (Step 5) select open each time.
Importing the STIGs
Note: Check the sizes of any GPO XML files that you import (STIGs or any baseline XML file). A single GPO cannot be larger than 750 KB; if it is, the import process will fail. Any XML files without the appropriate Unicode encoding will also fail the process. See below for failure errors.
Errors
When all three STIGs from the respective GPO folders I targeted are successfully imported, it will list the following information:
Group Policy name: This name is automatically generated using information inside the GPO.
Active Directory Target: The target is automatically generated using the organizational unit (OU) target information inside the GPO.
MDM Support: Displays the percentage of group policy settings in the GPO that have the same setting in MEM.
Targeted in AD: Yes means the GPO is linked to an OU in on-premises group policy. No means the GPO is not linked to an on-premises OU.
Last imported: Shows the date/time stamp of the last import.
Delete: Three dots on the end to delete the imported GPO (RBAC dependent).
After Importing the STIGs
As one can see, all three STIGs were successfully imported in MEM Group Policy analytics showing the percentage of MDM support. Next, we will have to see what STIG settings do not have MDM support and then add them in.
We will select the second STIG, DoD Windows 10 STIG Computer v2r2, by clicking on the blue 87% under MDM Support. This will show which settings are mapped and which are not, plus more detail about each GPO. The details will display the following:
Setting Name: The name is automatically generated using the information in the GPO setting.
Group Policy Setting Category: This shows the setting category for ADMX settings, such as Internet Explorer and Microsoft Edge. Not all settings have a setting category.
ADMX Support: Yes, means there is an ADMX template for this setting. No means there is not an ADMX template for the specific setting.
MDM Support: Yes, means there is a matching setting available in Endpoint Manager. You can configure this setting in a device configuration profile. Settings in device configuration profiles are mapped to Windows CSPs. No means there is not a matching setting available to MDM providers, including Intune.
Value: This shows the value imported from the GPO. It shows different values, true, false, enabled, disabled, etc.
Scope: This shows if the imported GPO targets users or targets devices.
Min OS Version: This shows the minimum Windows OS build numbers to which the GPO setting applies. It may show 18362 (1903), 17134 (1803), and other Windows 10 versions. For example, if a policy setting shows 18362, then the setting supports build 18362 and newer builds.
CSP Name: A Configuration Service Provider (CSP) exposes device configuration settings in Windows 10. This column shows the CSP that includes the setting. For example, you may see Policy, BitLocker, PassportforWork, etc.
CSP Mapping: This shows the OMA-URI path for the on-premises policy. You can relate this to the MDM version of GPOs.
STIGs and MDM Support
Under the MDM support column, we can see several that are not mapped in MEM/no MDM support. To add these into MEM, we need to create a custom configuration profile.
Creating a Custom Configuration Profile for Non-Mapped STIGed GPOs
After you have created the direct mapping of all the STIGed GPOs in a Configuration policy, you will need to create a custom policy for the ones that did not match or either do not have MDM support.
Select Configuration profiles, Create a profile, and for Platform select Windows 10 and later. For profile type, we will select Templates and choose Custom from the list and select create.
Creating a Custom Profile
This will bring us to the custom policy page where we create the policy so we can map the STIG to MEM/MDM. Go ahead and create a name for the policy and select next. For Configuration settings, select Add, and then fill in the appropriate information for the policy. The name and description should describe the policy you are creating. Next, we need to find the correct OMA-URI path and data type, as this must match perfectly or it will not map.
Selecting the Data Type
To find the OMA-URI path to map, you will need to use the Policy configuration service provider page on Microsoft Docs to find the setting's path. Since this is a Windows 10 policy, it will start with ./Device/Vendor/MSFT/Policy/Config/ but we will need the path after the Config/. After we go to the link, we search for the "Windows Defender SmartScreen" setting and we can find the rest of the path as seen below. The full value for the OMA-URI path will be:
Down at the bottom, we have values of 0 and 1, which tells me this will be an integer value for the Data Type drop-down menu, and we use 1 as the value.
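As an illustration of what the finished row could look like, assuming the setting in question is SmartScreen/EnableSmartScreenInShell from the Policy CSP (the exact setting shown in the original screenshot may differ):

```
Name:      Windows Defender SmartScreen
OMA-URI:   ./Device/Vendor/MSFT/Policy/Config/SmartScreen/EnableSmartScreenInShell
Data type: Integer
Value:     1
```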
Finding the path on Microsoft Docs
With these pieces of information, we can apply these values found from the docs page into the correct settings as seen below.
Confirming the Rows
Go ahead and select save, then continue adding the ones that are not MDM compliant by selecting add again. When finished, it will display a list of everything you have added so you can confirm. Go ahead and select next.
Select the groups you want the policy to apply to and select next.
Selecting the Assignments for the Policy
Select any custom Applicability Rules to apply the policy and select next. Review and then create the policy to apply.
Selecting any Applicability Rules
What About Conflicting Settings in MEM from STIGed GPOs? Who Wins?
If anyone has applied multiple STIGs on top of other GPOs or other baselines (I have a customer that uses three STIGs), the big question I always get is "who wins?" Is it the first baseline policy I created, or the strongest GPO setting, that will win once everything is synced? Let's make sure conflicts do not happen by creating a policy called "ControlPolicyConflict" or "ControlPolicyConflict/MDMWinsOverGP."
This feature was added in Windows 10, version 1803 and allows the IT admin to control which policy is used whenever both the MDM policy and its equivalent GPO are set on the device. MDMWinsOverGP only applies to policies in the Policy CSP: MDM policies win over Group Policies where applicable, but not all Group Policies are available via MDM or CSP, and it does not apply to other MDM settings with equivalent Group Policy settings that are defined in other CSPs.
The default value is 0. When the value is set to 1, the MDM policies in the Policy CSP win over Group Policy whenever a policy is configured on the MDM channel. This policy does not support the Delete command, and it does not support setting the value back to 0 after it was previously set to 1. Starting with Windows 10, version 1809, the Delete command is supported and the value can be set back to 0 again.
You would perform the same steps as above to create a custom configuration profile as seen below. Select Configuration profiles, Create a profile, and for Platform select Windows 10 and later. For profile type, we will select Templates and choose Custom from the list and select create.
For the configuration settings, use the below values:
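Based on the policy described above, the row should look something like this:

```
Name:      MDMWinsOverGP
OMA-URI:   ./Device/Vendor/MSFT/Policy/Config/ControlPolicyConflict/MDMWinsOverGP
Data type: Integer
Value:     1
```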
After importing all the STIGs and seeing what we can migrate from on-premises, every IT manager needs a report that shows the status of the policies for the journey to the cloud. Select Reports and then Group policy analytics.
STIGed GPO Migration Report for MEM
Select the reports tab next to the summary to see a more detailed report about the readiness of your Group Policy for modern management. Export out the results for planning purposes or to send to a certain IT Team.
Export the Report
Conclusion
Thanks for taking the time to read this article, and I hope you better understand the new Group Policy analytics in MEM as you use this feature in your enterprise or in any Government IL5 tenant. The new Group Policy analytics shows any IT manager the value of seeing which STIGs can be brought over and how they map, and of creating custom policies for the ones that are not MDM supported. And finally, we saw how MEM settles the age-old question of which STIG/GPO wins! Hope to see you in the next blog and always protect your endpoints! STIG away!
Thanks for reading and have a great Cybersecurity day!
In March, we announced the general availability of .NET 5.0 support on Azure Functions with the new .NET isolated process worker. Today, we’re happy to share more exciting news for .NET on Azure Functions! Visual Studio 2019 v16.10 now includes full support for .NET 5.0 isolated function apps. In addition, we’re announcing an early preview of Azure Functions running on .NET 6.0 in both the isolated process and in-process programming models.
Visual Studio support for .NET 5.0 isolated function apps
Visual Studio 2019 v16.10, released in May, includes full support for creating, local debugging, and deploying .NET 5.0 isolated process Azure Functions apps. Update your Visual Studio now and try the new tutorial.
Sneak peek: .NET 6.0 on Azure Functions
The Azure Functions team is committed to providing full support for .NET 6.0 as soon as .NET 6.0 is generally available in November 2021. Azure Functions supports two programming models for .NET—isolated process and in-process (learn more). Today, we’re providing early previews of .NET 6.0 for both programming models.
To run .NET 6.0, you need Azure Functions V4. Local tooling support for creating, running, and deploying .NET 6.0 function apps is currently limited to a preview release of Azure Functions Core Tools V4. You can still use Visual Studio or Visual Studio Code to write your functions.
A few important points to keep in mind for this preview:
Use the latest release of Azure Functions Core Tools V4 preview.
At this time, you can only deploy .NET 6.0 function apps to Azure Functions V4 preview on Windows hosting plans.
Currently, there are no optimizations enabled for .NET 6.0 apps in Azure. You may notice increased cold-start times.
While other language workers are included in the current Core Tools V4 preview, they are unsupported at this time. Continue to use Azure Functions V3 and Core Tools V3 for other languages and production workloads.
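As a sketch of what getting started might look like at the command line (the app and function names are placeholders, and the exact preview version tag of the npm package may differ):

```powershell
# Install the Core Tools v4 preview globally via npm
npm install -g azure-functions-core-tools@4 --unsafe-perm true

# Scaffold and run a .NET isolated function app locally
func init MyFunctionApp --worker-runtime dotnet-isolated
cd MyFunctionApp
func new --name HttpDemo --template "HTTP trigger"
func start
```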
Note that .NET 6.0 on Azure Functions is currently offered as an early preview and there is no official support. Try it out and let us know what you think using GitHub Issues. Watch for a public preview announcement later this year.
Run isolated process functions on .NET 6.0
.NET functions using the isolated model run in a separate process from the Azure Functions host. This decoupling allows you to install dependencies without worrying about conflicts with host dependencies. Its programming model also provides greater flexibility in how you configure your app’s startup.
.NET 5.0 is the first .NET version supported by the isolated model. Today, you can also run isolated process functions on .NET 6.0 Preview 4!
See the following wiki article to learn more and build your first .NET 6 Azure Functions app.
The isolated process model for .NET is new and there are some differences compared to the in-process model. Many will be addressed as the programming model continues to evolve. If you need access to features that are currently missing, such as Durable Functions, use the in-process model.
Run .NET 6.0 in-process functions with Azure Functions V4 preview
Starting today, you can also run in-process .NET 6.0 functions with an early preview of Azure Functions V4. You have access to the same features and capabilities as V3—including support for rich binding types and Durable Functions.
See the following wiki article to learn more and build your first .NET 6 Azure Functions app.
Thanks for checking out our announcements and we’re excited for you to try them out. And in case you missed it, Azure App Service also announced early access for .NET 6.0 Preview today.
There’s a lot more planned for Azure Functions in the coming months. We’re bringing .NET 6.0 Azure Functions support to tools like Visual Studio and Visual Studio Code and to all hosting plans in Azure. Expect a public preview of Azure Functions V4 in the third quarter of this year and general availability in November. Follow our twitter account for the latest updates: @AzureFunctions.
This blog will not cover deploying the self-hosted gateway, as the aim is to explain some common issues we see when an Application Gateway (AG) is in front of it.
One easy way to find out if your self-hosted gateway is pulling API configurations from Azure is to check this ‘Status’.
A green light means the heartbeat connectivity is successfully established.
Configure the custom domain here at the ‘Hostname’ blade.
We will need to upload the pfx format first to the 'Certificate' blade of the APIM so that we can select it here.
Once we make the hostname change, if we follow the docker container log of the self-hosted gateway, we can see an event being created. That means the hostname has taken effect.
Case Study: 502 errors returned by V1 Application gateway when using HTTPS
Background:
My V1 AG's backend pool targets the public IP of the VM hosting my SH-gateway.
I used a self-signed certificate for my SH-gateway’s domain. And have uploaded the cer format of this certificate to the AG.
I have overridden the hostname in the HTTP setting and the custom probe.
Testing HTTPS with Postman returned a 502 error.
HTTP, however, works as expected, so I suspected the issue was with the certificate. Yet I had already uploaded the cer format to the AG.
To narrow down and test HTTPS connectivity, I bypassed the AG and accessed the SH-gateway directly via IP:port, where the IP is the public IP of the VM hosting the docker container.
As in the screenshot, the certificate that the server returned is not the custom domain certificate I configured for the SH-gateway. It is returning test.apim.net, the default self-signed cert.
I realized that when we access the APIM by IP, there is no SNI in the request. Based on the APIM documentation:
In other words, for managed APIM, we need to choose a default custom certificate for APIM to return, and the catch here is that the SH-gateway will not let us choose defaultSslBinding. So, if there is no SNI in the request, the SH-gateway always returns the default test.apim.net cert.
But I expected the AG to overwrite the hostname header and set the SNI accordingly. After some research, I found the answer in the AG documentation.
In conclusion, a V1 AG sets the request's SNI from the backend pool setting, which means the hostname override does not modify the SNI. Hence, if we put an IP as the backend pool target, the SH-gateway's server cert will always mismatch the cert we uploaded to the AG.
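One quick way to see this mismatch for yourself is with openssl s_client, which only sends SNI when -servername is given (the IP and hostname below are placeholders):

```powershell
# No -servername: no SNI is sent, so the gateway returns the default test.apim.net cert
"Q" | openssl s_client -connect 20.1.2.3:443 2>$null | openssl x509 -noout -subject

# With -servername: SNI is set and the custom-domain cert should be returned
"Q" | openssl s_client -connect 20.1.2.3:443 -servername gw.contoso.com 2>$null | openssl x509 -noout -subject
```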
There are two solution options for this issue with AGv1:
Use FQDN as a backend target. Don’t use IP.
Use IP as a backend target, download the test.apim.net cert, and upload it to the AG's HTTP setting. We have to give up the custom domain with this solution.
For solution No. 2, I navigated to the SH-gateway by IP in a browser and downloaded the certificate from the browser.
How a V2 AG differs when verifying the server certificate
A V2 AG verifies the server cert's root cert, and test.apim.net does not chain to a root cert, so HTTPS requests will get a 502 even if we upload this cert to the AG.
Therefore, if we configure a custom domain for the SH-gateway and put a V2 AG in front of it:
1) The custom domain cert needs to be issued by a well-known CA, or
2) It can be self-signed with a root cert, with the root cert uploaded to the AG.
Hope this blog helps you identify the reason for 502 errors when configuring an AG in front of an APIM SH-gateway.