Father-Daughter Duo Takes Over Global Azure 2021

This article is contributed. See the original author and article here.

Are you bewildered by the possibilities of the Microsoft Cloud when it comes to SharePoint? You’re not alone.


 


The Microsoft Cloud brings a huge range of opportunities for how users can leverage Azure with SharePoint – and one father-daughter duo is preparing to educate and elevate other developers on how Azure fits into their Office 365 and SharePoint world.


 


Office Apps & Services MVP David Patrick is set to present with daughter Sarah Patrick at this week’s Global Azure 2021. Over three days from April 15-17, communities around the world are organizing localized live streams for anyone and everyone to learn about Azure from best-in-class community leaders.


 


David says he is excited to share the virtual stage with his daughter, a student at the University of Maryland, and share all the tips and tricks the pair has learned. “[My learning journey started] in Visual Basic a long time ago,” David says. “Then I moved on to C# when .NET came out and now I’m learning, along with my daughter, all the good cloud technologies like SharePoint Online and Power Apps and how to integrate everything using Flows in Power Automate.”


 


“All along the way, Microsoft’s great community support has always enabled Sarah and me to quickly learn new technologies without a lot of financial investment,” David adds. “Tools like the Learning Paths, Quick Starts, Tutorials and free Azure Sandboxes available from Microsoft Learn are extremely helpful in getting up to speed on things like Azure SQL and Web Apps really quickly.”


 


David says his daughter is very active in learning new technologies and sharing that knowledge with others. For example, Sarah has taken part in internships with the Project Management Institute and the Smithsonian Institution and regularly instructs at conferences like SharePoint Saturdays and Teams Day Online.


 


“Sarah started young, giving her first talk at age 16 to a group of middle school girls during a workshop called TechGirlz,” David says. “She was clearly ‘bitten’ by the speaking bug: she continues to lead these workshops today, five years later!”


 


Going forward, David suggests other MVPs can help spread the word about all things tech by directly working with and inspiring young people. “They can volunteer at colleges, like a bunch of MVPs did when we organized a ‘College Tour’ in 2018 for Towson University, or they can partner with an organization like TechGirlz, which always needs volunteers to help lead and run their workshops,” David says.


 


“Sarah and I have been doing these for the last five years and they continue to be a source of energy and enthusiasm.”


 


In their session titled Intro to Azure for the SharePoint Developer, the duo is set to offer an overview of how cloud computing relates to Azure and Office 365, give tips on deploying SharePoint on-premises to Azure, and demonstrate how to quickly stand up multiple versions of SharePoint in Azure using Azure templates.


 


For more, check out Sarah’s Twitter @sarahepatrick or David’s Twitter @DavidEPatrick

Let’s get moving (your data) – Azure Storage Day Session Highlight


This article is contributed. See the original author and article here.

Throughout my career as both an IT Pro and supporting customers as a vendor, my most challenging projects (and causes of sleep deprivation) were always tied to migration. I have been personally responsible for migrating applications and data – and data centers – from one physical location to another. Between homogeneous and heterogeneous platforms. Physical and virtual. SAN and NAS. On-premises to Cloud. Anything and everything. Anywhere and everywhere.


 


The Azure Migrate team has done a tremendous job of building an extensible service platform that gives our customers and partners the ability to migrate the vast majority of servers, applications, and databases to Azure quickly, reliably, and safely. Within the Azure Storage team we have invested time and resources in developing solutions like Azure Data Box, AzCopy, and Azure File Sync to help our customers migrate file data to Azure from on-premises and Cloud-based Windows File Servers, NAS devices, and object storage platforms.


 


As I know personally, there are cases where additional tools are necessary when you migrate between heterogeneous storage platforms. This can be due to your business uptime requirements, the technical details of the source and target systems, complexity requiring expertise from specialists, or risk management and mitigation.


 


Migration Session during Azure Storage Day


If you are considering, planning, or executing a migration to Azure I would encourage you to attend our upcoming Azure Storage Day on April 29th. I will be delivering a session covering migration best practices, an overview of validated partner solutions, and a few demos that provide a complete solution for migrating all your current data sets. We will be following that session with a series on data migration process and solution best practices right here on Tech Community!


 




Want to get started building your foundation prior to the 29th?


Start with the excellent new overview document from my friend and colleague Niko Dukic. What types of solutions will you see and hear about as we kick off a deep dive on data migration during Azure Storage Day? You can get a sneak preview by taking a look at our list of verified storage migration partners and viewing demo videos from these partners and more on the Storage Bytes channel in the Azure Video Resource Center.


 


Partners are key, and our partner ecosystem can help you address data sets that are not covered by our native tools.


Here are some examples:



  • Assess your current storage and backup environments to get ready for the migration to Azure and set your company up for success with Datavoss

  • Migrate Files and Objects to Azure Blob Storage with Scality Zenko

  • Move your big data held in HDFS to Azure Data Lake Storage with WANDisco LiveData Migrator for Azure

  • Transfer multipath attached LUNs (and a LOT more from SANs) to Azure disks or Azure SAN Partners like Pure Storage with Cirrus Data Galaxy

  • Move enterprise NAS volumes (NFS/SMB/Multiprotocol) to Azure Files, Azure NetApp Files, or Azure NAS Partners with Datadobi, Data Dynamics, or Komprise

    • Niko also compiled the comparison of file migration solutions represented during Storage Day that you can view here.




But wait, there’s more!


I know, you are dying for a tech deep dive covering these migration solutions and migration methodology. Watch this space for more following the Azure Storage Day event including interviews with industry experts, tech deep dives, and longer demos than I will have time for on the 29th!


 


More coming soon!


 


Karl

Better together: Register your Azure Synapse workspace in Azure Purview for at scale governance


This article is contributed. See the original author and article here.

Azure Purview now supports registering your Azure Synapse workspace as a data source. You can scan all the Dedicated and Serverless SQL databases within your workspace in a matter of a few clicks. You can also choose to scan your Synapse workspace under a subscription or resource group data source that you have already registered.


 


Setting up authentication to enumerate resources within your Synapse workspace


 


In the Azure portal, navigate to the resource group or subscription that the Synapse workspace is in and select Access control (IAM) from the left navigation menu. (You must be an Owner or User Access Administrator to add a role on the resource group or subscription.) Select the + Add button, set the Reader role, and enter your Azure Purview account name (which represents its MSI) in the Select input box. Follow the same steps to also add the Storage Blob Data Reader role for the Azure Purview MSI on the resource group or subscription that the Synapse workspace is in.
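If you prefer to script these role assignments instead of clicking through the portal, the same operation can be expressed as an ARM REST call. This is an illustrative sketch only: the subscription, resource group, and principal IDs are placeholders, and the two GUIDs are the built-in role definition IDs for Reader and Storage Blob Data Reader, which to my knowledge are the same in every tenant.

```python
import uuid

# Well-known built-in role definition GUIDs (stable across tenants).
READER = "acdd72a7-3385-48ef-bd42-f606fba81ae7"
STORAGE_BLOB_DATA_READER = "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"

def role_assignment_request(scope, role_definition_guid, principal_id):
    """Build the ARM REST call that assigns a built-in role at a scope.

    scope is e.g. "/subscriptions/<sub-id>/resourceGroups/<rg-name>";
    principal_id is the object ID of the Purview account's managed identity.
    Returns (method, url, body) for an authenticated HTTP client to send.
    """
    assignment_id = str(uuid.uuid4())  # every role assignment gets a fresh GUID
    url = (
        "https://management.azure.com" + scope
        + "/providers/Microsoft.Authorization/roleAssignments/"
        + assignment_id + "?api-version=2022-04-01"
    )
    body = {
        "properties": {
            "roleDefinitionId": (
                scope + "/providers/Microsoft.Authorization/roleDefinitions/"
                + role_definition_guid
            ),
            "principalId": principal_id,
        }
    }
    return "PUT", url, body

# One call per role (Reader, then Storage Blob Data Reader) on the same scope.
method, url, body = role_assignment_request(
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/synapse-rg",
    READER,
    "11111111-1111-1111-1111-111111111111",
)
```

Sending the request requires an ARM bearer token for an identity that is an Owner or User Access Administrator at that scope, matching the portal requirement above.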


 


Now, navigate to your Synapse workspace, and under the Data section, click on one of your Serverless SQL databases. Click on the ellipses icon and start a New SQL script. Add the Azure Purview account MSI (represented by the account name) as sysadmin on the serverless SQL databases by running the command below in your SQL script:


 


CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;


ALTER SERVER ROLE sysadmin ADD MEMBER [PurviewAccountName];


 


You must also set up authentication on your Dedicated and Serverless databases to run scans on them. To learn how, read our full documentation here.


 


Register and scan your Azure Synapse data source


 


You can register multiple Azure Synapse Analytics workspaces.


 




 


You can set up scans on your Synapse workspace using our secure credential mechanism for authentication.


 




 


You can also select a scan rule set and set up a schedule for your scans.


 


Once your scan completes successfully, you can navigate to source details to view all relevant information pertaining to that source and its scans.


 




 




 


Once these scans on your Synapse workspace complete, you can discover assets along with their metadata, schema, and classifications via the Purview catalog.


 




 


Get started today! 


Read our full documentation here!

Using Power Automate to notify admins on Intune Connector health


This article is contributed. See the original author and article here.

By Mark Hopper – Program Manager II | Microsoft Endpoint Manager – Intune


 


Microsoft Intune has the capability to integrate and connect with numerous external services. These connectors can include Microsoft services such as Microsoft Defender for Endpoint, third-party services such as Apple Business Manager, on-premises integrations such as the Certificate Connector for Intune, and many more.


 


Monitoring the health of an Intune environment is a common focus for Microsoft Endpoint Manager customers. Today, admins can check their Intune tenant’s connector health using the Tenant Status page in the Microsoft Endpoint Manager admin center. However, many customers have expressed interest in exploring what options are available to proactively notify their teams when an Intune connector is determined to be unhealthy.


 


This blog will walk through the configuration steps to create an automated cloud flow using Power Automate that will notify a team or an individual when an Intune connector is unhealthy. The walkthrough will use the NDES Certificate Connector as an example, but this same flow logic can be leveraged across all Intune connectors in your environment. If you are not familiar with Power Automate, it’s a “low-code” Microsoft service that can be used to automate repetitive tasks to improve efficiencies for organizations. These automated tasks are called flows.


 


While the flow outlined in this blog will use email as the example notification method, keep in mind that flexibility and customization are key here. You can implement an alternative notification method that best aligns with your organization’s workflows, such as mobile push notifications to the Power Automate app, Microsoft Teams channel posts, or even generating a ticket in your Helpdesk system if it can integrate with Power Automate. You can find a list of services that have published Power Automate connectors here.


 


Requirements



  • Azure Active Directory 

  • Microsoft Intune 

  • Microsoft Power Automate


Note: The example flow in this blog leverages the HTTP action, which is a premium action. For more information on Power Automate licensing, see the docs page here.


 


Register an enterprise application in Azure Active Directory



  1. Create a new enterprise application registration in Azure Active Directory. In this example, the application is named “Flow Intune Connector Health Check.”

    Redirect URI should be set to https://global.consent.azure-apim.net/redirect.

    Registering a new application in Azure Active Directory.

  2. Under API permissions, add the appropriate read-only Graph API application permissions to the enterprise app. The table below outlines the minimum permissions required to read the Graph endpoints for some commonly used Intune connectors.





















    Permission                              | Graph Endpoint                               | Intune Connector
    ----------------------------------------|----------------------------------------------|--------------------
    DeviceManagementConfiguration.Read.All  | ndesConnector                                | NDES
    DeviceManagementConfiguration.Read.All  | androidManagedStoreAccountEnterpriseSettings | Managed Google Play
    DeviceManagementServiceConfig.Read.All  | applePushNotificationCertificate             | APNS Certificate
    DeviceManagementServiceConfig.Read.All  | vppToken                                     | VPP Tokens
    DeviceManagementServiceConfig.Read.All  | depOnboardingSettings                        | DEP Tokens
    DeviceManagementServiceConfig.Read.All  | windowsAutopilotSettings                     | Autopilot
    DeviceManagementServiceConfig.Read.All  | mobileThreatDefenseConnector                 | MTD






  3. After adding these permissions, be sure to grant admin consent for the organization.

    Granting admin consent for the organization in Azure Active Directory.


  4. Under Certificates & secrets, generate a new client secret. Temporarily copy the secret value into Notepad since it will be used in another step soon, and you will not be able to retrieve it after you perform another operation or leave this blade.

    Example screenshot of the client secret’s “Value” and “ID”.


That should complete your Azure AD Enterprise App configuration. Next, you will be creating a Power Automate cloud flow that performs the following actions every hour for the NDES connectors in your Intune environment:



  1. Perform an HTTP GET request to each connector’s Microsoft Graph REST API endpoint. 

  2. Parse the JSON response returned from Graph API. 

  3. If there can be multiple connectors for a given Graph endpoint, use an Apply to each step. For example, only one APNS cert can be configured per Intune tenant, so an Apply to each would not be required. However, there can be numerous VPP tokens or NDES Connectors in a given tenant, so this step will loop through each connector returned in the response. 

  4. Evaluate each connector’s health state. 

  5. If determined to be unhealthy, send an email notification to a specified email address to notify the relevant admin or team.
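Before building it in the designer, it can help to see steps 2 through 5 as plain code. The sketch below is only an illustration of the flow's logic, not part of the flow itself, and the sample response is fabricated in the shape the ndesConnectors endpoint returns:

```python
def find_unhealthy_ndes_connectors(response_body):
    """Mirror the flow's health check: walk the Graph response and
    return every NDES connector whose state is not 'active'."""
    unhealthy = []
    for connector in response_body.get("value", []):  # the "Apply to each" step
        if connector.get("state") != "active":        # the Condition step
            unhealthy.append({
                "displayName": connector.get("displayName"),
                "lastConnectionDateTime": connector.get("lastConnectionDateTime"),
            })
    return unhealthy

# Sample (fabricated) response in the shape Graph returns
sample = {
    "value": [
        {"id": "1", "displayName": "NDES-01", "state": "active",
         "lastConnectionDateTime": "2021-04-15T12:00:00Z"},
        {"id": "2", "displayName": "NDES-02", "state": "inactive",
         "lastConnectionDateTime": "2021-04-01T08:30:00Z"},
    ]
}

for c in find_unhealthy_ndes_connectors(sample):
    # In the flow, this is where the email notification would be sent
    print(f"Unhealthy connector: {c['displayName']} "
          f"(last connection {c['lastConnectionDateTime']})")
```

The Apply to each step corresponds to the for loop, and the Condition step to the state check.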


 


Create a Power Automate flow to evaluate Intune Connector health



  1. To begin, open the Power Automate admin console and create a new scheduled cloud flow. For this example, the flow is configured to run once an hour.

    Creating a new Power Automate flow in the Power Automate admin console.
    Note: Be careful not to run this flow on an overly aggressive schedule, to reduce the risk of throttling! Graph API and Intune service-specific throttling limits can be found here: Microsoft Graph throttling guidance. Power Platform request limits and allocations can be found here: Requests limits and allocations.


  2. Create a new HTTP action under the recurrence trigger using Active Directory OAuth as your authentication method. This action will retrieve the NDES Connectors by querying the https://graph.microsoft.com/beta/deviceManagement/ndesConnectors endpoint. In this example, this step is named “Get NDES Connectors”.

    HTTP action properties for the ndesConnectors flow.

    Method: GET 


    URI: https://graph.microsoft.com/beta/deviceManagement/ndesConnectors


    Authentication: Active Directory OAuth


    Authority: https://login.microsoft.com


    Tenant: Directory (tenant) ID from Overview blade in your Azure AD App Registration


    Audience: https://graph.microsoft.com


    Client ID: Application (client) ID from Overview blade in your Azure AD App Registration


    Credential type: Secret


    Secret: Secret key value generated while configuring the Azure AD App Registration.



    Overview of the Flow Intune Connector Health Check.
    Secret key value generated while configuring the Azure AD App Registration.

  3. Create a new step to parse the JSON response returned from the GET request using the Parse JSON action. This will allow the flow to use values returned from our HTTP request for our connector health evaluation, as well as our notification message.

    Content: Use the ‘Body’ dynamic content value generated from the previous step. 


    Schema: You can find the JSON schema by running a test GET request in Graph Explorer, and using the response to generate the schema. For example, run the following query in Graph Explorer: https://graph.microsoft.com/beta/deviceManagement/ndesConnectors.

    Example GET request in Graph Explorer for the “ndesConnectors” query.
    This should return a JSON response. Copy this JSON response and paste it into Generate from sample in your Parse JSON step. This should generate the following schema, which will allow the flow to use the values in the JSON response such as state and lastConnectionDateTime as Dynamic Values in future steps to check if our connector is healthy. Here is what the JSON schema generated from the response should look like:

    {
        "type": "object",
        "properties": {
            "@@odata.context": {
                "type": "string"
            },
            "value": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "id": {
                            "type": "string"
                        },
                        "lastConnectionDateTime": {
                            "type": "string"
                        },
                        "state": {
                            "type": "string"
                        },
                        "displayName": {
                            "type": "string"
                        }
                    },
                    "required": [
                        "id",
                        "lastConnectionDateTime",
                        "state",
                        "displayName"
                    ]
                }
            }
        }
    }

     



  4. Create a Condition step to check the NDES Connector health. For this step, the only condition to check is to see if state is not equal to active. Your health check should look similar to this:

    Condition step to check the NDES Connector health.
    Note: When you set the Condition step, the flow will automatically create an Apply to each step (think of it as a for-each loop). This happens because the “Parse NDES Connector Response” step returns an array that could contain multiple NDES connectors. The Apply to each step ensures each NDES Connector in the response has run through the health check.


  5. Next, create a step to send an email to your specified email address if the connector is determined to be unhealthy using the Send an email notification (V3) action. In this example, the email body is customized to include details such as the display name of the connector that is unhealthy, last connection time, and additional troubleshooting resources.

    Email notification check to send a customized email notification.

  6. Save, and test the flow. If your NDES connector is in an unhealthy state, the email addresses specified should receive a message similar to this:

    Example screenshot of an email notification sent to an admin.
    Note: If your connector is currently active and healthy, but you want to test the email notification, temporarily set your health check condition to check for a state that would return “Yes”. For example, state is equal to active. Make sure to switch this back once you have confirmed the notification is sent as expected.

    You should now have a working automated cloud flow that scans Graph for NDES connector details, checks the connector’s health, and sends out an email notification if the connector is determined to be in an unhealthy state on an hourly schedule. The completed flow should look like this:

    Overview of a working automated cloud flow that scans Graph for NDES connector details.

  7. Now you can apply this same set of steps to the remaining Intune connectors in your environment, either in different flows or as parallel branches under the same recurrence.

    Additional Intune connector resources you could add in your environment.


Connector Health Check Examples


Properties that can be used to determine each connector’s health status can be found in Microsoft’s Graph API documentation for Intune.


 


For commonly used Intune connectors, here are some health check examples that can be used or built on for the health check Condition step, as well as their Graph URI endpoints for the HTTP step:


 


Apple Push Notification Certificate


URI: https://graph.microsoft.com/beta/deviceManagement/applePushNotificationCertificate


expirationDateTime is less than addToTime(utcNow(), 61, 'day')


 


Apple VPP Tokens


URI: https://graph.microsoft.com/beta/deviceAppManagement/vppTokens 


lastSyncStatus is equal to failed


or


state is not equal to valid


 


Apple DEP Tokens


URI: https://graph.microsoft.com/beta/deviceManagement/depOnboardingSettings 


lastSyncErrorCode is not equal to 0


or


tokenExpirationDateTime is less than addToTime(utcNow(), 61, 'day')


 


Managed Google Play


URI: https://graph.microsoft.com/beta/deviceManagement/androidManagedStoreAccountEnterpriseSettings 


bindStatus is not equal to boundAndValidated


or


lastAppSyncStatus is not equal to success


 


Autopilot


URI: https://graph.microsoft.com/beta/deviceManagement/windowsAutopilotSettings 


syncStatus is not equal to completed


and


syncStatus is not equal to inProgress


 


Mobile Threat Defense Connectors


URI: https://graph.microsoft.com/beta/deviceManagement/mobileThreatDefenseConnectors 


partnerState is not equal to enabled


and


partnerState is not equal to available
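For reference, here is a hedged Python sketch of what a few of these conditions compute (the 61-day window mirrors addToTime(utcNow(), 61, 'day'); for Autopilot and MTD the two "is not equal" conditions are combined as a "neither healthy value" check, since that is the state the alert is meant to catch):

```python
from datetime import datetime, timedelta, timezone

def expires_within(expiration_iso, days=61):
    """Mirror: expirationDateTime is less than addToTime(utcNow(), 61, 'day')."""
    expires = datetime.fromisoformat(expiration_iso.replace("Z", "+00:00"))
    return expires < datetime.now(timezone.utc) + timedelta(days=days)

def vpp_token_unhealthy(token):
    # lastSyncStatus is equal to failed, or state is not equal to valid
    return token.get("lastSyncStatus") == "failed" or token.get("state") != "valid"

def autopilot_unhealthy(settings):
    # flag when syncStatus is neither completed nor inProgress
    return settings.get("syncStatus") not in ("completed", "inProgress")

def mtd_unhealthy(connector):
    # flag when partnerState is neither enabled nor available
    return connector.get("partnerState") not in ("enabled", "available")
```

The same shapes carry over to the other connectors: each is either a date comparison against a warning window or a membership test against the set of healthy states.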


 


Considerations




  • You can create a flow for each individual Intune connector, or you can create parallel branches under your recurrence trigger that check multiple connectors’ health in the same flow.


    For example, you may not want to check your Managed Google Play connector in the same flow as your NDES connector. Or, you may want to check your DEP token expiration on a different cadence than your DEP token sync status. Flexibility is key here: use what works best for your organization!
















  • In this blog, we executed Graph API requests using the HTTP action without creating a Graph API custom connector. However, an alternative method is to create a Power Automate custom connector for Graph and configure the HTTP requests as custom actions to read the Graph Intune connector endpoints. Here are some considerations when deciding which method may work best for your organization:

    Without a custom connector:

    • Leverages Azure AD application permissions.

    • HTTP requests run without administrative user credentials and will continue to operate as long as the Azure AD Enterprise App secret key being used is valid.

    With a custom connector:

    • Leverages Azure AD delegated permissions.

    • HTTP requests run using an administrator account that has the proper permissions to check the respective Intune connector health. The connector may fail to run and require reauthentication if the account used for the connection has a password change, has access tokens revoked, or needs to satisfy an MFA requirement.

    More information can be found here: 30DaysMSGraph – Day 12 – Authentication and authorization scenarios.


  • Not all connectors can have multiple instances. For example, an Apply to each step will not be necessary for the APNS certificate health check: since only one APNS certificate can be configured in a tenant at a time, an array would not be returned in the JSON response.

  • For health checks where you are evaluating connector or token expirations, you should customize your health checks based on your organization’s needs. For example, the Microsoft Endpoint Manager admin center will start flagging a DEP token or APNS certificate as nearing expiration when the expiration date is 60 days away. However, you may prefer to check for and send these notifications a few weeks or a month in advance rather than 60 days, every hour until it is renewed.

  • Consider leveraging secure inputs and outputs for steps in the flow that handle your Azure AD app’s secret key. By default, in Power Automate, you can see inputs and outputs in the run history for a flow. When you enable secure inputs and outputs, you can protect this data when someone tries to view the inputs and outputs and instead display the message “Content not shown due to security configuration.”

  • In addition to secure inputs and outputs, consider leveraging Azure Key Vault and the Azure Key Vault Power Automate Connector to handle storage and retrieval of your Azure AD app’s secret key. Keep in mind that actions for this connector will be run using an administrator account who has proper permissions to check the respective Key Vault. The connector may fail to run and require reauthentication if the account used for the connection has a password change, access tokens revoked, or needs to satisfy an MFA requirement.


 


You should now have an understanding of how you can leverage Power Automate and Graph API to proactively notify your team when an Intune connector is in an unhealthy state. Please let us know if you have any additional questions by replying to this post or by reaching out to @IntuneSuppTeam on Twitter.


 


Additional Resources


For further resources on Graph API, Power Automate, and Intune connectors, please see the links below.


Select Universal Print Beta APIs to be removed July 1, 2021

This article is contributed. See the original author and article here.

As we continue to evolve the Universal Print Microsoft Graph API, we’ve changed and replaced some API endpoints and data models to refine the way apps and services interact with the Universal Print platform. 


Additionally, all Universal Print APIs now require permission scopes. The documentation for each API endpoint lists permission scopes that grant access. 


For a full list of available permissions, see the Universal Print permissions section of the Microsoft Graph permissions reference. Delegated permissions grant capabilities on behalf of the user currently logged in, and application permissions grant permissions that can access the data of all users in a tenant. 


What you need to know


These changes may break any applications and services that rely on the Beta version of Microsoft Graph. If your application is running in production and relies on the Universal Print Microsoft Graph API, we recommend using v1.0 which will never have breaking changes. 


Time to take action


Please review the API usage of your applications and ensure that you are not relying on any of the deprecated endpoints. Removal of deprecated endpoints will begin on July 1, 2021. 


To see the list of deprecated and changed APIs, see the Microsoft Graph Changelog. Use the filtering controls to select the following options: 



  • Versions: beta

  • Change type: “Change” and “Deletion” 

  • Services: Devices and apps > Cloud printing 


Learn more


To learn more about Universal Print, visit the Universal Print site.


 

Use Power Automate to automatically create SharePoint News Links from an RSS feed


This article is contributed. See the original author and article here.

There’s a blog for that


A somewhat common complaint I’ve heard from organizations I’ve worked with is that folks within the organization frequently are unaware of press releases, blogs, or other information the organization is publicly sharing. In fact, I’m guilty of it as well. On numerous occasions, I’ve gone to a coworker for some quick troubleshooting only to be told “I wrote a blog for that”.


 


Now that Microsoft Viva Connections is here, I’ve been putting a lot of energy into my company’s SharePoint home site and trying to come up with ways to break down the information silos we’ve just naturally accrued over the years.


 


Fortunately, it turned out that our company blog already had an RSS feed set up, which opened up some opportunities. One of them was to create a flow in Power Automate that automatically creates a SharePoint “News Link” on our home site whenever a new blog post is published to our public site.


 


So, with this blog, we’ll walk through the steps used to accomplish that feat.


 




 


Triggered


As with any flow, we need something to kick things off. I was afraid that this was going to be the biggest technical challenge but, thankfully, it turns out that there is a trigger purpose-built to do exactly what we need: the When a feed item is published trigger!


 




 


As you can see, the configuration here is dead simple. You simply provide it the URL to an RSS feed and select either the PublishDate or UpdatedOn values. We’ll stick with the default PublishDate setting so that we’re only being triggered by brand new articles.


 


So, with this configuration, our flow will be executed anytime a new article is published to the XBOX news RSS feed.


 


Once triggered, seemingly regardless of the specific RSS feed’s schema, a standardized JSON object is returned to the flow that gives us most of what we need.


 


{
  "body": {
    "id": "https://news.xbox.com/en-us/?p=152438",
    "title": "Wasteland 3: The Battle of Steeltown Releasing June 3",
    "primaryLink": "https://news.xbox.com/en-us/2021/04/15/wasteland-3-the-battle-of-steeltown-releasing-june-3/",
    "links": [
      "https://news.xbox.com/en-us/2021/04/15/wasteland-3-the-battle-of-steeltown-releasing-june-3/"
    ],
    "updatedOn": "0001-01-01 00:00:00Z",
    "publishDate": "2021-04-15 14:00:00Z",
    "summary": "The Wasteland 3 team here at inXile is very excited to announce the first narrative expansion for Wasteland 3: The Battle of Steeltown will be releasing June 3. Since the game's launch last August, we've been working on adding new features, quality of life changes, and fixing bugs and improving game stability and performance. But […]",
    "copyright": "",
    "categories": []
  }
}

 


Even better, this data gets turned into variables we can access through the Dynamic Content selector in Power Automate.


 




 


Take a picture, it’ll last longer


One thing we don’t get is any sort of image to show, which is a bummer because without them, all of our News Links would end up looking like the below image.


 


3-blog-no-image.png


 


Thankfully, SharePoint has a handy-dandy little service hidden away that can help.


 


If you've ever created a new "News Link", you'll know that you simply give SharePoint the URL to your article and it auto-magically snags the title, summary, and a thumbnail image to use. If you open your browser's developer tools, you can see that SharePoint calls the _api/SP.Publishing.EmbedService/EmbedData endpoint, passing along an encoded URL and some additional query strings. It turns out that this endpoint handles all that 'magic', and it's also something we can leverage for our own ends here!


 


Thanks to the output of our trigger, we know the URL of the blog post we're working with, and we can access it through the primaryLink variable. However, we do need to make sure that the URL is in the right format, so we'll create our own variable to make it so.


 


4-primarylinkencoded.png


 


We'll call it PrimaryLinkEncoded, make it a string, and initialize its value using the following expression: concat('%27',encodeUriComponent(triggerOutputs()?['body/primaryLink']),'%27')
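If you're curious what that expression actually produces, here is a rough Python equivalent. This is an illustrative sketch, not part of the flow; it assumes Power Automate's encodeUriComponent behaves like standard percent-encoding, which urllib.parse.quote approximates.

```python
from urllib.parse import quote


def primary_link_encoded(url: str) -> str:
    # Mirrors concat('%27', encodeUriComponent(url), '%27'):
    # percent-encode the URL, then wrap it in encoded apostrophes (%27)
    # so the EmbedData endpoint receives a quoted string value.
    return "%27" + quote(url, safe="") + "%27"


link = "https://news.xbox.com/en-us/2021/04/15/wasteland-3-the-battle-of-steeltown-releasing-june-3/"
print(primary_link_encoded(link))
```

The output is the fully encoded URL (every `:` and `/` replaced by `%3A` and `%2F`) bracketed by `%27` on both ends.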


 


Once run, we’ll end up with an encoded URL surrounded by apostrophes, which is what the EmbedData service expects.


 


Now that we have that we just need to call the aforementioned service using the Send an HTTP request to SharePoint action.


 


6-getthumbnail.png


 


We’ll be making a GET request to the root of our SharePoint site. Technically, this could be any SharePoint site you have access to, but since we’ll be posting news articles to our home site, we’ll just stick with that.


 


For the Uri configuration, we're calling the previously mentioned service with a few required query string parameters like so: _api/SP.Publishing.EmbedService/EmbedData?url=@{variables('PrimaryLinkEncoded')}&version=1&bannerImageUrl=true


 


We're passing along the encoded URL we created in the last step, specifying version 1 (which is required, despite there being only one version), and asking for the bannerImageUrl to be included (otherwise we won't get an image back).


 


We only need to include one header, the accept header, with a value of application/json;odata.metadata=minimal.


 


Finally, to make things a bit easier to use in a moment, we’ll capture the output of this request into a variable using the Initialize Variable action again, like so.


 


7-BannerImageUrl.png


 


We're creating a new string variable named BannerImageUrl and we're setting its value using the following expression: outputs('Get_Thumbnail')?['body']?['d']?['ThumbnailUrl']
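As a sketch of what that expression is doing: the response body is JSON with the thumbnail nested under a d property, and the ?[] navigation is just a null-safe lookup. A rough Python equivalent (the sample response values here are hypothetical, trimmed down to only the field we care about):

```python
import json

# Trimmed stand-in for the EmbedData response body. The "d" and
# "ThumbnailUrl" names come from the expression above; the URL value
# itself is hypothetical.
response_text = """
{
  "d": {
    "ThumbnailUrl": "https://news.xbox.com/en-us/wp-content/uploads/thumb.jpg"
  }
}
"""
response_body = json.loads(response_text)

# Equivalent of outputs('Get_Thumbnail')?['body']?['d']?['ThumbnailUrl'].
# Power Automate's ?[] operator returns null rather than failing when a
# key is missing, which dict.get() with a fallback mirrors here.
banner_image_url = (response_body.get("d") or {}).get("ThumbnailUrl")
print(banner_image_url)
```

If any key along the path is missing, the lookup simply yields null (None) instead of raising an error, which is exactly why the ?[] form is used in the flow.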


 


Compose yourself


Now that we've got just about everything we can get, we need to put it into the format that SharePoint expects when creating a News Link item, so it's time to prepare our payload using the Compose action.


 


8-compose.png


 


It's a fairly simple and (mostly) self-explanatory bit of JSON, so we won't dwell on it much. Below is the exact JSON used in the above screenshot.


 


{
  "BannerImageUrl": @{variables('BannerImageUrl')},
  "Description": @{triggerOutputs()?['body/summary']},
  "IsBannerImageUrlExternal": true,
  "OriginalSourceUrl": @{triggerOutputs()?['body/primaryLink']},
  "ShouldSaveAsDraft": false,
  "Title": @{triggerOutputs()?['body/title']},
  "__metadata": {
    "type": "SP.Publishing.RepostPage"
  }
}
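Outside of Power Automate, this Compose step amounts to building a small dictionary and serializing it. A sketch in Python, using the field values from the trigger sample earlier in the article (the banner URL is a hypothetical EmbedData result):

```python
import json

# Values pulled from the trigger output sample earlier in the article;
# BannerImageUrl would come from the EmbedData call in the previous step.
trigger_body = {
    "title": "Wasteland 3: The Battle of Steeltown Releasing June 3",
    "primaryLink": "https://news.xbox.com/en-us/2021/04/15/wasteland-3-the-battle-of-steeltown-releasing-june-3/",
    "summary": "The Wasteland 3 team here at inXile is very excited to announce...",
}
banner_image_url = "https://news.xbox.com/thumb.jpg"  # hypothetical

payload = {
    "BannerImageUrl": banner_image_url,
    "Description": trigger_body["summary"],
    "IsBannerImageUrlExternal": True,
    "OriginalSourceUrl": trigger_body["primaryLink"],
    "ShouldSaveAsDraft": False,
    "Title": trigger_body["title"],
    # __metadata.type tells SharePoint this page is a repost (News Link).
    "__metadata": {"type": "SP.Publishing.RepostPage"},
}
print(json.dumps(payload, indent=2))
```

The __metadata.type value of SP.Publishing.RepostPage is what distinguishes a News Link from a regular site page.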

 


Spread the word


The only thing left to do now is make our post, which we'll do by using another Send an HTTP request to SharePoint action, shown below.


 


9-post.png


 


This time, we’ll be making a POST to the _api/sitepages/pages/reposts endpoint (which is what SharePoint does when you post a news link).


 


Our headers are only slightly more involved. Our endpoint is expecting to receive and will return JSON, so we need to include the appropriate headers…


 


{
  "accept": "application/json",
  "content-type": "application/json;odata=verbose;charset=utf-8"
}

 


Last but not least, we need to include the Output of the compose action we created in the previous step so that SharePoint knows what we’re sharing.
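Put together, the raw request that the action sends on our behalf could be sketched like this in Python (the site URL is a hypothetical placeholder, and authentication, which the SharePoint connector normally handles for us, is omitted):

```python
import json
from urllib.request import Request

site_url = "https://contoso.sharepoint.com"  # hypothetical home site

# A trimmed repost payload; the full version is assembled in the Compose step.
payload = {
    "OriginalSourceUrl": "https://news.xbox.com/en-us/2021/04/15/wasteland-3-the-battle-of-steeltown-releasing-june-3/",
    "Title": "Wasteland 3: The Battle of Steeltown Releasing June 3",
    "ShouldSaveAsDraft": False,
    "__metadata": {"type": "SP.Publishing.RepostPage"},
}

# POST to _api/sitepages/pages/reposts with the JSON headers shown above.
req = Request(
    f"{site_url}/_api/sitepages/pages/reposts",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "accept": "application/json",
        "content-type": "application/json;odata=verbose;charset=utf-8",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

This only constructs the request; in the flow, the connector adds the credentials and actually sends it.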


 


Once that's all set up, go ahead and save.


 


Wrapping up


At this point, you’re done developing. The only thing left to do is wait, really. Once new items are published to the RSS feed, you’ll eventually see them start showing up in your News web parts!


 


10-done.png


 

Hidden Treasure Part 2: Mining Additional Insights


This article is contributed. See the original author and article here.

Written by Jason Yi, PM on the Azure Edge & Platform team at Microsoft. 


Acknowledgements: Dan Lovinger, Principal Software Engineer


 


On the last episode of discovering hidden treasure, we took a closer look at what type of data lies within the DiskSpd XML output. Today, we will examine an example of how to take advantage of that data and create new and practical insights.


 


DiskSpd on Azure


Let’s say that we are using Azure VMs to simulate some workload using DiskSpd. To visualize the data, let’s go ahead and use a short script that takes the XML output and extracts the total IOs per bucket into a CSV file for a more graphical view.
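The extraction step can be sketched in a few lines. This Python stand-in uses the same element path (Results/TimeSpan/Iops/Bucket) that the PowerShell script at the end of this article queries; the sample XML and its values are made up for illustration.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Minimal stand-in for DiskSpd's XML output; real runs emit one
# <Bucket> element per IoBucketDuration interval (one second here).
xml_text = """
<Results><TimeSpan><Iops>
  <Bucket SampleMillisecond="1000" Total="1890"/>
  <Bucket SampleMillisecond="2000" Total="1952"/>
</Iops></TimeSpan></Results>
"""

root = ET.fromstring(xml_text)
# Convert each bucket's millisecond timestamp to seconds, keep the IO count.
rows = [(int(b.get("SampleMillisecond")) // 1000, int(b.get("Total")))
        for b in root.findall("./TimeSpan/Iops/Bucket")]

# Write "Time (s), Total IOs" pairs as CSV, ready to chart.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["Time (s)", "Total IOs"])
writer.writerows(rows)
print(out.getvalue())
```

The resulting two-column CSV is what drives the graphs shown throughout this post.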


Picture1.png


 


As you can see, the IOPS are relatively constant, with an occasional bump. This is because we are maxing out the total number of IOPS our Azure environment (a 3-node cluster using Standard B2ms VMs) can handle. Azure also artificially throttles the IOPS limit based on your VM size and drive type. In our case, the VM limit is 1920 IOPS, and you can see that our peak is ~1950 IOPS. The occasional spike and drop in IOPS is likely due to Azure attempting to rebalance itself and locate the throttle limit.


Using Azure VMs, we can see that the IOPS values are relatively constant, but that's not very interesting, nor is it representative of a real workload. Real-world workloads are much messier and more random. Perhaps there is a way to replicate random IO activity that represents typical day-to-day usage. Well, you are in luck, because there is a script for that. Let's try it!


 


Randomize IOPS experiment


Note: The IOPS variance is purely artificial and for educational purposes only. By no means does this replicate any real-world IO scenario.


 


To help demonstrate this experiment, I've written a short script called "iops_randomizer.ps1" to simulate random IO activity. The script uses a set of parameters to run DiskSpd in short, one-second bursts. The IO values are randomized each second by using the (-g) parameter to throttle the throughput, which in turn affects the IOPS limit. Here are the parameters for the script:



  • -d (mandatory) = The number of DiskSpd test runs. Because each run lasts one second, you can think of this as the total duration of the script.

  • -path (mandatory) = The path to the test file.

  • -rw_flag = Takes one of two options, 0 or 1. 0 means the user wants to supply a custom read/write ratio via the -w parameter, whereas 1 means the script randomizes the read/write ratio and no -w value is needed. The default is 0; if the user does not provide a -w value, the script uses -w 0 (100% reads).

  • -g_min = The minimum value possible when randomizing the throughput (defines the low end of the range). The default is 0 bytes per millisecond.

  • -g_max = The maximum value possible when randomizing the throughput (defines the high end of the range). The default is 8000 bytes per millisecond.

  • -b = The block size in bytes. The default is 4096 bytes (4 KiB).

  • -r = Random I/O aligned to the specified size in bytes. The default is 4096 bytes (4 KiB).

  • -o = The number of outstanding IO requests per target per thread. The default is 32.

  • -t = The number of threads per target file. The default is 4.

  • -w = The percentage of operations that are write requests. The default is 0% writes (100% reads).


 


Note: You may find that your IOPS values are ridiculously small. This is because the default parameters are not optimized for your powerful environment. Consequently, you may need to experiment with the (-g) parameter range. Remember that because the values are in bytes per millisecond, you will need to perform some unit conversion to confirm that you are effectively randomizing your values.


 


Here is the conversion I used:
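In code form, the conversion is simple arithmetic: the -g cap is in bytes per millisecond, so multiply by 1000 to get bytes per second, then divide by the block size to get the IOPS ceiling. A small sketch, assuming the script's default 4 KiB block size:

```python
def throughput_to_iops(g_bytes_per_ms: int, block_size_bytes: int = 4096) -> float:
    """Convert DiskSpd's -g throughput cap (bytes per millisecond)
    into the equivalent IOPS ceiling for a given block size."""
    bytes_per_second = g_bytes_per_ms * 1000
    return bytes_per_second / block_size_bytes


# The script's default -g_max of 8000 bytes/ms with 4 KiB blocks:
print(round(throughput_to_iops(8000)))  # 1953
```

Notice that 8000 bytes/ms with 4 KiB blocks works out to roughly 1953 IOPS, right around the ~1950 IOPS throttle ceiling we observed on the Azure VMs earlier.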


Picture2.png


 


Let’s now try running the following script:


Picture3.PNG


 


After about 120 seconds, you should see 3 files in your current directory.



  • expand_profile.xml : This file is created when the script is first run and contains all the DiskSpd test runs with their respective parameters. This is later fed into DiskSpd as an input. As a result, the file only contains the <Profile> element. You may use this file to modify any parameters you desire and feed it back into DiskSpd.

  • output.xml : This is the finalized output file that is created after the DiskSpd test is complete.

  • iops_stat_seconds.csv : This file contains the clean data for the number of IOs for each second the DiskSpd test was run.


Now that we have the csv output, we can create a graph that plots total IO vs time (seconds). We now have some variance in the number of IOs!


Picture4.png


 


IO Percentiles


As you've just seen, there is potential in experimenting with the XML output. Perhaps you wish to derive other data that may be valuable for your situation. For example, maybe we want to examine the percentile values of the IO operations. Let's try it: a second script called "get_iops_percentile.ps1" takes the iops_stat_seconds.csv file and calculates the percentile scores for the IO values. After running the script, you should see a file called iops_percentiles.csv as well as a copy of the output on the PowerShell terminal.


Picture5.png


 


These percentile values can help us understand the different segmentations of IO values, gauge the average IO output for each second, and identify trends. In our example, we can see that 99% of the IOPS are less than ~1635.
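The percentile method used by the Get-IopsPercentiles function at the end of this article is linear interpolation between ranks. Here is a Python rendering of the same calculation (the IOPS samples below are hypothetical):

```python
import math


def percentile(sorted_vals, p):
    """Linear-interpolation percentile, mirroring the Get-IopsPercentiles
    script: rank = (n - 1) * p + 1, then interpolate between neighbors."""
    n = len(sorted_vals)
    num = (n - 1) * p + 1
    if num <= 1:
        return sorted_vals[0]
    if num >= n:
        return sorted_vals[-1]
    lo = math.floor(num)          # whole part picks the lower neighbor
    frac = num - lo               # fractional part weights the gap
    return sorted_vals[lo - 1] + frac * (sorted_vals[lo] - sorted_vals[lo - 1])


iops = sorted([1480, 1525, 1560, 1590, 1610, 1622, 1631, 1640])  # hypothetical
print(percentile(iops, 0.99))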


 


Bonus: rw_flag


This section provides more information on the rw_flag to clear up any potential confusion. You may be wondering: what is the difference between using 0 and 1?


 


The main difference is that with an rw_flag of 0, you, the user, can provide an additional write-to-read ratio value via the -w parameter. For example, if you provide 30, then 30% of the IO will be writes and 70% will be reads. This also means that every DiskSpd test will use 30 as the write-to-read ratio, producing a consistent balance between read IOs and write IOs in the long run.


 


However, with an rw_flag of 1, the user does not need to specify any read/write ratio. Instead, the ratio is randomized each second between 0% and 100%.


 


Using the performance monitor within Windows Admin Center, the result may look something like this: (left side uses rw_flag=0, right side uses rw_flag=1)


Picture6.png


Final remarks


Today’s experiment was one example of extrapolating new data from the XML output. If you believe DiskSpd is not giving you a specific metric and wish to infer other data, this may be one method of manually discovering new “treasures.” Have fun!


 


*Script 1: iops_randomizer*

# Written by Jason Yi, PM

<#
.PARAMETER d
integer number of diskspd runs (can consider it as duration since each run is one second long)
.PARAMETER path
the path to the test file
.PARAMETER rw_flag
the default is 0. 0 represents that the user wants to input their custom read/write ratio whereas 1 represents that the user wants a randomized read/write ratio
.PARAMETER g_min
the minimum g parameter (g parameter is the throughput threshold)
.PARAMETER g_max
the maximum g parameter (g parameter is the throughput threshold)
.PARAMETER b
the block size in bytes
.PARAMETER r
random IO aligned to specified size in bytes
.PARAMETER o
the queue depth
.PARAMETER t
the number of threads
.PARAMETER w
the ratio of write tests to read tests
#>
Param (
[Parameter(Position=0,mandatory=$true)][int]$d,
[Parameter(Position=2,mandatory=$true)][string]$path, # e.g. C:\ClusterStorage\CSV01\IO.dat
[int]$rw_flag = 0,
[int]$g_min = 0,
[int]$g_max = 8000,
[int]$b = 4096,
[int]$r = 4096,
[int]$o = 32,
[int]$t = 4,
[int]$w = 0)

Function Create-Timespans{
<#
.DESCRIPTION
This function takes the input number of diskspd runs (or duration) and lasts for that input number of seconds while randomizing
the throughput threshold within a specified range. Includes same parameters initially passed in by user.
#>
Param (
[int]$d,
[string]$path,
[int]$g_min,
[int]$g_max,
[int]$b,
[int]$r,
[int]$o,
[int]$t,
[int]$w,
[int]$rw_flag
)



[xml]$xml=@"
<Profile>
<Progress>0</Progress>
<ResultFormat>xml</ResultFormat>
<Verbose>false</Verbose>
<TimeSpans>
<TimeSpan>
<CompletionRoutines>false</CompletionRoutines>
<MeasureLatency>true</MeasureLatency>
<CalculateIopsStdDev>true</CalculateIopsStdDev>
<DisableAffinity>false</DisableAffinity>
<Duration>1</Duration>
<Warmup>0</Warmup>
<Cooldown>0</Cooldown>
<ThreadCount>0</ThreadCount>
<RequestCount>0</RequestCount>
<IoBucketDuration>1000</IoBucketDuration>
<RandSeed>0</RandSeed>
<Targets>
<Target>
<Path>$path</Path>
<BlockSize>$b</BlockSize>
<BaseFileOffset>0</BaseFileOffset>
<SequentialScan>false</SequentialScan>
<RandomAccess>false</RandomAccess>
<TemporaryFile>false</TemporaryFile>
<UseLargePages>false</UseLargePages>
<DisableOSCache>true</DisableOSCache>
<WriteThrough>true</WriteThrough>
<WriteBufferContent>
<Pattern>sequential</Pattern>
</WriteBufferContent>
<ParallelAsyncIO>false</ParallelAsyncIO>
<FileSize>1073741824</FileSize>
<Random>$r</Random>
<ThreadStride>0</ThreadStride>
<MaxFileSize>0</MaxFileSize>
<RequestCount>$o</RequestCount>
<WriteRatio>$w</WriteRatio>
<Throughput>0</Throughput>
<ThreadsPerFile>$t</ThreadsPerFile>
<IOPriority>3</IOPriority>
<Weight>1</Weight>
</Target>
</Targets>
</TimeSpan>
</TimeSpans>
</Profile>
"@


# 1 flag means that the user wishes to randomize the rw ratio
# 0 flag means that the user wishes to control the rw ratio
# Basically, throw an error when the flag is not 0 or 1
if ( ($rw_flag -ne 1) -and ($rw_flag -ne 0) ){
throw "Invalid rw_flag value. Please choose 0 to provide your own rw ratio, or 1 to randomize the rw ratio.
"
}

$path = Get-Location
# loop up until the number of runs (duration) and add new timespan elements
for($i = 1; $i -lt $d; $i++){

$g_param = Get-Random -Minimum $g_min -Maximum $g_max
$true_w = Get-Random -Minimum 0 -Maximum 100

# if there is only one timespan, add another
if ($xml.Profile.Timespans.ChildNodes.Count -eq 1){

# clone the current timespan element, modify it, and append it as a child
$new_t = $xml.Profile.Timespans.Timespan.Clone()
$new_t.Targets.Target.Throughput = "$g_param"
if ($rw_flag -eq 1){
$new_t.Targets.Target.WriteRatio = "$true_w"
}
$null = $xml.Profile.Timespans.AppendChild($new_t)

}
else{

# clone the current timespan element, modify it, and append it as a child
$new_t = $xml.Profile.Timespans.Timespan[1].Clone()
$new_t.Targets.Target.Throughput = "$g_param"
if ($rw_flag -eq 1){
$new_t.Targets.Target.WriteRatio = "$true_w"
}
$null = $xml.Profile.Timespans.AppendChild($new_t)

}
}

# show updated result
$xml.Profile.Timespans.Timespan
# save into xml file
$xml.Save("$path\expand_profile.xml")

}
#
# SCRIPT BEGINS #
#


# create the xml file with diskspd parameters
Create-Timespans -d $d -g_min $g_min -g_max $g_max -path $path -b $b -r $r -o $o -t $t -w $w -rw_flag $rw_flag


# create path, input file, and node variables
$path = Get-Location
# feed profile xml to DISKSPD with -X parameter (Running DISKSPD)
Invoke-Expression ".\diskspd.exe -X'$path\expand_profile.xml' > output.xml"

$file = [xml] (Get-Content "$path\output.xml")


$nodelist = $file.SelectNodes("/Results/TimeSpan/Iops/Bucket")
$ms = $nodelist.getAttribute("SampleMillisecond")

# store the bucket objects into a variable
$buckets = $file.Results.TimeSpan.Iops.Bucket

# change the millisecond values to seconds
$time_arr = 1..$d
foreach ($t in $time_arr){
$buckets[$t-1].SampleMillisecond = "$t"
}

# select the objects you want in the csv file
$nodelist |
Select-Object @{n='Time (s)';e={[int]$_.SampleMillisecond}},
@{n='Total IOs';e={[int]$_.Total}} |
Export-Csv "$path\iops_stat_seconds.csv" -NoTypeInformation -Encoding UTF8 -Force # Have to force encoding to be UTF8 or data is in one column (UCS-2)

# import modified csv once more
$fileContent = Import-csv "$path\iops_stat_seconds.csv"

# if duration is less than 7 (number of percentile ranks), then add empty rows to fill that gap
if ($d -lt 7 ) {
for($i=$d; $i -lt 7; $i++) {
# add new row of values that are empty
$newRow = New-Object PsObject -Property @{ "Time (s)" = '' }
$fileContent += $newRow
}
}

# show output in the terminal
$fileContent | Format-Table -AutoSize

# export to a final csv file
$fileContent | Export-Csv "$path\iops_stat_seconds.csv" -NoTypeInformation -Encoding UTF8 -Force

 


*Script 2: get_iops_percentiles*

# Written by Jason Yi, PM

Function Get-IopsPercentiles{
<#
.DESCRIPTION
This function expects an array of sorted iops, length of the iops array, and an array of percentiles. For the given array of percentiles,
it returns the calculated percentile value for the set of iops numbers.

.PARAMETER sort_iops
array of sorted iops values from the input file
.PARAMETER iops_len
length of the sort_iops array
.PARAMETER percentiles
array of the percentiles you wish to find
#>
Param (
[array]$sort_iops,
[int]$iops_len,
[array]$percentiles)

$new_iops = New-Object System.Collections.ArrayList($null)
# loop through the percentiles array
foreach ($k in $percentiles) {

[Double]$num = ($iops_len - 1) * $k + 1

# if num is equal to 1 then add the first element to array
if ($num -eq 1) {

[void]$new_iops.Add( $sort_iops[0])
}

# if num is equal to the length of array then add the last element to array
elseif ($num -eq $iops_len) {
[void]$new_iops.Add( $sort_iops[$iops_len-1])
}

else {
$val = [Math]::Floor($Num)

#get decimal portion of the num
[Double]$dec = $num - $val

[void]$new_iops.Add( $sort_iops[$val - 1] + $dec * ($sort_iops[$val] - $sort_iops[$val - 1]))
}

}
return $new_iops

}


# Set path and import the csv file
$path = Get-Location
$file = Import-Csv "$path\iops_stat_seconds.csv"

#$sort_iops = $file."Total IOPS" | Sort-Object -Property {$_ -as [decimal]}


# sort the values in IOPS column in ascending order
$sort_iops = [decimal[]] $file."Total IOs"
[Array]::Sort($sort_iops)

# remove the empty or 0 values
$sort_iops = @($sort_iops) -ne '0'

$iops_len = $sort_iops.Length
#$percentiles = (1,25,50,75,90,95,99)
$percentiles = (.01,.25,.50,.75,.90,.95,.99)

# find the calculated percentiles and put them in an array
$new_iops = Get-IopsPercentiles $sort_iops $iops_len $percentiles

# if the old iops length is less than the length of the new calculated iops scores, then that new length is the iops_len
$new_iops_len = $new_iops.Length
if($iops_len -le $new_iops_len){
$iops_len = $new_iops_len
}


# loop through all the CSV rows and insert 2 new columns for the percentile rank and scores
for ($i = 0; $i -lt $iops_len; $i++) {
$value = if ($i -lt $percentiles.Count) { $percentiles[$i] } else { $null }
$file[$i] | Add-Member -MemberType NoteProperty -Name "Percentile Rank" -Value $value

$value2 = if ($i -lt $percentiles.Count) { $new_iops[$i] } else { $null }
$file[$i] | Add-Member -MemberType NoteProperty -Name "IOPS %-tile Score" -Value $value2

}

# Show output to terminal
$file | Format-Table -AutoSize

# Export to a new CSV file
$file | Export-Csv -Path "$path\iops_percentiles.csv" -NoTypeInformation -Force

 

Microsoft Federal Collaboration and Cybersecurity Summit


This article is contributed. See the original author and article here.

reg is open.jpg


 


 


Here at Microsoft, our mission is to empower every person on the planet to achieve more.

Microsoft Federal shares that commitment to further our government customers’ digital transformation, innovation, and secure government collaboration.

Please join us next Tuesday for our Federal Collaboration and Cybersecurity Summit, a half-day virtual event at no additional cost, designed to advance U.S. Federal agencies' collaboration and cybersecurity initiatives.


Microsoft is bringing together executives and leaders from U.S. Federal agencies to deliver key insights, lessons learned, and practical guidance on:


 



  • Advancing Cybersecurity in the Federal Government

  • Cultural transformations that drive new ways of working and digital modernization.

  • Breaking down silos to facilitate partnership with industry and academia.

  • Connecting with people and information from the office or in the field to securely share and protect sensitive information.


In the face of today's unprecedented challenges, leadership resiliency is paramount. The high stakes of cybersecurity challenges continue to increase and evolve with no end in sight. The frequency of cybersecurity threats and their level of sophistication have grown and will continue to grow, and as the threat of cyber-breaches increases, so does the need for intergovernmental collaboration, communications, and data sharing.


 


Click HERE to register today and learn more.


 

Use Helm Charts from Windows Client Machine to Deploy SQL Server 2019 Containers on Kubernetes

This article is contributed. See the original author and article here.

Helm is the package manager for Kubernetes. Join Amit Khandelwal on Data Exposed to learn how you can use Helm from your Windows machine to deploy SQL Server 2019 containers on Kubernetes, all in less than 5 minutes.


 


Watch on Data Exposed



Resources:

Deploy SQL Server on Azure Kubernetes Service cluster via Helm Charts – on a windows client machine



 


View/share our latest episodes on Channel 9 and YouTube!

Azure Marketplace new offers – Volume 130


This article is contributed. See the original author and article here.



We continue to expand the Azure Marketplace ecosystem. For this volume, 86 new offers successfully met the onboarding criteria and went live. See details of the new offers below:


 


Applications


 


uiCOCKPIT.png

[ui!] COCKPIT: Urban Software Institute’s [ui!] COCKPIT enables visualization of complex data from a cloud-based platform, such as [ui!] UrbanPulse. Choose from different visualizations, providing general information for the public, management decision aids, and customized applications for specific subjects.


AdstraConsumerEssentials.png

Adstra Consumer Essentials: Adstra Consumer Essentials provides a comprehensive data set of more than 230 million US-based individuals, including data elements commonly used by marketers and advertisers. The proprietary data set is drawn from various sources including public records and a leading global risk/fraud prevention provider.


AITRICS.png

AITRICS: VitalCare from AITRICS is a risk-prediction system built on Microsoft AI services. VitalCare directly collects patient data, such as vital signs and lab tests, from electronic medical records and generates prediction scores for clinical deterioration and sepsis.


AlefPlatform.png

Alef Platform: Alef Education’s platform provides data analytics to help teachers focus on where students are in their mastery of a subject. Alef provides experiential learning that enables students to apply and transfer their newly acquired skills.


AlgoSupplyChainAnalyticsCollaborativePlatform.png

Algo Supply Chain Analytics Collaborative Platform: Algo’s advanced analytics solutions help companies operate highly efficient supply chains by using AI and deep learning to maximize revenue and profit while optimizing inventory spending. Business users can interact with Algo using chat functionality through platforms such as Microsoft Teams.


ApacheWebServerwithDebian10.png

Apache Web Server with Debian 10: Cognosys provides this ready-to-run image containing Apache HTTP Server 2.4.38 installed on Debian 10 Linux. Apache includes software to handle multi-processing modes and support for SSL v3 and TLS via mod_ssl.


Apifon-Multi-channelBusinessMessagingPlatform.png

Apifon – Multi-channel Business Messaging Platform: With Apifon’s messaging platform, you can engage customers through their favorite channels, track the performance of your campaigns, and turn data into KPIs that help you increase your ROI.


atmaioConnectedProductCloud.png

atma.io Connected Product Cloud: Avery Dennison’s atma.io platform creates, manages, and assigns digital identities to products, enabling end-to-end transparency for tracking, storing, and managing events for individual products from source to consumer.


AvnetIoTConnectandSmartFactory.png

Avnet IoT Connect and Smart Factory: Built on IoTConnect and Microsoft Azure, Avnet’s Smart Factory solution helps you monitor and track the production and performance on your factory floor. Gain real-time insights for all locations and integrate your data with supply chain management systems.


AwarenessPlatform.png

Awareness Platform: This solution from i5 B. V. provides ready-to-go professional learning focused on security and privacy to reduce risky behavior by your employees. With Awareness Platform, you can customize courses with a few clicks to match your organization’s policies.


BoxOpsPlatform.png

BoxOps Platform: BoxBoat’s BoxOps is a DevSecOps service solution designed for software teams, enterprise operations, and IT staff who want to accelerate their end-to-end management of app deployment.


ChatbotSmartRH.png

Chatbot Smart RH: SMART RH from Alexys Solutions is an AI-powered chatbot designed to serve internal collaborators seeking HR assistance for leave requests, work certifications, and more. Automate HR requests and free employee time to concentrate on high-value work.


CloudCover365Exchangebackup.png

CloudCover 365: Exchange Backup: CloudCover 365 from virtualDCS lets you back up and restore Exchange Online data, including email, calendars, contacts, and more. The browser-based portal integrates with Veeam Backup 365 and Azure Active Directory.


OneDriveforBusinessCloudBackup.png

CloudCover 365: OneDrive for Business Backup: Back up OneDrive for Business data through a browser-based portal with CloudCover 365 from virtualDCS. CloudCover 365 integrates with Veeam Backup 365 and Azure Active Directory.


CompleteCloudBackupforMicrosoft365.png

Complete Cloud Backup for Microsoft 365: Implement CloudCover 365 from virtualDCS to back up and restore Microsoft 365 data, customize retention plans, schedule backups, and more. The browser-based portal integrates with Veeam Backup 365 and Azure Active Directory.


COMtracInvestigationandBriefManagementSolution.png

COMtrac Investigation & Brief Management Solution: COMtrac provides a consistent approach to managing investigations. The COMtrac platform is a management solution for cases, evidence, and briefs that can be used for all types of investigations by private sector clients and government entities.


ConnectedHeavyMachinery.png

Connected Heavy Machinery: Improve operational safety and utilization of your plants with Equiprise’s cloud-based monitoring solution built on IoT technology. Connected Heavy Machinery connects your equipment and provides you with key performance data.


CRMSensor.png

CRMSensor: Designed for retail chains, banks, healthcare providers, and convenience stores, CRMSensor is an Azure-based system that enables you to communicate interactively with customers. The solution includes an app for Android tablets and customized CRMSensor devices.


DataInsights.png

Data Insights: The oh22 Data Insights solution provides consulting, development, and implementation of a custom enterprise data solution based on Microsoft Azure Synapse Analytics, Azure Data Lake, and Azure Data Factory.


DigitalCustomerExperience.png

Digital Customer Experience: The EY Global Digital Customer Experience solution utilizes Microsoft Dynamics 365 along with an innovative array of EY tools and services, from UX to market research and content writing. Respond to digital change, cut costs, and make your organization fit for growth.


DigitalProcessIntegrationPlatform.png

Digital Process Integration Platform: PlanB. GmbH provides universal microservices for integration of your cloud-based digital services and applications. The PlanB. platform simplifies API management and integrates with on-premises systems, including ERP, CRM, project portfolio management, and manufacturing execution systems.


DigitalSalesServices.png

Digital Sales Services: Softtek enables digital sales from demand generation to e-commerce. Built on Microsoft Azure, Power BI, and Azure-based services, Digital Sales Services enables logistics, last-mile delivery, payments, and analytics.


DNAZ-DigitalBankingShrink-wrapped.png

DNA Z – Digital Banking Shrink-wrapped: DNA Z is an end-to-end digital banking solution for new or existing banks that is deployable on Microsoft Azure. The system includes a blueprint for bank policies and frameworks, fully mapped journeys, operating processes, mobile apps, and data analytics.


DockerCEwithDebian10.png

Docker CE with Debian 10: Cognosys has configured this ready-to-run image of Docker CE 20.10.4 on Debian 10 Linux. Docker Community Server is designed for developers and small teams looking to start with Docker and container-based apps. The image includes built-in orchestration, networking, and security.


EskerOrderManagementAutomation.png

Esker Order Management Automation: Order Management from Esker SA uses AI and robotic process automation to increase the efficiency of sales order processing. Customer service teams can electronically process and track faxes, emails, and orders with improved monitoring and accuracy.


ExperianOpenDataPlatform.png

Experian Open Data Platform: The Open Data Platform (ODP) gives you instant access to a customer’s financial information via Experian’s consumer and business credit information. You can easily create a picture of customer financial well-being to deliver new products and services.


GitlabCommunityEditionWithDebian10.png

GitLab Community Edition with Debian 10: Cognosys has pre-configured this ready-to-run image containing GitLab 13.9.1 on Debian 10 Linux. GitLab is a fast DevOps tool that provides a web-based method for managing Git repositories. GitLab includes wikis, issue tracking, and CI/CD pipelines.


GrafanawithDebian10.png

Grafana with Debian 10: Cognosys has pre-configured this ready-to-run image containing Grafana 7.4.3 on Debian 10 Linux. Grafana is a multi-platform, open-source web application providing analytics and interactive visualizations.


GrafanawithUbuntu1804LTS.png

Grafana with Ubuntu 18.04 LTS: Cognosys has pre-configured this ready-to-run image containing Grafana 7.4.3 on Ubuntu 18.04 LTS. Grafana is a multi-platform, open-source web application providing analytics and interactive visualizations.


GrafanawithUbuntu2004LTS.png

Grafana with Ubuntu 20.04 LTS: Cognosys has pre-configured this ready-to-run image containing Grafana 7.4.3 on Ubuntu 20.04 LTS. Grafana is a multi-platform, open-source web application providing analytics and interactive visualizations.


Haproxy18withDebian10.png

HAProxy 1.8 with Debian 10: Cognosys has pre-configured this ready-to-run image containing HAProxy 1.8.19 on Debian 10 Linux. HAProxy is an open-source, high-availability server that provides TCP/HTTP load balancing and proxying.


IBM WebSphere Product Family on Azure Overview: The IBM WebSphere product family is a suite of enterprise Java application servers that enable enterprise Java workloads on Microsoft Azure. These servers run on Microsoft Azure Red Hat OpenShift, Azure Kubernetes Service, and VMs.


Intelligent Data Platform: Powered by Microsoft Azure, the EY Intelligent Data Platform is a scalable solution to optimize data in real-time, generate rapid insights, enhance decision-making, and deliver greater business value. The platform supports risk management, regulatory reporting, governance, and more.


ioMoVo: ioMoVo offers you a range of storage, data exchange, and multimedia management options for cloud or on-premises storage. This solution from Practical Solutions Inc. provides secure access to your data and lets you interconnect multiple storage platforms.


ioMoVoS: An add-in for the Practical Solutions Inc. ioMoVo platform, ioMoVoS provides media services such as video indexing, analysis of media with machine learning, publication to external video platforms, and more.


IoT Ambient Conditions Intelligent Service: IoT Ambient Conditions Intelligent Service helps data center operators, manufacturers, and plant operators improve their performance and reduce costs by improving operational ambient conditions and reducing equipment maintenance.


Jenkins with Debian 10: Cognosys has pre-configured this ready-to-run image containing Jenkins 2.263.4 on Debian 10 Linux. Jenkins is a Java-based open-source tool providing continuous integration services for software development.


KeyScaler for Azure Sphere: Device Authority provides Sphere Security Automation powered by KeyScaler to enable end-to-end service offerings with enhanced security on Microsoft Azure Sphere.


LAMP with Debian 10: Cognosys has pre-configured this ready-to-run image containing a LAMP (Linux Apache MySQL PHP) stack on Debian 10 Linux. This image has been designed for enterprise customers who want to deploy a secure LAMP server. This image contains Apache HTTP Server 2.4.38, PHP 7.3, and MySQL Server 8.0.23.


Microsoft Teams VoIP Calling Solutions: Add a virtualDCS calling plan to extend your Microsoft Teams solution by enabling VoIP calling to non-Teams devices and telephones. virtualDCS offers a range of telephony services that integrate with Teams to meet your business requirements.


Modern Workplace: The EY Modern Workplace services provide integrated and secure solutions for collaboration built on Microsoft 365, Windows 10, and enterprise mobility. With EY, you can be confident of having the right strategy, technology, capabilities, and governance to fuel and sustain your work.


Mozzaz Digital Health Platform (SaaS): Mozzaz is a digital health technology company that specializes in interactive solutions for remote patient monitoring, active engagement, and virtual telehealth. The Mozzaz platform provides over 200 digital solution libraries based on clinically proven interventions.


NetFoundry Edge Router: NetFoundry Edge Routers provide zero trust connectivity between Microsoft Azure and any site, edge device, private/public clouds, and hybrid applications. Create orchestrated networks delivered as a service to replace VPNs and SD-WAN.


Nextcloud – The self-hosted productivity platform: Linnovate offers this self-hosted instance of Nextcloud Flow, enabling users to quickly and securely share files and folders. Nextcloud Flow features file access control, encryption, authentication, and ransomware recovery capabilities.


Online Cloud Backup for SharePoint: Back up SharePoint data through a browser-based portal with CloudCover 365 from virtualDCS. CloudCover 365 integrates with Veeam Backup for Microsoft Office 365 and Azure Active Directory.


Pachyderm Enterprise: Pachyderm is an enterprise-grade data science platform built on Kubernetes. Deploy a Pachyderm cluster on Microsoft Azure and deploy automated machine learning workflows at scale.


PCG Analytics: This service enables strategic decision-making and reporting for stakeholders inside and outside of a university. Built on Microsoft Power BI, PCG Analytics integrates with external data sources, provides role-based dashboards, and delivers comprehensive data analysis for non-technical users.


Project to Planner Sync – SaaS: PPM Works’ Microsoft Project and Planner Sync enables two-way task synchronization between Microsoft Project Online and Microsoft Planner. Give your executives the visibility they seek with this powerful tool.


Public Finance Manager: Public Finance Manager (PFM) is a blockchain solution that addresses long-standing issues challenging public finance management. PFM integrates with existing ERP systems and facilitates viewing and reconciliation of appropriation and management frameworks.


Python 3 with Debian 10: Cognosys has pre-configured this ready-to-run image containing Python 3.7.3 on Debian 10 Linux. Python is an open-source programming language with support for object-oriented programming, dynamic typing, and dynamic binding.
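Since the blurb mentions object-oriented programming and dynamic typing, a short sketch of both features may help readers new to Python; the class and names here are illustrative, not part of the image.

```python
class Greeter:
    """Small class demonstrating Python's object-oriented support."""

    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"Hello, {self.name}!"

# Dynamic typing: the same variable can be rebound to a different type.
value = 42
value = "forty-two"

g = Greeter("Debian")
print(g.greet())  # → Hello, Debian!
```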


QStock Warehouse Management & Order Management: The QStock warehouse management solution runs on Microsoft Azure and integrates in real time with Sage Intacct. QStock offers inventory control, integrated shipping, lot and serial tracking, e-commerce support, commercial invoices, and more.


Restaurantintra: Restaurantintra is a SaaS-based sales reporting solution for restaurants. The software provides mobile-friendly interactions, support for multiple restaurants, sales analysis, reporting, and budgeting. This software is available in Finnish and English.


RiskIntegrity IFRS 17: RiskIntegrity helps insurers of any size transition from legacy accounting frameworks to the IFRS 17 standard. The solution integrates with existing infrastructure and supports credit insurers, reinsurers, life insurers, and non-life insurers.


RiskIntegrity LDTI: RiskIntegrity helps insurers of any size transition from legacy accounting frameworks to the Long-Duration Targeted Improvements (LDTI) accounting requirements. The solution integrates with existing infrastructure and supports credit insurers, reinsurers, life insurers, and non-life insurers.


Rocky DEM 4.4: CrunchYard’s Rocky DEM 4.4 System is a Microsoft Azure-based VM that provides a suitable environment for users to run Rocky DEM simulations with single or multiple Nvidia GPUs. Rocky is installed and configured on the chosen VM along with Nvidia CUDA drivers.


SimplificaCI: The SimplificaCI platform helps organizations facilitate internal communications across multiple channels, making your company more productive and profitable. The solution integrates with desktop, mobile, calendar, and email communications. This solution is available only in Portuguese.


SkyHive Enterprise: SkyHive Enterprise drives rapid workforce transformation by delivering real-time, skill-level insights into internal workforces and external labor markets, identifying future skills, and facilitating individual- and company-level reskilling.


Union Benefit and Project Timesheet Tracker: Simplify your union payroll with the Data Pros Timesheet app, built on Microsoft SharePoint and the Microsoft Power Platform. This automation software integrates with popular payroll systems and calculates union benefit payments, insurance, USL&H, and more.


UtilityWave: UtilityWave delivers the required capabilities to tackle the challenges of multiple legacy systems, IoT devices, and a dynamic energy grid. UtilityWave utilizes Microsoft Azure to provide a scalable platform on which utilities can build digital energy services.


Veritas APTARE IT Analytics: Quickly deploy Veritas APTARE IT Analytics for reporting insights into your hybrid cloud storage environment. This BYOL version provides the visibility enterprises need to identify underutilized IT resources they can repurpose to achieve significant cost savings.


Volunteer Management System: Web Synergies’ iVolunteer is an end-to-end volunteer management system that is designed to help not-for-profit organizations increase efficiency, reduce costs, expand community outreach, and enable effective fundraising.


WordPress with Debian 10: Cognosys has pre-configured this ready-to-run image featuring WordPress 5.6.2 on Debian 10 Linux. WordPress is an open-source CMS that provides a templating system for content publication. This image includes MySQL Server 8.0.23, Apache HTTP Server 2.4.38, and PHP 7.3.





Consulting services




1-Day Smart Maintenance Envisioning Workshop: HSO will guide you on the journey from preventive maintenance to predictive maintenance by using Microsoft Azure AI. After reviewing your business objectives, HSO consultants will brainstorm solutions to define the strategy needed to drive your desired business outcomes.


Advanced Analytics Discovery: 10-Week Workshop: The Advanced Analytics Discovery program from Peak Indicators will architect and deliver a blueprint for your organization to deploy a solution on Microsoft Azure using services such as Azure Machine Learning, Azure Databricks, and Azure Synapse Analytics.


AI & Advanced Analytics Services: 10-Week Proof of Concept: Tiger Analytics will help you drive planning and optimization of brand investments to improve sales, customer acquisition, customer insights, product analytics, and more. The data engineering service includes the design and development of an ETL pipeline using Azure Machine Learning services.


Azure Advanced Analytics: 10-Week Implementation: Peak Indicators will work closely with your data science teams to deliver a pilot analytics solution built on Microsoft Azure. The engagement will focus on a use case defined with your stakeholders, development of a solution, and deployment of data science experiments and models.


Azure App Modernization: 2-Week Implementation: Softlanding’s engagement covers the benefits of Microsoft Azure and highlights Azure services that will help you modernize your applications. This offer includes guidance and deployment assistance for your developers to update an application to use Azure.


Azure Application Migration: 1-Week Assessment: PetaBytz’s cloud migration team will help your business get started using Microsoft Azure or optimize your current implementation. The service includes guidance on infrastructure, migration strategy for apps, and a high-level roadmap for migration planning.


Azure Automation: 4-Hour Assessment: In this free assessment, akquinet AG will explore the possibilities for you to automate tasks using automation tools on Microsoft Azure. This service is available for either an existing Azure tenant or a planned environment.


Azure Migration: 10-Week Implementation: Cybercom Group’s Cloud Migration Practice will onboard you and your applications on Microsoft Azure to enable further growth. Cybercom will migrate and modernize your digital estate.


Azure Sentinel: 2-Week Implementation & Maintenance: Softlanding will provide you with a high-level view of your security infrastructure by deploying Microsoft Azure Sentinel, hardening your Microsoft 365 environment, and configuring baseline security reports.


Azure Synapse Analytics: 5-Day Implementation: Softlanding will provide you with a strong foundation to analyze big data using Microsoft Azure Synapse Analytics and create reports built on Microsoft Power BI. This service includes data ingestion, design of data lake and data warehouse, and data cleansing.


Azure Windows Virtual Desktop: 6-Week Proof of Concept: Stay ahead of the curve by utilizing Practical Solutions Inc.’s professional services to quickly unlock the full scope of Windows Virtual Desktop on Microsoft Azure. Practical Solutions will develop a proof of concept and deliver a roadmap for deployment.


Build Up with Azure: 5-Day Assessment & Propositions: Indacon offers a remote engagement to build up or integrate your solutions on Microsoft Azure. Indacon will identify how you can migrate or optimize environments and will define a roadmap to provide you with immediate benefits in cost, performance, and security.


Cloud Adoption Framework: 6-Week Implementation: Practical Solutions Inc. (PSI) will highlight the best practices, key value, and benefits of Microsoft Azure cloud services. PSI will walk you through the Microsoft Cloud Adoption Framework, guide you through adoption, and identify key cost-saving opportunities.


Cloud Services for Azure Lighthouse: Practical Solutions Inc. (PSI) will support your Azure-based cloud services using Microsoft Azure Lighthouse. With Azure Lighthouse, you maintain control of your Azure tenant while PSI has the access required to support you.


Containers with OpenShift on Azure: Implementation: Uni Systems provides consulting and assistance for your transition to a container-based architecture for DevOps using Red Hat OpenShift on Microsoft Azure. The engagement includes assistance in establishing DevOps practices, configuring CI/CD pipelines, optimizing clusters, and more.


Data Governance: 10-Week Implementation: Exelegent will implement security and information governance capabilities in your healthcare organization by using Microsoft Azure Information Protection, cybersecurity frameworks, and industry best practices.


GitHub and Azure DevOps: 2-Day Workshop: Brainscale will highlight features of GitHub and Microsoft Azure DevOps to help participants decide which developer collaboration platform suits their needs. This workshop includes an overview of DevOps fundamentals and industry practices, as well as guidance on migrating from older source control platforms.


Migrate to Azure: 4-Week Implementation: Foghorn Consulting experts will help you migrate to Microsoft Azure and manage your cloud operations. Foghorn provides expertise in cloud engineering, site reliability, performance optimization, and other services to improve your ROI and accelerate DevOps efforts.


MOQdigital Azure Migration: 2-Week Implementation: MOQdigital will migrate your virtual machines to Microsoft Azure IaaS. This service is aimed at customers who want to migrate workloads in a secure manner and establish a repeatable process for server migration using Microsoft best practices.


Mphasis EON Quantum Computing: 5-Day Assessment: Mphasis’s assessment helps enterprises perform a structured analysis to determine whether quantum computing is a relevant approach for solving their specific business problem. Mphasis will evaluate software, hardware, and algorithm requirements for you.


Mphasis EON Quantum Computing: 5-Day Workshop: Mphasis’s hands-on workshop helps enterprises create a roadmap for using quantum computing to solve business problems in machine learning, optimization, and simulation.


Mphasis EON Quantum Computing: 6-Week Proof of Concept: Mphasis will create a proof of concept to establish a business case for a quantum computing solution to solve your critical business problem. This offer is led by Mphasis’s team of experts in quantum computing, data science, and Microsoft Azure.


Smart Meter Analytics: 8-Week Implementation: Neudesic will process, validate, and prepare smart meter data for visualization and analysis on a hybrid cloud architecture that utilizes on-premises Microsoft SQL Server and Microsoft Power BI with Microsoft Azure HDInsight.
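As a rough illustration of the "process and validate" step such a pipeline performs before visualization, here is a minimal Python sketch; the field names and threshold are hypothetical and are not part of Neudesic's actual offering.

```python
# Minimal sketch of a smart-meter validation step (hypothetical schema).
def validate_reading(reading, max_kwh=100.0):
    """Return True if a meter reading passes basic sanity checks."""
    return (
        reading.get("meter_id") is not None
        and isinstance(reading.get("kwh"), (int, float))
        and 0.0 <= reading["kwh"] <= max_kwh
    )

readings = [
    {"meter_id": "M-001", "kwh": 12.5},
    {"meter_id": "M-002", "kwh": -3.0},   # negative usage: rejected
    {"meter_id": None,    "kwh": 7.2},    # missing ID: rejected
]
valid = [r for r in readings if validate_reading(r)]
print(len(valid))  # → 1
```

Readings that survive checks like these would then flow to SQL Server or HDInsight for aggregation and on to Power BI for reporting.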


VoIP Networks Cloud9 Promotion: VoIP Networks will act as your one-stop vendor for all facets of your telephony and networking technologies. This offer includes a central point of contact for all common carriers to maintain existing services or coordinate activation of new ones.