GraphQL on Azure: Part 4 – Serverless CosmosDB

This article is contributed. See the original author and article here.

A few months ago I wrote a post on how to use GraphQL with CosmosDB from Azure Functions, so this post might feel like a bit of a rehash, the main difference being that this time we're looking at it from the perspective of doing .NET integration between the two.

The reason I wanted to tackle .NET GraphQL with Azure Functions is that it provides a unique opportunity: being able to leverage Function bindings. If you're new to Azure Functions, bindings are a way to have the Functions runtime provide you with a connection to another service in read, write or read/write mode. This could be useful in a scenario where a function is triggered by a file being uploaded to storage and then writes some metadata to a queue. But for today's scenario, we're going to use an HTTP-triggered function as our GraphQL endpoint, and then work with a database, CosmosDB.
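
To make that binding idea concrete, here's a minimal sketch of the blob-to-queue scenario (the container and queue names are made up for the example, and it assumes the Microsoft.Azure.WebJobs.Extensions.Storage package is installed):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobMetadataFunction
{
    [FunctionName("BlobMetadata")]
    public static void Run(
        // Trigger binding: runs when a file lands in the (hypothetical) "uploads" container
        [BlobTrigger("uploads/{name}")] Stream file,
        string name,
        // Output binding: whatever we assign to "message" is written to the "file-metadata" queue
        [Queue("file-metadata")] out string message,
        ILogger log)
    {
        // Write the file name and size as the queue message
        message = $"{name}:{file.Length}";
        log.LogInformation($"Queued metadata for {name}");
    }
}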

Why CosmosDB? Well, I thought it might be timely given that it has just launched a serverless consumption plan, which works nicely with the idea of a serverless GraphQL host in Azure Functions.

While we have looked at using .NET for GraphQL previously in the series, for this post we're going to use a different GraphQL .NET framework, Hot Chocolate, so there will be some slightly different types from our previous demo, but it's all in the name of exploring different options.

Getting Started

At the time of writing, Hot Chocolate doesn’t officially support Azure Functions as the host, but there is a proof of concept from a contributor that we’ll use as our starting point, so start by creating a new Functions project:

func init dotnet-graphql-cosmosdb --dotnet

Next, we’ll add the NuGet packages that we’re going to require for the project:

 

<PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.3" />
<PackageReference Include="HotChocolate" Version="10.5.2" />
<PackageReference Include="HotChocolate.AspNetCore" Version="10.5.2" />
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="3.0.7" />

 

These versions are all the latest at the time of writing, but you may want to check for newer versions of the packages if they are available.

And the last bit of getting started work is to bring in the proof of concept, so grab all the files from the GitHub repo and put them into a new folder under your project called FunctionsMiddleware.

Making a GraphQL Function

With the skeleton ready, it's time to make a GraphQL endpoint in our Functions project, and to do that we'll scaffold an HTTP trigger function:

func new --name GraphQL --template "HTTP trigger"

This will create a generic function for us, and we'll configure it to be our GraphQL endpoint, again using a snippet from the proof of concept:

 

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using HotChocolate.AspNetCore;

namespace DotNet.GraphQL.CosmosDB
{
    public class GraphQL
    {
        private readonly IGraphQLFunctions _graphQLFunctions;

        public GraphQL(IGraphQLFunctions graphQLFunctions)
        {
            _graphQLFunctions = graphQLFunctions;
        }

        [FunctionName("graphql")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            ILogger log,
            CancellationToken cancellationToken)
        {
            return await _graphQLFunctions.ExecuteFunctionsQueryAsync(
                req.HttpContext,
                cancellationToken);
        }
    }
}

 

Something you might notice about this function is that it's no longer static; it has a constructor, and that constructor has an argument. To make this work we're going to need to configure dependency injection for Functions.

Adding Dependency Injection

Let's start by adding a new class to our project called Startup:

 

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(DotNet.GraphQL.CosmosDB.Startup))]

namespace DotNet.GraphQL.CosmosDB
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
        }
    }
}

 

There are two things that are important to note about this code. First, we have the [assembly: FunctionsStartup(... assembly-level attribute which points to the Startup class. This tells the Functions runtime that we have a class which will do some work when the application starts. Then we have the Startup class itself, which inherits from FunctionsStartup. This base class comes from the Microsoft.Azure.Functions.Extensions NuGet package and works similarly to the startup class in an ASP.NET Core application, giving us a method in which we can work with the startup pipeline and add items to the dependency injection container.

We’ll come back to this though, as we need to create our GraphQL schema first.

Creating the GraphQL Schema

Like our previous demos, we’ll use the trivia app.

We'll start with the model that exists in our CosmosDB store (I've populated a CosmosDB instance with a dump from OpenTriviaDB; you'll find the JSON dump here). Create a new folder called Models and then a file called QuestionModel.cs:

 

using System.Collections.Generic;
using Newtonsoft.Json;

namespace DotNet.GraphQL.CosmosDB.Models
{
    public class QuestionModel
    {
        public string Id { get; set; }
        public string Question { get; set; }
        [JsonProperty("correct_answer")]
        public string CorrectAnswer { get; set; }
        [JsonProperty("incorrect_answers")]
        public List<string> IncorrectAnswers { get; set; }
        public string Type { get; set; }
        public string Difficulty { get; set; }
        public string Category { get; set; }
    }
}
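
For reference, a question document in the CosmosDB collection would look roughly like this (the shape is inferred from the model above; the values are illustrative, not taken from the actual dump):

{
    "id": "question-1",
    "question": "What is the capital of Australia?",
    "correct_answer": "Canberra",
    "incorrect_answers": ["Sydney", "Melbourne", "Perth"],
    "type": "multiple",
    "difficulty": "easy",
    "category": "Geography"
}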

 

As far as our application is concerned, QuestionModel is a generic data class with nothing GraphQL- or Cosmos-specific in it (just some attributes to help with serialization/deserialization). Now we need to create our GraphQL schema to expose it. We'll make a new folder called Types and a file called Query.cs:

 

using DotNet.GraphQL.CosmosDB.Models;
using HotChocolate.Resolvers;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace DotNet.GraphQL.CosmosDB.Types
{
    public class Query
    {
        public async Task<IEnumerable<QuestionModel>> GetQuestions(IResolverContext context)
        {
            // TODO
        }

        public async Task<QuestionModel> GetQuestion(IResolverContext context, string id)
        {
            // TODO
        }
    }
}

 

This is again a plain C# class, and Hot Chocolate will use it to get the types exposed in our query schema. We've created two methods on the class, one to get all questions and one to get a specific question, and it would be the equivalent of this GraphQL schema:

type QuestionModel {
    id: String
    question: String
    correctAnswer: String
    incorrectAnswers: [String]
    type: String
    difficulty: String
    category: String
}

type Query {
    questions: [QuestionModel]
    question(id: String): QuestionModel
}
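
For example, a client could then issue a query like this against that schema (the id value here is just a placeholder):

{
    question(id: "1") {
        question
        correctAnswer
        incorrectAnswers
    }
}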

You'll also notice that each method takes an IResolverContext, but that doesn't appear in the schema; that's because it's a special Hot Chocolate type that gives us access to the GraphQL context within the resolver function.

But the schema has a lot of nullable properties in it, and we don't want that, so to tackle this we'll create an ObjectType for each of the models we're mapping. Create a class called QueryType:

 

using HotChocolate.Types;

namespace DotNet.GraphQL.CosmosDB.Types
{
    public class QueryType : ObjectType<Query>
    {
        protected override void Configure(IObjectTypeDescriptor<Query> descriptor)
        {
            descriptor.Field(q => q.GetQuestions(default!))
                .Description("Get all questions in the system")
                .Type<NonNullType<ListType<NonNullType<QuestionType>>>>();

            descriptor.Field(q => q.GetQuestion(default!, default!))
                .Description("Get a question")
                .Argument("id", d => d.Type<IdType>())
                .Type<NonNullType<QuestionType>>();
        }
    }
}

 

Here we're using an IObjectTypeDescriptor to define some information about the fields on the Query and the way we want the types exposed in the GraphQL schema, using the built-in GraphQL type system. We'll also do one for the QuestionModel, in QuestionType:

 

using DotNet.GraphQL.CosmosDB.Models;
using HotChocolate.Types;

namespace DotNet.GraphQL.CosmosDB.Types
{
    public class QuestionType : ObjectType<QuestionModel>
    {
        protected override void Configure(IObjectTypeDescriptor<QuestionModel> descriptor)
        {
            descriptor.Field(q => q.Id)
                .Type<IdType>();
        }
    }
}

 

Consuming the GraphQL Schema

Before we implement our resolvers, let's wire the schema up in our application. To do that we'll head back to Startup.cs and register the query, along with Hot Chocolate:

 

public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddSingleton<Query>();

    builder.Services.AddGraphQL(sp =>
        SchemaBuilder.New()
        .AddServices(sp)
        .AddQueryType<QueryType>()
        .Create()
    );
    builder.Services.AddAzureFunctionsGraphQL();
}

 

First off, we're registering the Query as a singleton so it can be resolved. Then we're adding GraphQL from Hot Chocolate; for the schema registration we're using a callback that creates the schema with SchemaBuilder, registering the available services from the dependency injection container and finally adding our QueryType, so GraphQL understands the nuanced type system.

Lastly, we call an extension method provided by the proof of concept code we included earlier to register GraphQL support for Functions.

Implementing Resolvers

For the resolvers in the Query class, we're going to need access to CosmosDB so that we can pull the data from there. We could create a CosmosDB connection ourselves and register it with our dependency injection framework, but that wouldn't take advantage of the input bindings in Functions.

With Azure Functions we can set up an input binding to CosmosDB; specifically, we can have a DocumentClient provided to us, with Functions taking care of connection reuse and the other performance concerns we might hit when working in a serverless environment. And this is where the resolver context, provided by IResolverContext, will come in handy. But first we're going to modify the proof of concept a little, so we can add to that context.

We’ll start by modifying the IGraphQLFunctions interface and adding a new argument to ExecuteFunctionsQueryAsync:

 

Task<IActionResult> ExecuteFunctionsQueryAsync(
    HttpContext httpContext,
    IDictionary<string, object> context,
    CancellationToken cancellationToken);

 

This IDictionary<string, object> will allow us to provide arbitrary additional context information to the resolvers. Now we need to update the implementation in GraphQLFunctions.cs:

 

public async Task<IActionResult> ExecuteFunctionsQueryAsync(
    HttpContext httpContext,
    IDictionary<string, object> context,
    CancellationToken cancellationToken)
{
    using var stream = httpContext.Request.Body;

    var requestQuery = await _requestParser
        .ReadJsonRequestAsync(stream, cancellationToken)
        .ConfigureAwait(false);

    var builder = QueryRequestBuilder.New();

    if (requestQuery.Count > 0)
    {
        var firstQuery = requestQuery[0];

        builder
            .SetQuery(firstQuery.Query)
            .SetOperation(firstQuery.OperationName)
            .SetQueryName(firstQuery.QueryName);

        foreach (var item in context)
        {
            builder.AddProperty(item.Key, item.Value);
        }

        if (firstQuery.Variables != null
            && firstQuery.Variables.Count > 0)
        {
            builder.SetVariableValues(firstQuery.Variables);
        }
    }

    var result = await Executor.ExecuteAsync(builder.Create());
    await _jsonQueryResultSerializer.SerializeAsync((IReadOnlyQueryResult)result, httpContext.Response.Body);

    return new EmptyResult();
}

 

There are two things we've done here: first, adding the new argument so we match the signature of the interface; second, when the QueryRequestBuilder is being set up we loop over the context dictionary and add each item as a property of the resolver context.

And lastly, we need to update the Function itself to have an input binding to CosmosDB, and then provide that to the resolvers:

 

[FunctionName("graphql")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
    ILogger log,
    [CosmosDB(
        databaseName: "trivia",
        collectionName: "questions",
        ConnectionStringSetting = "CosmosDBConnection")] DocumentClient client,
    CancellationToken cancellationToken)
{
    return await _graphQLFunctions.ExecuteFunctionsQueryAsync(
        req.HttpContext,
        new Dictionary<string, object> {
            { "client", client },
            { "log", log }
        },
        cancellationToken);
}
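
For this binding to work locally, the CosmosDBConnection setting it references needs to exist in local.settings.json, something along these lines (the values here are placeholders for your own account):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "CosmosDBConnection": "AccountEndpoint=https://<your-account>.documents.azure.com:443/;AccountKey=<your-key>;"
  }
}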

 

With that sorted, we can implement our resolvers. Let's start with GetQuestions, which grabs all of the questions from CosmosDB:

 

public async Task<IEnumerable<QuestionModel>> GetQuestions(IResolverContext context)
{
    var client = (DocumentClient)context.ContextData["client"];

    var collectionUri = UriFactory.CreateDocumentCollectionUri("trivia", "questions");
    var query = client.CreateDocumentQuery<QuestionModel>(collectionUri)
        .AsDocumentQuery();

    var quizzes = new List<QuestionModel>();

    while (query.HasMoreResults)
    {
        foreach (var result in await query.ExecuteNextAsync<QuestionModel>())
        {
            quizzes.Add(result);
        }
    }

    return quizzes;
}

 

Using the IResolverContext we can access ContextData, a dictionary containing the properties that we injected, one of which is the DocumentClient. From there we create a query against CosmosDB using CreateDocumentQuery and then iterate over the result set, pushing each item into a collection that is returned.

To get a single question we can implement the GetQuestion resolver:

 

public async Task<QuestionModel> GetQuestion(IResolverContext context, string id)
{
    var client = (DocumentClient)context.ContextData["client"];

    var collectionUri = UriFactory.CreateDocumentCollectionUri("trivia", "questions");
    var sql = new SqlQuerySpec("SELECT * FROM c WHERE c.id = @id");
    sql.Parameters.Add(new SqlParameter("@id", id));
    var query = client.CreateDocumentQuery<QuestionModel>(collectionUri, sql, new FeedOptions { EnableCrossPartitionQuery = true })
        .AsDocumentQuery();

    while (query.HasMoreResults)
    {
        foreach (var result in await query.ExecuteNextAsync<QuestionModel>())
        {
            return result;
        }
    }

    throw new ArgumentException("ID does not match a question in the database");
}

 

This time we're creating a SqlQuerySpec to run a parameterised query for the item that matches the provided ID. One other difference is that I needed to enable cross-partition queries in the FeedOptions, because the id field is not the partition key; you may not need that, depending on your CosmosDB schema design. Once the query completes we return the first item, and if none exists we raise an exception that bubbles out as an error from GraphQL.

Conclusion

With all this done, we now have our GraphQL server running in Azure Functions and connected up to a CosmosDB backend, without needing to do any connection management ourselves; that's taken care of by the input binding.

You’ll find the full code of my sample on GitHub.

While this has been a read-only example, you could expand it to support GraphQL mutations and write data to CosmosDB with a few more resolvers, as sketched below.
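
Here's a rough sketch of what such a mutation resolver could look like, reusing the DocumentClient-from-context pattern from the queries (untested; you'd also need to register the class in Startup and call AddMutationType on the SchemaBuilder, plus define an input type for QuestionModel):

using System.Threading.Tasks;
using DotNet.GraphQL.CosmosDB.Models;
using HotChocolate.Resolvers;
using Microsoft.Azure.Documents.Client;

namespace DotNet.GraphQL.CosmosDB.Types
{
    public class Mutation
    {
        public async Task<QuestionModel> AddQuestion(IResolverContext context, QuestionModel question)
        {
            // Same DocumentClient that the input binding pushed into the resolver context
            var client = (DocumentClient)context.ContextData["client"];

            var collectionUri = UriFactory.CreateDocumentCollectionUri("trivia", "questions");

            // CreateDocumentAsync serializes the model and writes it to the collection
            await client.CreateDocumentAsync(collectionUri, question);

            return question;
        }
    }
}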

Something else worth exploring is how you can look at the fields being selected in the query and only retrieve those from CosmosDB. Here we're pulling back all fields, but if you create a query like:

{
    questions {
        id
        question
        correctAnswer
        incorrectAnswers
    }
}

it might be optimal not to retrieve fields like type or category from CosmosDB at all.
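
As a starting point (a sketch only, based on Hot Chocolate 10's IResolverContext.FieldSelection and the syntax nodes in HotChocolate.Language; the mapping from GraphQL names back to document properties is glossed over), you could read the requested field names and build the SQL projection from them:

using System.Linq;
using HotChocolate.Language;
using HotChocolate.Resolvers;

public static class SelectionHelper
{
    public static string BuildSelectClause(IResolverContext context)
    {
        // FieldSelection is the syntax node for the field currently being resolved,
        // e.g. the questions field in the query above
        var requestedFields = context.FieldSelection.SelectionSet.Selections
            .OfType<FieldNode>()
            .Select(f => f.Name.Value);

        // Note: fields like correctAnswer are stored as correct_answer in CosmosDB,
        // so a real implementation would map GraphQL names to document properties
        var projection = string.Join(", ", requestedFields.Select(f => "c." + f));

        return $"SELECT {projection} FROM c";
    }
}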

Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types – 09/09 – Resolved

This article is contributed. See the original author and article here.

Final Update: Wednesday, 09 September 2020 00:17 UTC

We've confirmed that all systems are back to normal with no customer impact as of 09/08, 23:40 UTC. Our logs show the incident started on 09/08, 22:20 UTC and that during the 1 hour and 20 minutes it took to resolve the issue, a small number of customers in the Switzerland North Region experienced intermittent metric data latency and data gaps, as well as incorrect metric alert activation.

  • Root Cause: The failure was due to an issue with one of the backend services.
  • Incident Timeline: 1 hour & 20 minutes – 09/08, 22:20 UTC through 09/08, 23:40 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Jayadev


What’s new: Azure DDoS Protection connector in Public Preview for Azure Sentinel

This article is contributed. See the original author and article here.

This installment is part of a broader series to keep you up to date with the latest features in Azure Sentinel. The installments will be bite-sized to enable you to easily digest the new content.

 

Even more Azure Sentinel connector news for you! If you are using Azure DDoS Protection Standard, you can now ingest its logs via our connector into your Azure Sentinel workspace.

 

In addition to the core DDoS protection in the Azure platform, Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities against network attacks. It’s automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes.

 

Connecting Azure DDoS Protection Standard logs to Azure Sentinel enables you to view and analyze this data in your workbooks, query it to create custom alerts, and incorporate it to improve your investigation process, giving you more insight into your platform security.

 

How to enable Azure DDoS Protection log ingestion in Azure Sentinel

 

Prerequisite – You must have a configured Azure DDoS Protection Standard plan.

 

1. From the Azure Sentinel navigation menu, select Data connectors.

2. Select Azure DDoS Protection from the data connectors gallery, and then select Open Connector Page on the preview pane.

 

3. Enable diagnostic logs on all the public IP address resources whose logs you wish to connect:

a. Select the Open Diagnostics settings > link and choose a Public IP Address resource from the list.

 


b. Select + Add diagnostic setting.

c. In the Diagnostics settings screen:

  • Enter a name in the Diagnostic setting name field.
  • Mark the Send to Log Analytics check box. Two new fields will be displayed below it. Choose the relevant Subscription and Log Analytics Workspace (where Azure Sentinel resides).
  • Mark the check boxes of the log types you want to ingest. We recommend DDoSProtectionNotifications, DDoSMitigationFlowLogs, and DDoSMitigationReports.

d. Click Save at the top of the screen. Repeat this process for any additional public IP addresses for which you have enabled DDoS protection.

 

4. To use the relevant schema in Log Analytics for Azure DDoS Protection alerts, search for AzureDiagnostics. Here's an example query:

 

AzureDiagnostics 
| where ResourceType == "PUBLICIPADDRESSES"
| sort by TimeGenerated
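
If you want to narrow that down to the DDoS-specific entries, you can also filter on the log category, using one of the categories enabled above, for example:

AzureDiagnostics 
| where ResourceType == "PUBLICIPADDRESSES"
| where Category == "DDoSProtectionNotifications"
| sort by TimeGenerated desc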

 

And that's it! You will now have Azure DDoS Protection Standard logs connected to your Sentinel workspace.

Get Started Today!

Try out the new connector and let us know your feedback using any of the channels listed in the Resources.

 

You can also contribute new connectors, workbooks, analytics and more in Azure Sentinel. Get started now by joining the Azure Sentinel Threat Hunters GitHub community and follow the guidance.

 

Master the basics of Microsoft Azure—cloud, data, and AI

This article is contributed. See the original author and article here.

 

Microsoft Learn offers several fundamentals training paths and certifications for Azure—Azure Fundamentals, Azure Data Fundamentals, and Azure AI Fundamentals. Choose the ones that work for you. Use these foundational certifications as a starting point to explore more training for Azure technologies and to chart your path forward. If you're looking to advance your career or to jump-start a new one, the message is the same: establish your foundations.

 

Azure opens a world of possibilities for you in this cloud-based, digital era. Let’s explore a few of them and how they can fit with your plans for growing your skills and expertise.

 

How you can use Azure to grow your career

Azure offers an ever-expanding set of cloud services that can help companies meet business challenges. It offers the freedom to build, manage, and deploy applications on a massive global network using an organization’s favorite tools and frameworks. This opens up many opportunities for IT professionals, depending on their talents and interests.

 

If you’re a developer, you can get your work done faster, take your skills to the next level, and imagine and build tomorrow’s applications.

 

If you’re an IT administrator, Azure cloud infrastructure helps you simplify management, reduce costs, rapidly adjust to changing business demands, and enhance security.

 

If you're a data specialist, Azure can help you unlock the potential of data. Azure enables rapid growth and innovation with a portfolio of secure, enterprise-grade database services that support open-source database engines.

 

If you’re an artificial intelligence (AI) specialist, Azure offers your application an edge over the competition. Just imagine what you can build—an app that translates speech in real time as you’re speaking or an app that helps you identify parts of a motor in a mixed-reality training. The possibilities are endless.

 

Get the Azure training that fits your background and interests  

Interested in Azure, and want to learn more? Use our training offerings to explore the fundamentals of the cloud platform, foundational database concepts in Azure, and the basics of Azure AI.

 

Use Azure fundamentals training to learn the essentials of Azure—architectural components and core Azure services and solutions, plus management tools, compliance, security, and data protection. Learn how to get the best of Azure by growing your skills in cloud computing concepts, models, and services, including public, private, and hybrid cloud. In this training, explore cloud concepts, such as high availability, scalability, elasticity, agility, fault tolerance, and disaster recovery, and get strategies for transitioning to the cloud. To help you start this foundational training, we've curated the Azure Fundamentals collection on Microsoft Learn.

 

Use Azure database training to learn the fundamentals of database concepts in a cloud environment, get basic skilling in cloud data services, and build your foundational knowledge of cloud data services within Azure. Learn core data concepts, such as relational, nonrelational, big data, analytics, and roles, plus tasks and responsibilities in the world of data. To start this foundational training, check out the Azure Data Fundamentals collection on Microsoft Learn we’ve curated for you.

 

Use Azure AI training to explore how Azure provides easy-to-use services to help you get started with building AI solutions. Learn about many areas of AI, including machine learning, which is at the core of AI, and how many modern applications and services depend on predictive machine learning models. Explore computer vision, an area of AI in which software systems are designed to perceive the world visually, through cameras, images, and video. Plus, get the details on natural language processing (NLP), which supports applications that can see, hear, speak with, and understand users, and conversational AI, which deals with dialogs between AI agents and human users. We've curated the Azure AI Fundamentals collection on Microsoft Learn to help you start this foundational training.

 

Choose the right certification for you

Combine your training with a certification that announces your proficiency to the world. A comprehensive path forward for your Azure learning might begin with the foundations of cloud services and could be followed with core data concepts, after which it might move to common machine learning and AI workloads.

 

If you’re a system administrator, developer, or data and AI professional just starting out with Azure or the cloud, consider the Azure Fundamentals certification. This validates your basic knowledge of cloud services and how those services are provided with Azure. It can also help to prepare you for other Azure certifications, but it’s not a prerequisite for any of them.

 

Looking to grow your cloud database expertise? If you’re a developer or a data and AI pro—or even if you’re just beginning to work with data in the cloud—the new Azure Data Fundamentals certification can help you prove your knowledge of core data concepts and how they’re implemented using Azure data services.

 

If you’re getting started in the AI world, consider the Azure AI Fundamentals certification. Use this certification to demonstrate your knowledge of common AI and machine learning workloads and how to implement them on Azure. You don’t need to be a technical professional to take this exam. General programming knowledge will help, but data science or software engineering experience isn’t required.

 

Although not part of the Azure portfolio, the Power Platform Fundamentals certification can help data analysts and Azure developers validate their understanding of core Microsoft Power Platform capabilities, including Power Apps and Power BI.

 

Time to start mastering the basics!

It's time to start growing your skills and building your reputation as an Azure expert. Go to Microsoft Learn, and explore the fundamentals training and related certifications: Azure Fundamentals... checked? Azure Data Fundamentals... checked? Azure AI Fundamentals... checked? Excellent. You're on the path to getting the recognition that you deserve.

 

Related posts

Understanding Microsoft Azure certifications

Finding the right Microsoft Azure certification for you

Azure Security and Frameworks

This article is contributed. See the original author and article here.

Azure provides several mechanisms for securing the Azure platform.

The most popular approach is through Azure Security Center (ASC).

ASC is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud – whether they're in Azure or not – as well as on premises.

https://docs.microsoft.com/en-us/azure/security-center/security-center-intro

 

I'd like to highlight another framework which I'm seeing in use with customers – the Secure DevOps Kit for Azure (AzSK):

https://azsk.azurewebsites.net/

The Secure DevOps Kit for Azure (AzSK) was created by the Core Services Engineering & Operations (CSEO) division at Microsoft to help accelerate Microsoft IT's adoption of Azure. It was later shared with the community to provide guidance for rapidly scanning, deploying and operationalizing cloud resources across the different stages of DevOps, while maintaining controls on security and governance.

 

Microsoft Azure Well-Architected Framework

This article is contributed. See the original author and article here.

 

This is a nice framework that customers have been waiting for. It guides architects through the pillars of architecture excellence: Cost Optimization, Operational Excellence, Performance Efficiency, Reliability, and Security.

https://docs.microsoft.com/en-us/azure/architecture/framework/

 

See also Design Patterns: https://docs.microsoft.com/en-us/azure/architecture/patterns/

And Azure Architecture Center: https://docs.microsoft.com/en-us/azure/architecture/