Monitoring Air Quality with Azure Sphere and Sysinno iAeris


Data is king, of course.

IoT technologies have sprung up to collect data from anything you can imagine, from the status of the fan in a building’s air conditioning unit to the noise level of the lathe on a factory floor. Whole businesses have emerged to turn that data into insights, and those insights into actions that drive value. For example, monitoring the sensors in the buildings on the Microsoft Puget Sound campus, and in the equipment attached to those buildings, has helped Microsoft reduce electrical consumption by over 20 percent. Generally speaking, securely connecting sensors to a cloud-based system with analytics and dashboards is a recipe for improving operations and the environment.


There are two primary ways sensors can connect to the cloud-based system. The most common is for some separate application to query the sensor for the data, either directly from the cloud or using an on-premises gateway to query the sensor and push the data to the cloud. The other is for a sensor with more compute power to create a direct connection to the cloud and push the data itself. Which of these methods is used depends upon the capabilities of the sensor and the enterprise architecture into which the data is to be fed. Two concerns that are top of mind with either method are cost-performance and the security of the data.


Consider the common scenario of monitoring the quality of the air inside or outside of buildings. Air quality monitoring is important for understanding the environment and for enabling building owners to provide a healthier place for people to live and work. Sensors are available on the market which measure the levels of harmful chemicals and particles in the air, and the task for the enterprise is to select a sensor and decide how to get those readings into a centralized system or dashboard that allows the enterprise to take whatever actions are appropriate based upon the levels detected.


Most of the existing air quality sensors are standalone devices that can only respond to queries. The disadvantage of this approach in the context of a large enterprise monitoring environment is that it requires a separate application (a gateway) to issue those queries and forward the data. This introduces additional cost and management effort, as well as potentially increasing security risks if the gateway needs to be accessed remotely (for example over RDP). A company by the name of Sysinno offers an alternative: an air quality sensor with an onboard Azure Sphere chip from Microsoft that can directly and securely connect to the cloud without the need for a local gateway. The onboard Azure Sphere thus reduces operating cost and complexity, and it does so in a highly secure manner.


We’ve written a whitepaper to show how to build an end-to-end solution using the Sysinno iAeris air quality sensor and a number of Azure IoT elements. In addition to showing how to configure the Sysinno detector to send data to Azure IoT Hub, the paper shows how to write an Azure function to send the data to SQL Server, and how to create Power BI and Time Series Insights (TSI) dashboards to display real-time and historical data. At this point, the air quality data could be consumed by any enterprise monitoring system, and furthermore be accessed by a tool such as Dynamics 365 Field Service for generating maintenance work orders or building-wide alerts to occupants. More broadly, the whitepaper shows how to easily build an end-to-end workflow for capturing, storing, and displaying certain types of IoT data. You could use the code shown to display room temperatures, occupancy, noise levels, traffic, or almost any other data for which you have a sensor.


To read the article and see the code, please follow this link to the Sysinno website.

GraphQL on Azure: Part 6 – Subscriptions With SignalR


In our exploration of how to run GraphQL on Azure, we’ve looked at the two most common aspects of a GraphQL server, queries and mutations, so we can get data and store data. Today, we’re going to look at the third piece of the puzzle, subscriptions.


What are GraphQL Subscriptions?


In GraphQL, a Subscription is used as a way to provide real-time data to connected clients. Most commonly, this is implemented over a WebSocket connection, but I’m sure you could do it with long polling or Server Sent Events if you really wanted to (I’ve not gone looking for that!). This allows the GraphQL server to broadcast query responses out when an event happens that the client is subscribed to.


Let’s think about this in the context of the quiz game we’ve been doing. So far the game is modeled for single player, but if we wanted to add multiplayer, we could have the game wait for all players to join, and once they have, broadcast out a message via a subscription that the game is starting.


Defining Subscriptions


Like queries and mutations, subscriptions are defined as part of a GraphQL schema, and they can reuse the types that are available within our schema. Let’s make a really basic schema that contains a subscription:


type Query {
    hello: String!
}

type Subscription {
    getMessage: String!
}

schema {
    query: Query
    subscription: Subscription
}


The Subscription type that we’re defining can contain as many different subscriptions as we want clients to be able to subscribe to, and each might return different data; it’s completely up to the way your server wants to expose real-time information.


Implementing Subscriptions on Azure


For this implementation, we’re going to go back to TypeScript and use Apollo. Apollo have some really great docs on how to implement subscriptions in an Apollo Server, and that’ll be our starting point.


But before we can start pushing messages around, we need to work out what is going to be the messaging backbone of our server. We’re going to need some way in which the server can communicate with all connected clients, either from within a resolver, or from some external event that the server receives.


In Azure, when you want to do real-time communications, there’s no better service to use than SignalR Service. SignalR Service takes care of the protocol selection, connection management and scaling that you would require for a real-time application, so it’s ideal for our needs.


Creating the GraphQL server


In the previous posts, we’ve mostly talked about running GraphQL in a serverless model on Azure Functions, but for a server with subscriptions we’re going to use Azure App Service, since we can’t expose a WebSocket connection from Azure Functions for the clients to connect to.


Apollo provides plenty of middleware options that we can choose from, so for this we’ll use the Express integration, apollo-server-express, and follow the subscriptions setup guide.
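If you’ve not wired that up before, the overall shape looks roughly like this (a minimal sketch based on the Apollo Server 2.x Express integration; the typeDefs import is a placeholder for however you load your schema):

import express from "express";
import { createServer } from "http";
import { ApolloServer } from "apollo-server-express";
import { typeDefs } from "./typeDefs"; // placeholder: load your schema however you prefer
import { resolvers } from "./resolvers";

const server = new ApolloServer({ typeDefs, resolvers });

const app = express();
server.applyMiddleware({ app });

// subscriptions need a raw http.Server so Apollo can attach its WebSocket handler to it
const httpServer = createServer(app);
server.installSubscriptionHandlers(httpServer);

const port = process.env.PORT || 4000;

We’ll see the call to httpServer.listen shortly, once we have a SignalR-backed PubSub engine to start alongside it.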


Adding Subscriptions with SignalR


When it comes to implementing the integration with SignalR, Apollo uses the graphql-subscriptions PubSubEngine class to handle the broadcasting of messages and the connections from clients.


So that means we’re going to need an implementation of that which uses SignalR, and thankfully there is one, @aaronpowell/graphql-signalr-subscriptions (yes, I did write it 😝).


We’ll start by adding that to our project:


npm install --save @aaronpowell/graphql-signalr-subscriptions


You’ll need to create a SignalR Service resource and get the connection string for it (I use dotenv to inject it for local dev) so you can create the PubSub engine. Create a new resolvers.ts file and create the SignalRPubSub instance in it.


import { SignalRPubSub } from "@aaronpowell/graphql-signalr-subscriptions";

export const signalrPubSub = new SignalRPubSub(
    process.env.SIGNALR_CONNECTION_STRING
);


We export this so that we can import it in our index.ts and start the client when the server starts:


// setup ApolloServer
httpServer.listen({ port }, () => {
    console.log(
        `Server ready at http://localhost:${port}${server.graphqlPath}`
    );
    console.log(
        `Subscriptions ready at ws://localhost:${port}${server.subscriptionsPath}`
    );

    signalrPubSub
        .start()
        .then(() => console.log("SignalR up and running"))
        .catch((err: any) => console.error(err));
});


It’s important to note that you must call start() on the instance of the PubSub engine, as this establishes the connection with SignalR, and until that happens you won’t be able to send messages.


Communicating with a Subscription


Let’s use the simple schema from above:


type Query {
    hello: String!
}

type Subscription {
    getMessage: String!
}

schema {
    query: Query
    subscription: Subscription
}

 
In the hello query we’ll broadcast a message, which getMessage subscribers can receive. Let’s start with the hello resolver:


export const resolvers = {
    Query: {
        hello() {
            signalrPubSub.publish("MESSAGE", {
                getMessage: "Hello I'm a message"
            });
            return "Some message";
        }
    }
};


So our hello resolver is going to publish a message with the name MESSAGE and a payload of { getMessage: “…” } to clients. The name is important, as it’s what the subscription resolvers will be configured to listen for, and the payload represents all the possible fields that someone could select in the subscription.


Now we’ll add the resolver for the subscription:


export const resolvers = {
    Query: {
        hello() {
            signalrPubSub.publish("MESSAGE", {
                getMessage: "Hello I'm a message"
            });
            return "Some message";
        }
    },
    Subscription: {
        getMessage: {
            subscribe: () => signalrPubSub.asyncIterator(["MESSAGE"])
        }
    }
};


A resolver for a subscription is a little different to query/mutation/field resolvers, as you need to provide a subscribe method, which is what Apollo will invoke to get back the names of the triggers to listen on. We’re only listening for MESSAGE here (and only broadcasting it), but if you added another publish operation with a name of MESSAGE2, then getMessage subscribers wouldn’t receive it. Alternatively, getMessage could listen to several trigger names, as it might represent an aggregate view of system events.
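To see it in action, you can open two tabs in the GraphQL playground: start the subscription in one, then run the query in the other, and the subscribed tab should receive the broadcast payload:

subscription {
    getMessage
}

# ...and in the other tab
query {
    hello
}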


Conclusion


In this post we’ve been introduced to subscriptions in GraphQL and seen how we can use the Azure SignalR Service as the backend to provide this functionality.


You’ll find the code for the SignalR implementation of subscriptions here and the full example here.



GraphQL on Azure: Part 5 – Can We Make GraphQL Type Safe in Code


I’ve been doing a lot of work recently with GraphQL on Azure Functions and something that I find works nicely is the schema-first approach to designing the GraphQL endpoint.


The major drawback I’ve found though is that you start with a strongly typed schema but lose that type information when implementing the resolvers and working with your data model.


So let’s have a look at how we can tackle that by building an application with GraphQL on Azure Functions and backing it with a data model in CosmosDB, all written in TypeScript.



To learn how to get started with GraphQL on Azure Functions, check out the earlier posts in this series.



Creating our schema


The API we’re going to build today is a trivia API (which uses data from Open Trivia DB as the source).


We’ll start by defining a schema that’ll represent the API as a file named schema.graphql within the graphql folder:



type Question {
    id: ID!
    question: String!
    correctAnswer: String!
    answers: [String!]!
}

type Query {
    question(id: ID!): Question
    getRandomQuestion: Question
}

type Answer {
    questionId: ID
    question: String!
    submittedAnswer: String!
    correctAnswer: String!
    correct: Boolean
}

type Mutation {
    answerQuestion(id: ID, answer: String): Answer
}

schema {
    query: Query
    mutation: Mutation
}



Our schema defines two core types, Question and Answer, along with a few queries and a mutation, and all of these are decorated with GraphQL type annotations that we’d like to see respected in our TypeScript implementation of the resolvers.


Creating a resolver


Let’s start with the query resolvers; these will need to get the data back from CosmosDB and return it to our consumer:


const resolvers = {
    Query: {
        question(_, { id }, { dataStore }) {
            return dataStore.getQuestionById(id);
        },
        async getRandomQuestion(_, __, { dataStore }) {
            const questions = await dataStore.getQuestions();
            // Math.random() is in [0, 1), so flooring already gives a valid index
            return questions[Math.floor(Math.random() * questions.length)];
        }
    }
};

export default resolvers;

This matches the query portion of our schema structurally, but how did we know how to implement the resolver functions? What arguments do question and getRandomQuestion receive? We know that question will receive an id parameter, but how? If we look at this in TypeScript there’s any all over the place, and that means we’re not getting much value from TypeScript.


Here’s where we start having a disconnect between the code we’re writing, and the schema we’re working against.


Enter GraphQL Code Generator


Thankfully, there’s a tool out there that can help solve this for us, GraphQL Code Generator. Let’s set it up by installing the tool:

npm install --save-dev @graphql-codegen/cli

And we’ll set up a config file named codegen.yml in the root of our Functions app:

overwrite: true
schema: "./graphql/schema.graphql"
generates:
    graphql/generated.ts:
        plugins:
            - typescript
            - typescript-resolvers

This will generate a file named generated.ts within the graphql folder using our schema.graphql as the input. The output will be TypeScript and we’re also going to generate the resolver signatures using the typescript and typescript-resolvers plugins, so we best install those too:

npm install --save-dev @graphql-codegen/typescript @graphql-codegen/typescript-resolvers

It’s time to run the generator:

npx graphql-codegen --config codegen.yml
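The exact output depends on your plugin versions, but the generated file contains TypeScript equivalents of the schema types plus typed resolver signatures, along these lines (an illustrative excerpt, not the verbatim output):

export type Question = {
    __typename?: "Question";
    id: Scalars["ID"];
    question: Scalars["String"];
    correctAnswer: Scalars["String"];
    answers: Array<Scalars["String"]>;
};

export type Resolvers<ContextType = any> = {
    Query?: QueryResolvers<ContextType>;
    Mutation?: MutationResolvers<ContextType>;
    // ...plus resolver types for Question, Answer, etc.
};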

Strongly typing our resolvers


We can update our resolvers to use this new type information:

import { Resolvers } from "./generated";

const resolvers: Resolvers = {
    Query: {
        question(_, { id }, { dataStore }) {
            return dataStore.getQuestionById(id);
        },
        async getRandomQuestion(_, __, { dataStore }) {
            const questions = await dataStore.getQuestions();
            return questions[Math.floor(Math.random() * questions.length)];
        }
    }
};

export default resolvers;

Now we can hover over something like id and see that it’s typed as a string, but we’re still missing a piece: what is dataStore, and how do we know what type to make it?


Creating a data store


Start by creating a new file named data.ts. This will house our API to work with CosmosDB, and since we’re using CosmosDB we’ll need to install the node module:

npm install --save @azure/cosmos

Why CosmosDB? CosmosDB has just launched a serverless plan, which works nicely with the idea of a serverless GraphQL host in Azure Functions. A serverless host with a serverless data store sounds like a win all around!


With the module installed we can implement our data store:

import { CosmosClient } from "@azure/cosmos";

export type QuestionModel = {
    id: string;
    question: string;
    category: string;
    incorrect_answers: string[];
    correct_answer: string;
    type: string;
    difficulty: "easy" | "medium" | "hard";
};

interface DataStore {
    getQuestionById(id: string): Promise<QuestionModel>;
    getQuestions(): Promise<QuestionModel[]>;
}

class CosmosDataStore implements DataStore {
    #client: CosmosClient;
    #databaseName = "trivia";
    #containerName = "questions";

    #getContainer = () => {
        return this.#client
            .database(this.#databaseName)
            .container(this.#containerName);
    };

    constructor(client: CosmosClient) {
        this.#client = client;
    }

    async getQuestionById(id: string) {
        const container = this.#getContainer();

        const question = await container.items
            .query({
                query: "SELECT * FROM c WHERE c.id = @id",
                parameters: [{ name: "@id", value: id }]
            })
            .fetchAll();

        return question.resources[0];
    }

    async getQuestions() {
        const container = this.#getContainer();

        const question = await container.items
            .query({
                query: "SELECT * FROM c"
            })
            .fetchAll();

        return question.resources;
    }
}

export const dataStore = new CosmosDataStore(
    new CosmosClient(process.env.CosmosDB)
);

This class will receive a CosmosClient that gives us the connection to query CosmosDB, and it provides the two functions that we used in the resolver. We’ve also got a data model, QuestionModel, that represents how we’re storing the data in CosmosDB.



To create a CosmosDB resource in Azure, check out their quickstart and here is a data sample that can be uploaded via the Data Explorer in the Azure Portal.



To make this available to our resolvers, we’ll add it to the GraphQL context by extending index.ts:

import { ApolloServer } from "apollo-server-azure-functions";
import { importSchema } from "graphql-import";
import resolvers from "./resolvers";
import { dataStore } from "./data";

const server = new ApolloServer({
    typeDefs: importSchema("./graphql/schema.graphql"),
    resolvers,
    context: {
        dataStore
    }
});

export default server.createHandler();

If we run the server, we’ll be able to query the endpoint and have it pull data from CosmosDB, but our resolver is still lacking a type for dataStore, and to fix that we’ll need to give the generator some custom type information.


Custom context types


So far, the types we’re generating are all based off what’s in our GraphQL schema, and that mostly works, but there are gaps. One of those gaps is how we use the request context in a resolver; since the context doesn’t exist as far as the schema is concerned, we need to do something more for the type generator.


Let’s define the context type first by adding this to the bottom of data.ts:

export type Context = {
    dataStore: DataStore;
};

Now we can tell GraphQL Code Generator to use this by modifying our config:

overwrite: true
schema: "./graphql/schema.graphql"
generates:
    graphql/generated.ts:
        config:
            contextType: "./data#Context"
        plugins:
            - "typescript"
            - "typescript-resolvers"

We added a new config node in which we specify the contextType in the form of <path>#<type name>. When we run the generator, that type is used, and now dataStore is typed in our resolvers!


Custom models


It’s time to run our Function locally.

npm start

And let’s query it. We’ll grab a random question:

{
    getRandomQuestion {
        id
        question
        answers
    }
}

Unfortunately, this fails with the following error:



Cannot return null for non-nullable field Question.answers.



If we refer back to our Question type in the GraphQL schema:

type Question {
    id: ID!
    question: String!
    correctAnswer: String!
    answers: [String!]!
}

This error message makes sense as answers is a non-nullable array of non-nullable strings ([String!]!), but if that’s compared to our data model in Cosmos:

export type QuestionModel = {
    id: string;
    question: string;
    category: string;
    incorrect_answers: string[];
    correct_answer: string;
    type: string;
    difficulty: "easy" | "medium" | "hard";
};

Well, there’s no answers field; we only have incorrect_answers and correct_answer.


It’s time to extend our generated types a bit further using custom models. We’ll start by updating the config file:

overwrite: true
schema: "./graphql/schema.graphql"
generates:
    graphql/generated.ts:
        config:
            contextType: "./data#Context"
            mappers:
                Question: ./data#QuestionModel
        plugins:
            - "typescript"
            - "typescript-resolvers"

With the mappers section, we’re telling the generator that when it finds the Question type in the schema, it should use QuestionModel as the parent type.


But this still doesn’t tell GraphQL how to create the answers field; for that we’ll need to define a resolver on the Question type:

import { Resolvers } from "./generated";

const resolvers: Resolvers = {
    Query: {
        question(_, { id }, { dataStore }) {
            return dataStore.getQuestionById(id);
        },
        async getRandomQuestion(_, __, { dataStore }) {
            const questions = await dataStore.getQuestions();
            return questions[Math.floor(Math.random() * questions.length)];
        }
    },

    Question: {
        answers(question) {
            return question.incorrect_answers
                .concat([question.correct_answer])
                .sort();
        },
        correctAnswer(question) {
            return question.correct_answer;
        }
    }
};

export default resolvers;

These field resolvers receive the parent as their first argument, which is the QuestionModel, and are expected to return the type as defined in the schema, making it possible to map data between types as required.


If you restart your Azure Functions app and execute the query from before, a random question is returned from the API.
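The response should look something like this (the question here is made up for illustration; yours will come from whatever you loaded into CosmosDB):

{
    "data": {
        "getRandomQuestion": {
            "id": "1",
            "question": "Which planet in our solar system is known as the Red Planet?",
            "answers": ["Earth", "Jupiter", "Mars", "Venus"]
        }
    }
}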


Conclusion


We’ve taken a look at how we can build on the idea of deploying GraphQL on Azure Functions and looked at how we can use the GraphQL schema, combined with our own models, to enforce type safety with TypeScript.


We didn’t implement the mutation in this post; that’s an exercise for you, the reader, to tackle.


You can check out the full example, including how to connect it with a React front end, on GitHub.



Modern Application Development Overview


This blog will provide an overview of modern application development. I will first define the modern application development approach, then delve into its ‘7 building blocks’, starting with cloud-native architecture, followed by AI, integration, data, software delivery, operations, and security.


Each segment will define and explain a building block and show how the modern application development approach leverages it to produce more robust applications.


What is Modern Application Development (MAD)?


Modern application development is an approach that enables you to innovate rapidly by using cloud-native architectures with loosely coupled microservices, managed databases, AI, DevOps support, and built-in monitoring.


The resulting modern applications leverage cloud-native architectures by packaging code and dependencies in containers and deploying them as microservices, increasing developer velocity through DevOps practices.


Modern applications also utilize continuous integration and delivery (CI/CD) technologies and processes to improve system reliability. They employ automation to identify and quickly mitigate issues, applying best practices like infrastructure as code, and they increase data security with threat detection and protection.


Lastly, modern applications are faster to build and evolve: they infuse AI to reduce manual tasks and accelerate workflows, and they introduce low-code application development tools to simplify and expedite development processes.


Cloud-native architectures


According to the Cloud Native Computing Foundation (CNCF), cloud native is defined as follows: "Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.


Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.


These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”


Utilizing that definition, what are the key tenets of a cloud-native approach, and how does each tenet benefit you?


As stated above, cloud-native architectures center on speed and agility. That speed and agility are derived from 6 factors:


1. Cloud infrastructure


2. Modern design


3. Microservices


4. Containers


5. Backing services


6. Automation.


Cloud infrastructure is the most important factor that contributes to the speed and agility of cloud-native architecture.


3 Key Factors


1. Cloud-native systems fully leverage the cloud service model using PaaS compute infrastructure and managed services.


2. Cloud-native systems continue to run as infrastructure scales in or out without worrying about the back end because the infra is fully managed.


3. Cloud-native systems have auto scaling, self-healing, and monitoring capabilities.


Modern design is highly effective in part due to the Twelve-Factor App methodology, a set of principles and practices that developers follow to construct applications optimized for modern cloud environments.


Most Critical Considerations for Modern Design


1. Communication: How front ends communicate with back-end services, and how back-end services communicate with each other.


2. Resiliency: How services in your distributed architecture respond in less-than-ideal scenarios, given that microservices trade the in-process calls of a monolith for out-of-process network communication.


3. Distributed data: How do you query data or implement a transaction across multiple services?


4. Identity: How does your service identify who is accessing it and their allotted permissions?


What are Microservices?


Microservices are built as a distributed set of small, independent services that interact through a shared fabric.


Improved Agility with Microservices


1. Each microservice has an autonomous lifecycle and can evolve independently and deploy frequently.


2. Each microservice can scale independently, enabling services to scale to meet demand.


Those microservices are then packaged into container images, which are stored in a container registry. When needed, you transform an image into a running container instance to use the microservice it holds. How do containers benefit cloud-native apps?


Benefits of Containers


1. Provide portability and guarantee consistency across environments.


2. Containers can isolate microservices and their dependencies from the underlying infrastructure.


3. Smaller footprints than full virtual machines (VMs). That smaller size increases density: the number of microservices that a given host can run at a time.


Cloud native solutions also increase application speed and agility via backing services.


Benefits of Backing Services


1. Save time and labor


2. Treating backing services as attached resources enables them to be attached and detached as needed, without code changes to the microservices that consume them, enabling greater dynamism.


Lastly, cloud-native solutions leverage automation. Using cloud-native architectures, your infrastructure and deployment are automated, consistent, and repeatable.


Benefits of Automation


1. Infrastructure as Code (IaC) avoids manual environment configuration and delivers stable environments rapidly at scale.


2. Automated deployment leverages CI/CD to speed up innovation and deployment, updating on-demand; saving money and time.


Artificial Intelligence


The second building block in the modern application development approach is Artificial intelligence (AI).


What comprises artificial intelligence? How do I add AI to my applications? Azure Artificial Intelligence comprises machine learning, knowledge mining, and AI apps and agents. Under the apps and agents domain there are two overarching products, Azure Cognitive Services and Azure Bot Service, that we’re going to focus on.


Cognitive Services is a collection of domain-specific, pre-trained AI models that can be customized with your data. Bot Service is a purpose-built bot development environment with out-of-the-box templates. To learn how to add AI to your applications, watch the short video titled “Easily add AI to your applications.”


Innate Benefits


User benefits: Translation, chatbots, and voice for AI-enabled user interfaces.


Business benefits: Enhanced business logic for scenarios like search, personalization, document processing, image analytics, anomaly detection, and speech analytics.


Modern Application Development unique benefit:


Enable developers of any skill level to add AI capabilities to their applications with pre-built and customizable AI models for speech, vision, language, and decision-making.


Integration


The third building block is integration.


 


Applications need integration to connect multiple independent systems. The four core cloud services to meet integration needs are:


1. A way to publish and manage application programming interfaces (APIs).


2. A way to create and run integration logic, typically with a graphical tool for defining the workflow’s logic.


3. A way for applications and integration technologies to communicate in a loosely coupled way via messaging.


4. A technology that supports communication via events.


What are the benefits of Azure integration services and how do they translate to the modern app dev approach?


Azure meets all four needs: the first is met by Azure API Management, the second by Azure Logic Apps, the third by Azure Service Bus, and the fourth by Azure Event Grid.


The four components of Azure Integration Services address the core requirements of application integration. Yet real scenarios often require more, and this is where the modern application development approach comes into play.


Perhaps your integration application needs a place to store unstructured data, or a way to include custom code that does specialized data transformations.


Azure Integration Services is part of the larger Azure cloud platform, making it easier to integrate data and APIs into your modern app to meet your needs.


You might store unstructured data in Azure Data Lake Store, for instance, or write custom code using Azure Functions to meet serverless compute needs.


Data


The fourth building block is data, and more specifically managed databases.


What are the advantages of managed databases?


Fully managed, cloud-based databases provide limitless scale, low-latency access to rich data, and advanced data protection — all built in, regardless of languages or frameworks.


How does the modern application development approach benefit from fully managed databases?


Modern application development leverages microservices and containers; the benefit of both technologies is their ability to operate independently and scale as demand warrants.


To ensure the greatest user satisfaction and app functionality, the limitless scale and low-latency access to data that managed databases provide enable apps to run unimpeded.


Software Delivery


The fifth building block is software delivery.


What constitutes modern development software delivery practices?


Modern app development software delivery practices enable you to meet rapid market changes that require shorter release cycles without sacrificing quality, stability, and security.


The practices help you to release in a fast, consistent, and reliable way by using highly productive tools, automating mundane and manual steps, and iterating in small increments through CI/CD and DevOps practices.


What is DevOps?


A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers. DevOps enables formerly siloed roles — development, IT operations, quality engineering, and security — to coordinate and collaborate to produce better, more reliable products.


By adopting a DevOps culture along with DevOps practices and tools, teams gain the ability to better respond to customer needs, increase confidence in the applications they build, and achieve development goals faster.


DevOps influences the application lifecycle throughout its plan, develop, deliver, and operate phases.


Plan


In the plan phase, DevOps teams ideate, define, and describe features and capabilities of the applications and systems they are building. Creating backlogs, tracking bugs, managing agile software development with Scrum, using Kanban boards, and visualizing progress with dashboards are some of the ways DevOps teams plan with agility and visibility.


Develop


The develop phase includes all aspects of coding — writing, testing, reviewing, and the integration of code by team members — as well as building that code into build artifacts that can be deployed into various environments. To develop rapidly, teams use highly productive tools, automate mundane and manual steps, and iterate in small increments through automated testing and continuous integration.


Deliver


Delivery is the process of deploying applications into production environments and deploying and configuring the fully governed foundational infrastructure that makes up those environments.


In the deliver phase, teams define a release management process with clear manual approval stages. They also set automated gates that move applications between stages until they’re made available to customers.


Operate


The operate phase involves maintaining, monitoring, and troubleshooting applications in production environments. In adopting DevOps practices, teams work to ensure system reliability, high availability, and aim for zero downtime while reinforcing security and governance.


What is CI/CD?


Under continuous integration, the develop phase — building and testing code — is fully automated. Each time you commit code, changes are validated and merged to the master branch, and the code is packaged in a build artifact.


Under continuous delivery, anytime a new build artifact is available, the artifact is automatically placed in the desired environment and deployed. With continuous deployment, you automate the entire process from code commit to production.
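To make that concrete, here is what a simple CI workflow for a Node.js app might look like, expressed as a GitHub Actions pipeline (one illustrative option among many; the branch name and npm scripts are assumptions about your repository):

name: CI
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: "14"
      # restore dependencies, validate the change, and produce the build artifact
      - run: npm ci
      - run: npm test
      - run: npm run build

Continuous delivery would then pick up the artifact produced by the last step and promote it through your environments.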


Operations


The sixth building block is operations, with a focus on maximizing automation.


How do you maximize automation in your modern application development approach?


With an increasingly complex environment to manage, maximizing the use of automation helps you improve operational efficiency, identify issues before they affect customer experiences, and quickly mitigate issues when they occur.


Fully managed platforms provide automated logging, scaling, and high availability. Rich telemetry, actionable alerting, and full visibility into applications and the underlying system are key to a modern application development approach.


Automating regular checkups and applying best practices like infrastructure as code and site reliability engineering promotes resiliency and helps you respond to incidents with minimal downtime and data loss.


Security


The seventh building block is multilayered security.


Why do I need multilayered security in my modern applications?


Modern applications require multilayered security across code, delivery pipelines, app runtimes, and databases. Start by providing developers secure dev boxes with well-governed identity. As part of the DevOps lifecycle, use automated tools to examine dependencies in code repositories and scan for vulnerabilities as you deploy apps to the target environment.
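For example, in a Node.js project, one of those automated dependency checks can be as simple as an audit step in the pipeline (one possible approach; the severity threshold is up to your team):

npm audit --audit-level=moderate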


Enterprise-grade secrets and policy management encrypts application secrets and gives the operations team centralized policy enforcement. With fully managed compute and database services, security control is built in and threat protection is executed in real time.


Conclusion


While modern application development can seem daunting, it is an approach that can be adopted iteratively, and each step can yield large benefits for your team.


Access webinars, analyst reports, tutorials, and more on the Modern application development on Azure page.

A journey to green labelling


Before joining Microsoft and falling in love with the technology, platform, and possibilities that the Azure public cloud provides, I was (and for a 2-digit number of years, gosh) an expert in the contact center and telco market: the technology that companies use to provide customer service for their products. Lately, my focus has been mostly on customer experience, as power has clearly shifted over the years from the technology and technologists to the end-users, whose perception of experience contributes to the success (or failure) of a company.


As a part of my customer experience work, I researched how to apply the idea of a Net Promoter Score. I was really drawn to the technique of using a single question to define if something was going to succeed or fail. But the more I saw companies using NPS, the more I realized this approach omitted an entire important set of choices a company can make: green choices.


While a customer is navigating your virtual space, such as a website or mobile app, or even your physical store, there is nothing that communicates a green option for the product or for the technology used to bring that product to the end user.


When I think about my e-commerce experiences, which started back in the year 2000, the closest example to a green option was the energy consumption label on some appliances. I searched the web and found the EU energy consumption labels only apply to the following categories: appliances (dishwashers, refrigerators, etc.), air conditioners, light bulbs, cars, televisions, houses, and tires. When buying a large appliance, this label helped me, as a consumer, to pick the one that was more energy efficient. While choosing an appliance that consumes less energy could be framed as a “greener” choice, in most cases, it’s framed more like a “cost savings” choice.


The point is we need to start doing something at all levels, and little changes can lead to a great impact if we concentrate our efforts in the same direction. Consumers make many small choices every day on the products they buy. In many cases, they have little or no knowledge about the carbon impact those choices have.


But what if the end-user could be more knowledgeable about the carbon impact of their purchases? Thinking of my own experiences as a user, I’d like to see in the foreseeable future something like:



  • An energy consumption label (with street-light color code and A to F rating) on computers and devices.

  • How recyclable a device is.

  • Sustainable software, knowing that the software used in the device was created with sustainability paths and best practices and will allow the user choices on energy consumption.

  • Active and real-time information from energy suppliers about the carbon impact of the energy consumed in my house. A device might be labeled as low carbon impact, but knowing from the energy supplier the greenest moment to charge it (i.e. the time of day when energy is produced with alternative sources) is something that needs to happen at the user level and is highly dependent on location.


Omitting carbon impact information from a product undervalues a customer’s desire to reduce their carbon footprint through their purchasing choices. Adding this labeling opens up a lot of potential for both consumers and companies to make more sustainable choices. For companies, this could even mean leveraging “green loyalty,” a marketing technique that can help people feel more active in their consumer choices on sustainability.


Today, a customer has the ability to make some green choices, such as conserving water, recycling, using reusable items and shopping bags, etc. Adding labeling around carbon impact would give customers significantly more choice. Despite the upfront challenges in providing this information, carbon impact labeling would allow products with a lower carbon impact to differentiate themselves. This could produce better products as well as a reduction in carbon emissions, and an overall education of technology users to search for the greener option.