by Contributed | Apr 6, 2021 | Technology
This article is contributed. See the original author and article here.
In our exploration of how to run GraphQL on Azure, we’ve looked at the two most common aspects of a GraphQL server, queries and mutations, so we can get data and store data. Today, we’re going to look at the third piece of the puzzle, subscriptions.
What are GraphQL Subscriptions?
In GraphQL, a Subscription is used as a way to provide real-time data to connected clients. Most commonly, this is implemented over a WebSocket connection, but I’m sure you could do it with long polling or Server Sent Events if you really wanted to (I’ve not gone looking for that!). This allows the GraphQL server to broadcast query responses out when an event happens that the client is subscribed to.
Let’s think about this in the context of the quiz game we’ve been building. So far the game is modeled for single player, but if we wanted to add multiplayer, we could have the game wait for all players to join and, once they have, broadcast out a message via a subscription that the game is starting.
Defining Subscriptions
Like queries and mutations, subscriptions are defined as part of a GraphQL schema, and they can reuse the types that are available within our schema. Let’s make a really basic schema that contains a subscription:
type Query {
hello: String!
}
type Subscription {
getMessage: String!
}
schema {
query: Query
subscription: Subscription
}
The subscription type we’re defining can expose as many different subscriptions as we like for clients to subscribe to, and each might return different data; it’s completely up to how your server wants to expose real-time information.
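For illustration, a client would subscribe with an operation like the following (the operation name OnMessage is just a hypothetical label):

```graphql
subscription OnMessage {
  getMessage
}
```

Each time the server broadcasts to this subscription, the connected client receives a new result containing the getMessage field.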
Implementing Subscriptions on Azure
For this implementation, we’re going to go back to TypeScript and use Apollo. Apollo have some really great docs on how to implement subscriptions in an Apollo Server, and that’ll be our starting point.
But before we can start pushing messages around, we need to work out what’s going to be the messaging backbone of our server. We’re going to need some way for the server to communicate with all connected clients, whether from within a resolver or from some external event that the server receives.
In Azure, when you want to do real-time communications, there’s no better service to use than SignalR Service. SignalR Service takes care of the protocol selection, connection management and scaling that you would require for a real-time application, so it’s ideal for our needs.
Creating the GraphQL server
In the previous posts, we’ve mostly talked about running GraphQL in a serverless model on Azure Functions, but for a server with subscriptions we’re going to use Azure App Service, since we can’t expose a WebSocket connection from Azure Functions for the clients to connect to.
Apollo provides plenty of middleware options that we can choose from, so for this we’ll use the Express integration, apollo-server-express, and follow the subscriptions setup guide.
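As a rough sketch of that wiring (assuming the Apollo Server 2.x API that was current at the time; the typeDefs and resolvers imports are illustrative names for your own schema and resolver modules):

```typescript
import express from "express";
import { createServer } from "http";
import { ApolloServer } from "apollo-server-express";
import { typeDefs } from "./typeDefs";
import { resolvers } from "./resolvers";

const app = express();
const server = new ApolloServer({ typeDefs, resolvers });
server.applyMiddleware({ app });

// Subscriptions need access to the raw HTTP server so Apollo can
// attach its WebSocket handler alongside the Express routes.
const httpServer = createServer(app);
server.installSubscriptionHandlers(httpServer);

const port = process.env.PORT || 4000;
```

With that in place, the same httpServer serves queries over HTTP and subscriptions over WebSockets from a single port.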
Adding Subscriptions with SignalR
When it comes to implementing the integration with SignalR, Apollo uses the graphql-subscriptions PubSubEngine class to handle the broadcasting of messages and the connections from clients.
So that means we’re going to need an implementation of that which uses SignalR, and thankfully there is one, @aaronpowell/graphql-signalr-subscriptions (yes, I did write it 😜).
We’ll start by adding that to our project:
npm install --save @aaronpowell/graphql-signalr-subscriptions
You’ll need to create a SignalR Service resource and get its connection string (I use dotenv to inject it for local dev) so you can create the PubSub engine. Create a new resolvers.ts file and create the SignalRPubSub instance in it:
import { SignalRPubSub } from "@aaronpowell/graphql-signalr-subscriptions";
export const signalrPubSub = new SignalRPubSub(
process.env.SIGNALR_CONNECTION_STRING
);
We export this so that we can import it in our index.ts and start the client when the server starts:
// setup ApolloServer
httpServer.listen({ port }, () => {
console.log(
` Server ready at http://localhost:${port}${server.graphqlPath}`
);
console.log(
` Subscriptions ready at ws://localhost:${port}${server.subscriptionsPath}`
);
signalrPubSub
.start()
.then(() => console.log(" SignalR up and running"))
.catch((err: any) => console.error(err));
});
It’s important to note that you must call start() on the instance of the PubSub engine, as this establishes the connection with SignalR, and until that happens you won’t be able to send messages.
Communicating with a Subscription
Let’s use the simple schema from above:
type Query {
hello: String!
}
type Subscription {
getMessage: String!
}
schema {
query: Query
subscription: Subscription
}
In the hello query we’ll broadcast a message, which the getMessage can subscribe to. Let’s start with the hello resolver:
export const resolvers = {
Query: {
hello() {
signalrPubSub.publish("MESSAGE", {
getMessage: "Hello I'm a message"
});
return "Some message";
}
}
};
So our hello resolver is going to publish a message with the name MESSAGE and a payload of { getMessage: “…” } to clients. The name is important, as it’s what the subscription resolvers will be configured to listen for, and the payload represents all the possible fields that someone could select in the subscription.
Now we’ll add the resolver for the subscription:
export const resolvers = {
Query: {
hello() {
signalrPubSub.publish("MESSAGE", {
getMessage: "Hello I'm a message"
});
return "Some message";
}
},
Subscription: {
getMessage: {
subscribe: () => signalrPubSub.asyncIterator(["MESSAGE"])
}
}
};
A resolver for a subscription is a little different to query/mutation/field resolvers, as you need to provide a subscribe method, which is what Apollo will invoke to get the names of the triggers to listen on. We’re only listening for MESSAGE here (but also only broadcasting it); if you added another publish operation with a name of MESSAGE2, getMessage subscribers wouldn’t receive it. Alternatively, getMessage could listen on several trigger names, as it might represent an aggregate view of system events.
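To make the trigger-name filtering concrete, here’s a minimal in-memory stand-in for the PubSubEngine contract. This is a hypothetical sketch for illustration only; the SignalR-backed engine exposes the same publish/asyncIterator shape but broadcasts through Azure rather than in-process.

```typescript
// A minimal in-memory stand-in for graphql-subscriptions' PubSubEngine contract.
type Handler = (payload: unknown) => void;

class InMemoryPubSub {
  private handlers = new Map<string, Set<Handler>>();

  // Deliver a payload to every subscriber registered for this trigger name.
  async publish(trigger: string, payload: unknown): Promise<void> {
    for (const handler of this.handlers.get(trigger) ?? []) {
      handler(payload);
    }
  }

  // Yield each payload published to any of the given trigger names.
  asyncIterator<T>(triggers: string[]): AsyncIterableIterator<T> {
    const queue: T[] = [];
    const waiting: Array<(result: IteratorResult<T>) => void> = [];

    const handler: Handler = payload => {
      const resolve = waiting.shift();
      if (resolve) {
        resolve({ value: payload as T, done: false });
      } else {
        queue.push(payload as T);
      }
    };

    for (const trigger of triggers) {
      if (!this.handlers.has(trigger)) {
        this.handlers.set(trigger, new Set());
      }
      this.handlers.get(trigger)!.add(handler);
    }

    return {
      next: () =>
        queue.length > 0
          ? Promise.resolve({ value: queue.shift()!, done: false })
          : new Promise<IteratorResult<T>>(resolve => waiting.push(resolve)),
      return: () => {
        // Unsubscribe when the client disconnects.
        for (const trigger of triggers) {
          this.handlers.get(trigger)?.delete(handler);
        }
        return Promise.resolve({ value: undefined, done: true as const });
      },
      throw: err => Promise.reject(err),
      [Symbol.asyncIterator]() {
        return this;
      },
    };
  }
}
```

Payloads published under other trigger names never reach this iterator, which is exactly why getMessage subscribers wouldn’t see a MESSAGE2 publish.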
Conclusion
In this post we’ve been introduced to subscriptions in GraphQL and seen how we can use the Azure SignalR Service as the backend to provide this functionality.
You’ll find the code for the SignalR implementation of subscriptions here and the full example here.
by Contributed | Apr 6, 2021 | Technology
This article is contributed. See the original author and article here.
This blog will provide an overview of modern application development. I will first define the modern application development approach, then delve into its seven ‘building blocks’, starting with cloud-native architecture, followed by AI, integration, data, software delivery, operations, and security.
Each segment will define and explain the ‘building block’ and how the modern application development approach leverages it to produce more robust applications.
What is Modern Application Development (MAD)?
Modern application development is an approach that enables you to innovate rapidly by using cloud-native architectures with loosely coupled microservices, managed databases, AI, DevOps support, and built-in monitoring.
The resulting modern applications leverage cloud native architectures by packaging code and dependencies in containers and deploying them as microservices to increase developer velocity using DevOps practices.
Subsequently, modern applications utilize continuous integration and continuous delivery (CI/CD) technologies and processes to improve system reliability. Modern apps employ automation to identify and quickly mitigate issues, applying best practices like infrastructure as code, and increase data security with threat detection and protection.
Lastly, modern applications are built faster by infusing AI to reduce manual tasks and accelerate workflows, and by introducing low-code application development tools to simplify and expedite development processes.
Cloud-native architectures
According to The Cloud Native Computing Foundation (CNCF), cloud native is defined as “Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.
Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”
Utilizing that definition, what are the key tenets of a cloud-native approach, and how does each tenet benefit you?
As stated above, cloud-native architectures center on speed and agility. That speed and agility are derived from 6 factors:
1. Cloud infrastructure
2. Modern design
3. Microservices
4. Containers
5. Backing services
6. Automation.

Cloud infrastructure is the most important factor that contributes to the speed and agility of cloud-native architecture.
3 Key Factors
1. Cloud-native systems fully leverage the cloud service model using PaaS compute infrastructure and managed services.
2. Cloud-native systems continue to run as the infrastructure scales in or out, without you worrying about the back end, because the infrastructure is fully managed.
3. Cloud-native systems have auto scaling, self-healing, and monitoring capabilities.
Modern design is highly effective in part due to the Twelve-Factor App methodology, a set of principles and practices that developers follow to construct applications optimized for modern cloud environments.
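For example, factor III of the Twelve-Factor method says to store configuration in the environment rather than in code, so the same build runs unchanged in every environment. A minimal sketch of that principle (the helper and variable names here are hypothetical, not from any particular library):

```typescript
// Twelve-Factor, factor III: config lives in the environment, not in code.
// A small helper that reads a setting and fails fast when it's missing.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = (globalThis as any).process?.env ?? {}
): string {
  const value = env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: the same artifact deploys everywhere; only the env vars change.
// const dbUrl = requireEnv("DATABASE_URL");
```

Failing fast on missing settings surfaces misconfiguration at startup rather than deep inside a request.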
Most Critical Considerations for Modern Design
1. Communication — How front ends communicate with back-end services, and how back-end services communicate with each other.
2. Resiliency — How services in your distributed architecture respond in less-than-ideal scenarios, given the in-process and out-of-process network communication of a microservices architecture.
3. Distributed Data — How do you query data or implement a transaction across multiple services?
4. Identity — How does your service identify who is accessing it and their allotted permissions?
What are Microservices?
Microservices are built as a distributed set of small, independent services that interact through a shared fabric.

Improved Agility with Microservices
1. Each microservice has an autonomous lifecycle and can evolve independently and deploy frequently.
2. Each microservice can scale independently, enabling services to scale to meet demand.
Those microservices are then packaged as container images, and those images are stored in a container registry. When needed, you transform an image into a running container instance to utilize the packaged microservice. How do containers benefit cloud-native apps?
Benefits of Containers
1. Provide portability and guarantee consistency across environments.
2. Containers can isolate microservices and their dependencies from the underlying infrastructure.
3. Smaller footprints than full virtual machines (VMs). That smaller size increases density: the number of microservices that a given host can run at a time.
Cloud native solutions also increase application speed and agility via backing services.

Benefits of Backing Services
1. Save time and labor
2. Treating backing services as attached resources lets them be attached and detached as needed, without code changes to the microservices that consume them, enabling greater dynamism.
Lastly, cloud-native solutions leverage automation. Using cloud-native architectures, your infrastructure and deployment are automated, consistent, and repeatable.
Benefits of Automation
1. Infrastructure as Code (IaC) avoids manual environment configuration and delivers stable environments rapidly at scale.
2. Automated deployment leverages CI/CD to speed up innovation and deployment, updating on demand and saving money and time.
Artificial Intelligence
The second building block in the modern application development approach is Artificial intelligence (AI).
What comprises artificial intelligence? How do I add AI to my applications? Azure Artificial Intelligence comprises machine learning, knowledge mining, and AI apps and agents. Under the apps and agents domain there are two overarching products, Azure Cognitive Services and Azure Bot Service, that we’re going to focus on.
Cognitive Services is a collection of domain-specific, pre-trained AI models that can be customized with your data. Bot Service is a purpose-built bot development environment with out-of-the-box templates. To learn how to add AI to your applications, watch the short video titled “Easily add AI to your applications.”

Innate Benefits
User benefits: Translation, chatbots, and voice for AI-enabled user interfaces.
Business benefits: Enhanced business logic for scenarios like search, personalization, document processing, image analytics, anomaly detection, and speech analytics.
Modern Application Development unique benefit:
Enable developers of any skill to add AI capabilities to their applications with pre-built and customizable AI models for speech, vision, language, and decision-making.
Integration
The third building block is integration.
Why is integration needed, and how is it accomplished?
Integration is needed to connect multiple independent systems so that they work together as one application. The four core cloud services that meet integration needs are:
1. A way to publish and manage application programming interfaces (APIs).
2. A way to create and run integration logic, typically with a graphical tool for defining the workflow’s logic.
3. A way for applications and integration technologies to communicate in a loosely coupled way via messaging.
4. A technology that supports communication via events.

What are the benefits of Azure integration services and how do they translate to the modern app dev approach?
Azure meets all four needs: the first is met by Azure API Management, the second by Azure Logic Apps, the third by Azure Service Bus, and the fourth by Azure Event Grid.
The four components of Azure Integration Services address the core requirements of application integration. Yet real scenarios often require more, and this is where the modern application development approach comes into play.
Perhaps your integration application needs a place to store unstructured data, or a way to include custom code that does specialized data transformations.
Azure Integration Services is part of the larger Azure cloud platform, making it easier to integrate data and APIs into your modern app to meet your needs.
You might store unstructured data in Azure Data Lake Store, for instance, or write custom code using Azure Functions to meet serverless compute needs.
Data
The fourth building block is data, and more specifically managed databases.
What are the advantages of managed databases?
Fully managed, cloud-based databases provide limitless scale, low-latency access to rich data, and advanced data protection — all built in, regardless of languages or frameworks.
How does the modern application development approach benefit from fully managed databases?
Modern application development leverages microservices and containers; the benefit of both technologies is their ability to operate independently and scale as demand warrants.
The limitless scale and low-latency access to data that managed databases provide let those apps run unimpeded, ensuring the greatest user satisfaction and app functionality.
Software Delivery
The fifth building block is software delivery.
What constitutes modern development software delivery practices?
Modern app development software delivery practices enable you to meet rapid market changes that require shorter release cycles without sacrificing quality, stability, and security.
The practices help you to release in a fast, consistent, and reliable way by using highly productive tools, automating mundane and manual steps, and iterating in small increments through CI/CD and DevOps practices.
What is DevOps?
A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers. DevOps enables formerly siloed roles — development, IT operations, quality engineering, and security — to coordinate and collaborate to produce better, more reliable products.
By adopting a DevOps culture along with DevOps practices and tools, teams gain the ability to better respond to customer needs, increase confidence in the applications they build, and achieve development goals faster.
DevOps influences the application lifecycle throughout its plan, develop, deliver, and operate phases.
Plan
In the plan phase, DevOps teams ideate, define, and describe features and capabilities of the applications and systems they are building. Creating backlogs, tracking bugs, managing agile software development with Scrum, using Kanban boards, and visualizing progress with dashboards are some of the ways DevOps teams plan with agility and visibility.

Develop
The develop phase includes all aspects of coding — writing, testing, reviewing, and the integration of code by team members — as well as building that code into build artifacts that can be deployed into various environments. To develop rapidly, they use highly productive tools, automate mundane and manual steps, and iterate in small increments through automated testing and continuous integration.
Deliver
Delivery is the process of deploying applications into production environments and deploying and configuring the fully governed foundational infrastructure that makes up those environments.
In the deliver phase, teams define a release management process with clear manual approval stages. They also set automated gates that move applications between stages until they’re made available to customers.
Operate
The operate phase involves maintaining, monitoring, and troubleshooting applications in production environments. In adopting DevOps practices, teams work to ensure system reliability, high availability, and aim for zero downtime while reinforcing security and governance.
What is CI/CD?
Under continuous integration, the develop phase — building and testing code — is fully automated. Each time you commit code, changes are validated and merged to the master branch, and the code is packaged in a build artifact.
Under continuous delivery, anytime a new build artifact is available, the artifact is automatically placed in the desired environment and deployed. With continuous deployment, you automate the entire process from code commit to production.
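To make that concrete, a minimal continuous-integration pipeline might look like the following Azure Pipelines sketch (the Node version, script names, and artifact path are assumptions about a hypothetical project, not part of the article):

```yaml
# Runs on every commit to master: build and test are fully automated,
# and a build artifact is published for the delivery stages to pick up.
trigger:
  - master

pool:
  vmImage: "ubuntu-latest"

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: "16.x"
  - script: npm ci
    displayName: Install dependencies
  - script: npm run build
    displayName: Build
  - script: npm test
    displayName: Run tests
  - publish: $(System.DefaultWorkingDirectory)/dist
    artifact: drop
```

A continuous-delivery stage would then deploy the published drop artifact to each environment, with approvals or automated gates between stages.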
Operations
The sixth building block is operations, with a focus on maximizing automation.
How do you maximize automation in your modern application development approach?
With an increasingly complex environment to manage, maximizing the use of automation helps you improve operational efficiency, identify issues before they affect customer experiences, and quickly mitigate issues when they occur.
Fully managed platforms provide automated logging, scaling, and high availability. Rich telemetry, actionable alerting, and full visibility into applications and the underlying system are key to a modern application development approach.
Automating regular checkups and applying best practices like infrastructure as code and site reliability engineering promotes resiliency and helps you respond to incidents with minimal downtime and data loss.
Security
The seventh building block is multilayered security.
Why do I need multi-layered security in my modern applications?
Modern applications require multilayered security across code, delivery pipelines, app runtimes, and databases. Start by providing developers secure dev boxes with well-governed identity. As part of the DevOps lifecycle, use automated tools to examine dependencies in code repositories and scan for vulnerabilities as you deploy apps to the target environment.
Enterprise-grade secrets and policy management encrypt the applications and give the operations team centralized policy enforcement. With fully managed compute and database services, security control is built in and threat protection is executed in real time.
Conclusion
While modern application development can seem daunting, it is an approach that can be done iteratively, and each step can yield large benefits for your team.
Access webinars, analyst reports, tutorials, and more on the Modern application development on Azure page.
by Contributed | Apr 6, 2021 | Technology
This article is contributed. See the original author and article here.
Before joining Microsoft and falling in love with the technology, platform, and possibilities that the Azure public cloud provides, I was (and for a 2-digit number of years, gosh) an expert in the contact center and telco market: the technology that companies use to provide customer service for their products. Lately, my focus has been mostly on customer experience, as the power has clearly shifted over the years from the technology and technologists to the end users, whose perception of experience contributes to the success (or failure) of a company.

As a part of my customer experience work, I researched how to apply the idea of a Net Promoter Score. I was really drawn to the technique of using a single question to define if something was going to succeed or fail. But the more I saw companies using NPS, the more I realized this approach omitted an entire important set of choices a company can make: green choices.
While a customer is navigating your virtual space, such as a website, mobile app or even your physical store, there is nothing that communicates a green option for the product or the technology that is used to bring that product to the end user.
When I think about my e-commerce experiences, which started back in the year 2000, the closest example to a green option was the energy consumption label on some appliances. I searched the web and found the EU energy consumption labels only apply to the following categories: appliances (dishwashers, refrigerators, etc.), air conditioners, light bulbs, cars, televisions, houses, and tires. When buying a large appliance, this label helped me, as a consumer, to pick the one that was more energy efficient. While choosing an appliance that consumes less energy could be framed as a “greener” choice, in most cases, it’s framed more like a “cost savings” choice.
The point is we need to start doing something at all levels, and little changes can lead to a great impact if we concentrate our efforts in the same direction. Consumers make many small choices every day on the products they buy. In many cases, they have little or no knowledge about the carbon impact those choices have.

But what if the end-user could be more knowledgeable about the carbon impact of their purchases? Thinking of my own experiences as a user, I’d like to see in the foreseeable future something like:
- An energy consumption label (with street-light color code and A to F rating) on computers and devices.
- How recyclable a device is.
- Sustainable software: knowing that the software used in the device was created with sustainable practices and will give the user choices about energy consumption.
- Active and real-time information from energy suppliers about the carbon impact of the energy consumed in my house. A device might be labeled as low carbon impact, but knowing from the energy supplier when the greenest moment is to charge it (i.e. the time of day when energy is produced from alternative sources) is something that has to happen at the user level and is highly dependent on location.
Omitting carbon impact information from a product undervalues a customer’s desire to reduce their carbon footprint through their purchasing choices. Adding this labeling opens up a lot of potential for both consumers and companies to make more sustainable choices. For companies, this could even mean leveraging “green loyalty”, a marketing technique that can help people feel more active, as consumers, in their choices on sustainability.
By Sandra Pallier
Today, a customer has the ability to make some green choices, such as conserving water, recycling, using reusable items and shopping bags, etc. Adding labeling around carbon impact would give customers significantly more choice. Despite the upfront challenges in providing this information, carbon impact labeling would allow products with a lower carbon impact to differentiate themselves. This could produce better products as well as a reduction in carbon emissions, and an overall education of technology users to search for the greener option.