Azure Maya Mystery Part II: The Mysterious Cenote


 

If you haven’t been following along as we explore a Maya pyramid in the middle of an uncharted jungle, it’s not too late to join us!

 

The following content contains spoilers about gameplay!

 

 

In part 1 of the Azure Maya Mystery, we discovered the meaning of many glyphs to gain access to the pyramid. Amongst others, we learned symbols with familiar imagery: ‘jaguar’, ‘macaw’, and ‘snake’,

 

[Animation: the ‘jaguar’, ‘macaw’, and ‘snake’ glyphs]

as well as more esoteric glyphs: ‘earth’, ‘wind’, and ‘tree’.

 

[Animation: the ‘earth’, ‘wind’, and ‘tree’ glyphs]

 

Art by Dana Moot II

Winding our way around the pyramid’s base and finally up its steps, we used our new knowledge to gain entrance. We also learned the meaning of one of the parts of the temple’s name by launching a chat app on an Azure Static Web App to talk to the goddess to whom the pyramid is dedicated.

 


 

But when the floor suddenly gives way underfoot, the intrepid explorer has to wait until Part 2 of the Maya Mystery to learn more!

 

That time is now.

 

In this part of the Mystery, you will continue your exploration of the deepest part of the pyramid: an underground cenote, a natural sinkhole that the ancient Maya sometimes used for sacrificial offerings. Your job? Dive in and gather the broken glyphs that careless prior explorers have tossed into the cenote, restoring them to their former place. In the process, you will learn more glyph meanings and the second part of the pyramid’s name.

 

While exploring, you will also launch a shopping experience to acquire the gear you need to successfully complete your mission. Learn more about the Node.js code you need to launch by visiting a new Microsoft Learn learning path all about the topic!

 

Are you prepared to continue your exploration? Haven’t joined us yet? It’s not too late! Complete Level 1, or skip right to Level 2.

Learning How to Transition Your SQL Server Skills to Azure SQL | Data Exposed


The Azure Data team at Microsoft recently launched four new ways for you to learn Azure SQL. They’ll show you how to translate your existing SQL Server expertise to Azure SQL, including Azure SQL Database and Azure SQL Managed Instance, through all-new content on YouTube, GitHub, and Microsoft Learn. After completing the video series, learning path, or workshop, you will have a foundational knowledge of what to use when, as well as how to configure, secure, monitor, and troubleshoot Azure SQL. In this episode, Bob Ward, Anna Hoffman, and Marisa Brasile share the details of their all-new content and how to get started.

 

Watch on Data Exposed

 

Resources:
YouTube: https://aka.ms/azuresql4beginners

GitHub: https://aka.ms/azuresqlworkshop

Microsoft Learn: https://aka.ms/azuresqlfundamentalsb

 

View/share our latest episodes on Channel 9 and YouTube!

Automated user provisioning from SAP SuccessFactors is now GA


Howdy folks,

 

Today, we’re announcing the general availability of user provisioning from SAP SuccessFactors to Azure AD. In addition, SAP and Microsoft have been working closely together to enhance existing integrations between Azure AD and SAP Cloud Identity Services of the SAP Cloud Platform, making it easier to manage and secure your SAP applications.

 

User provisioning from SAP SuccessFactors to Azure AD is now generally available

With the integration between Azure AD and SAP SuccessFactors, you can now automate user access to applications and resources so that a new hire can be up and running with full access to the necessary applications on day one. The integration also helps you reduce dependencies on the IT helpdesk for on-boarding and off-boarding tasks.

Thanks to your feedback on our public preview, we’ve added these new capabilities:

  • With enhanced attribute support, you can now include any SAP SuccessFactors Employee Central attributes associated with Person, Employment and Foundation objects in your provisioning rules and attribute mapping.
  • Using flexible mapping rules, you can now handle different HR scenarios such as worker conversion, rehire, concurrent assignment, and global assignment.
  • In addition to email, we now support writeback of phone numbers, username, and login method from Azure AD to SAP SuccessFactors.

 

We’d like to say a special thank you to the SAP SuccessFactors team, who helped us enhance the integration. Here’s what they had to say:

“Enabling end-to-end user lifecycle management is critical. The Azure AD and SAP SuccessFactors integration will help streamline HR and IT processes to help our joint customers save time, improve security, and enable employee productivity.” – Lara Albert, VP, Solution Marketing, SAP SuccessFactors

 

We’d like to also thank our preview customers and partners who provided great feedback on different aspects of this integration! Here’s what one of our system integrator partners, Olikka, had to say:

 

“Inbound provisioning from SAP SuccessFactors to Azure AD and on-premises Active Directory has helped us reduce the time customers need to spend on-boarding/off-boarding and adjusting access through the employee lifecycle. The integration that Microsoft has delivered means that we can avoid building complex custom connectors and managing on-premises infrastructure. This enables Olikka to get the customer up and running quickly while leveraging their existing Azure AD subscription.” – Chris Friday, Senior Project Consultant, Olikka

 


 

Using Azure AD to manage and secure your SAP applications

Our new provisioning integration between Azure AD and SAP Cloud Identity Services allows you to spend less time creating or managing accounts for individual SAP applications. With this integration, you can now use Azure AD to automatically provision user accounts for SAP Analytics Cloud. In the coming months, we will expand this support to additional SAP applications like SAP S/4HANA, SAP Fieldglass, and SAP Marketing Cloud.

 


 

We’re also enabling One-click SSO to simplify the configuration and setup of single sign-on with SAP Cloud Identity Services. One-click SSO allows you to quickly set up single sign-on between Azure AD and SAP Cloud Platform without needing to copy and paste values between different admin portals.

 

Learn more

User provisioning from SAP SuccessFactors to Azure AD requires an Azure AD P1 license; all other features referenced in this blog are available across all licensing tiers. To get started with these integrations, read our SAP identity integration documentation.

 

SAP Cloud Platform is also available on Azure regions around the globe to complement your SAP S/4HANA on Azure implementations, providing low latency and ease of integration. Learn more about SAP solutions on Azure at www.microsoft.com/sap and explore our SAP on Azure use cases to get started.

 

Together, our integrations with SAP allow you to manage and protect access across your entire SAP landscape. As always, we’d love to hear any feedback or suggestions you may have. Please let us know what you think in the comments below or on the Azure AD feedback forum.

Best regards,

Alex Simons (@Alex_A_Simons)

Corporate VP of Program Management

Microsoft Identity Division

 

Learn more about Microsoft identity.

Configuring Codespaces for .NET Core Development


Visual Studio Codespaces has been generally available since April 2020. In addition, GitHub Codespaces has been available as a private preview. The two are very similar in terms of usage, though there are differences between them, which this post discusses. Throughout this post, I’m going to focus on .NET Core application development.

 

Visual Studio Codespaces (VS CS) is an online IDE service running on a VM somewhere in Azure. Like Azure DevOps build agents, this VM is provisioned when a VS CS instance starts and destroyed after it is closed. With no other configuration, it is provisioned with default values, which is not enough for .NET Core application development. Therefore, we might have to add some configuration.

 

What if there were a pre-configured .NET Core development environment ready to go? This can be sorted out in two different ways: configuring the dev environment for each project or repository, or personalising the dev environment for each user. The former approach helps secure a common ground for the entire project, while the latter lets each individual in the team personalise their own environment. This post focuses on the former.

 

Configuring Development Environment

 

As a Docker container takes care of the dev environment, we should define a Dockerfile. There’s already a pre-configured one we could simply use, but let’s build our own opinionated one! There are roughly two parts – the Docker container and the extensions.

 

The sample environment can be found at this repository.

 

Create .devcontainer Directory

 

First of all, we need to create the .devcontainer directory within the repository. This directory contains Dockerfile, a bash script that the Docker container executes, and devcontainer.json that defines extensions.
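
The resulting directory layout looks like this:

.devcontainer/
├── devcontainer.json    # entry point; declares the extensions to install
├── Dockerfile           # defines the dev container image
└── setup.sh             # provisioning script executed while building the image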

 

Define Dockerfile

 

As there’s an official Docker image for .NET Core SDK, we just use it as a base image. Here’s the Dockerfile. The 3.1-bionic tag is for Ubuntu 18.04 LTS (line #1). If you want to use a different Linux distro, choose a different tag.

 

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic

WORKDIR /home/

COPY . .

RUN bash ./setup.sh

 

Now, let’s move on to setup.sh.

 

Configure setup.sh

 

In the setup.sh, we’re going to install several applications:

  1. Update the existing packages through the apt-get command, and install several new ones (line #1-9).
  2. Install nvm for ASP.NET Core application development, which uses node.js (line #12).
  3. Install the Docker CLI (line #15).
  4. The Docker image as of today includes PowerShell 7.0.2; if you want to install the latest version of PowerShell, run this part (line #18).
  5. Install Azure Functions Core Tools v3 (line #21-24).
  6. Enable local HTTPS connections (line #27).
  7. If you want to use zsh instead of bash, oh-my-zsh enhances the developer experience (line #30-34).

 

## Update and install some things we should probably have
apt-get update
apt-get install -y \
  curl \
  git \
  gnupg2 \
  jq \
  sudo \
  zsh

## Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

## Install Docker CLI
curl -fsSL https://get.docker.com | bash

## Update to the latest PowerShell
curl -sSL https://raw.githubusercontent.com/PowerShell/PowerShell/master/tools/install-powershell.sh | bash

## Install Azure Functions Core Tools v3
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y azure-functions-core-tools-3

## Enable local HTTPS
dotnet dev-certs https --trust

## Setup and install oh-my-zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
cp -R /root/.oh-my-zsh /home/$USERNAME
cp /root/.zshrc /home/$USERNAME
sed -i -e "s/\/root\/.oh-my-zsh/\/home\/$USERNAME\/.oh-my-zsh/g" /home/$USERNAME/.zshrc
chown -R $USER_UID:$USER_GID /home/$USERNAME/.oh-my-zsh /home/$USERNAME/.zshrc

 

We now have both the Dockerfile and setup.sh for the container setup. It’s time to install the extensions VS CS will use for .NET Core application development.

 

List of Extensions

 

devcontainer.json is the entry point when a new VS CS instance fires up. The Dockerfile we defined above is linked from this devcontainer.json so that the development environment is provisioned. Through this devcontainer.json file, we can also install all the necessary extensions. I’m going to install the following extensions for .NET Core app development.
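
As a minimal sketch of that wiring, the dockerFile property points at the Dockerfile we defined; the name value is arbitrary, and the settings block is only an optional illustration:

{
  "name": ".NET Core 3.1",
  "dockerFile": "Dockerfile",
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  }
}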

 

Define those extensions in the devcontainer.json like:

 

{
  ...
  "extensions": [
    "docsmsft.docs-authoring-pack",
    "donjayamanne.githistory",
    "eamodio.gitlens",
    "editorconfig.editorconfig",
    "github.vscode-pull-request-github",
    "jongrant.csharpsortusings",
    "k--kato.docomment",
    "kreativ-software.csharpextensions",
    "mhutchie.git-graph",
    "ms-dotnettools.csharp",
    "ms-vscode.powershell",
    "ms-vscode.vscode-node-azure-pack",
    "ms-vsliveshare.vsliveshare",
    "visualstudioexptteam.vscodeintellicode",
    "yzhang.markdown-all-in-one"
  ],
  ...
}

 

All the environment setup is done!

 

Run GitHub Codespaces

 

Push this .devcontainer directory back to the repository and run GitHub Codespaces. If you’ve already joined the GitHub Codespaces Early Access Program, you’ll see a menu like the one below:

 

 

Click the menu, and you’ll be able to access all files within GitHub Codespaces.

 

 

In addition to that, we have both a PowerShell console and a zsh console. Run the following command to create a sample .NET Core console app and start writing code!

 

dotnet new console -n Sample.ConsoleApp

 

The Program.cs should look like this!
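
For reference, the .NET Core 3.1 console template generates a simple Hello World, with the namespace taken from the -n value we passed:

using System;

namespace Sample.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}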

 

 

Run Visual Studio Codespaces

 

This time, run the same repository on VS CS. First of all, visit https://online.visualstudio.com and log in.

 

You MUST have an active Azure subscription to run a VS CS instance. If you don’t, create a free account and subscription through this Free Azure account page.

 

After logging in, you’ll need to create a billing plan unless you already have one. VS CS uses a consumption-based model, which means you only pay for what you use. If you don’t need the plan any longer, delete it to avoid further charges.

 

 

You will be asked to create a new instance if you don’t have one yet.

 

 

Enter the requested information and create a new VS CS instance. The screenshot below links a GitHub repository, which dedicates the instance to that repository. If you don’t link a GitHub repository, the instance can be used with any repository.

 

 

The newly created VS CS instance looks like the following. Although it uses the same repository as GitHub Codespaces, nothing has been pushed from GitHub Codespaces, so VS CS doesn’t see those changes.

 


So far, we’ve walked through how to set up a dev environment in VS CS for .NET Core application development. As this becomes the starting point of a team project, it will significantly reduce the “It works on my machine” issue.


This post was originally published on DevKimchi.

Connecting the Wio Terminal to Azure IoT


It’s been a few months now since I started playing with the Wio Terminal from Seeed Studio. It is a pretty complete device that can be used to power a wide range of IoT solutions—just look at its specifications!

 

[Image: Wio Terminal hardware overview]
  • Cortex-M4F running at 120MHz (can be overclocked to 200MHz) from Microchip (ATSAMD51P19);
  • 192 KB of RAM, 4 MB of Flash;
  • Wireless connectivity: Wi-Fi 2.4 & 5 GHz (802.11 a/b/g/n), BLE, BLE 5.0, powered by a Realtek RTL8720DN module;
  • 2.4″ LCD screen, 320×240 pixels;
  • microSD card reader;
  • Built-in sensors and actuators: light sensor, LIS3DH accelerometer, infrared emitter, microphone, buzzer, 5-way switch;
  • Expansion ports: 2x Grove ports, 1x Raspberry Pi-compatible 40-pin header.

 


Wireless connectivity, extensibility, processing power… on paper, the Wio Terminal must be the ideal platform for IoT development, right? Well, ironically, one thing it doesn’t do out of the box is actually connect to an IoT cloud platform!

 

You will have guessed it by now… In this blog post, you’ll learn how to connect your Wio Terminal to Azure IoT. More importantly, you will learn about the steps I followed, giving you all the information you need to port the Azure IoT Embedded C libraries to your own IoT device.

Connecting your Wio Terminal to Azure IoT

 

I have put together a sample application that should get you started in no time.

 

You will need a Wio Terminal, of course, an Azure IoT Hub instance, and a working Wi-Fi connection. The Wio Terminal will need to be connected to your computer over USB—kudos to Seeed Studio for providing a USB-C port, by the way!—so it can be programmed.

 

Here are the steps you should follow to get your Wio Terminal connected to Azure IoT Hub:

 

  1. If you don’t have an Azure subscription, create one for free before you begin.
  2. Create an IoT Hub and register a new device (i.e. your Wio Terminal). Using the Azure portal is probably the most beginner-friendly method, but you can also use the Azure CLI or the VS Code extension. The sample uses symmetric keys for authentication.
  3. Clone and open the sample repository in VS Code, making sure you have the PlatformIO extension installed.
  4. Update the application settings (include/config.h) file with your Wi-Fi, IoT Hub URL, and device credentials (see the sketch after this list).
  5. Flash your Wio Terminal. Use the command palette (Windows/Linux: Ctrl+Shift+P / macOS: ⇧⌘P) to execute the PlatformIO: Upload command. The operation will probably take a while to complete as the Wio Terminal toolchain and the dependencies of the sample application are downloaded, and the code is compiled and uploaded to the device.
  6. Once the code has been uploaded successfully, your Wio Terminal LCD should turn on and start logging connection traces.
    You can also open the PlatformIO serial monitor to check the logs of the application (PlatformIO: Serial Monitor command).
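
As a rough sketch of what step 4 involves, the settings boil down to a handful of constants. Note that these macro names are hypothetical and for illustration only; check the sample repository’s include/config.h for the actual definitions:

// Hypothetical settings sketch -- the sample repository's include/config.h
// defines the actual names.
#define WIFI_SSID       "my-network"                // Wi-Fi network name
#define WIFI_PASSWORD   "my-password"               // Wi-Fi password
#define IOTHUB_FQDN     "my-hub.azure-devices.net"  // IoT Hub URL
#define DEVICE_ID       "wioterminal"               // device registered in step 2
#define DEVICE_KEY      "bXlEZXZpY2VLZXk="          // device symmetric key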

 

> Executing task: C:\Users\kartben\.platformio\penv\Scripts\platformio.exe device monitor <
--- Available filters and text transformations: colorize, debug, default, direct, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at http://bit.ly/pio-monitor-filters
--- Miniterm on COM4  9600,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
Connecting to SSID: WiFi-Benjamin5G
......
> SUCCESS.
Connecting to Azure IoT Hub...
> SUCCESS.

 

Your device should now be sending its accelerometer sensor values to Azure IoT Hub every 2 seconds, and be ready to receive commands remotely sent to ring its buzzer.

 

Please refer to the application’s README to learn how to test that the sample is working properly using Azure IoT Explorer.

 

[Animations: Azure IoT Explorer – Monitoring Telemetry / Azure IoT Explorer – Sending Commands]


It is important to mention that this sample application is compatible with IoT Plug and Play. This means that there is a clear, documented contract describing the kinds of messages the Wio Terminal may send (telemetry) or receive (commands).

 

You can see the model of this contract below—it is rather straightforward. It’s been authored using the dedicated VS Code extension for DTDL, the Digital Twin Description Language.

 

 

{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:seeed:wioterminal;1",
  "@type": "Interface",
  "displayName": "Seeed Studio Wio Terminal",
  "contents": [
    {
      "@type": [
        "Telemetry",
        "Acceleration"
      ],
      "unit": "gForce",
      "name": "imu",
      "schema": {
        "@type": "Object",
        "fields": [
          {
            "name": "x",
            "displayName": "IMU X",
            "schema": "double"
          },
          {
            "name": "y",
            "displayName": "IMU Y",
            "schema": "double"
          },
          {
            "name": "z",
            "displayName": "IMU Z",
            "schema": "double"
          }
        ]
      }
    },
    {
      "@type": "Command",
      "name": "ringBuzzer",
      "displayName": "Ring buzzer",
      "description": "Rings the Wio Terminal's built-in buzzer",
      "request": {
        "name": "duration",
        "displayName": "Duration",
        "description": "Number of milliseconds to ring the buzzer for.",
        "schema": "integer"
      }
    }
  ]
}
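
As a concrete example, a telemetry message conforming to this model could look like the following (the values are made up):

{
  "imu": {
    "x": 0.02,
    "y": -0.98,
    "z": 0.10
  }
}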

 

 

When connecting to IoT Hub, the Wio Terminal sample application “introduces itself” as conforming to the dtmi:seeed:wioterminal;1 model.

 

This allows you (or anyone who will be creating IoT applications integrating with your device, really) to be sure there won’t be any impedance mismatch between the way your device talks and expects to be talked to, and what your IoT application does.

 

A great example of why automagically matching a device to its corresponding DTDL model is useful is the way we used Azure IoT Explorer earlier. Since the device “introduced itself” when connecting to IoT Hub, and since Azure IoT Explorer has a local copy of the model, it automatically showed us a dedicated UI for sending the ringBuzzer command!

 

[Screenshot: Azure IoT Explorer command UI for ringBuzzer]

 

Azure SDK for Embedded C

 

In the past, adding support for Azure IoT to an IoT device using the C programming language required either using the rather monolithic Azure IoT C SDK (e.g. it is not trivial to bring your own TCP/IP or TLS stack), or implementing everything from scratch using the public documentation of Azure IoT’s MQTT front-end for devices.

 

Enter the Azure SDK for Embedded C!

 

The Azure SDK for Embedded C is designed to allow small embedded (IoT) devices to communicate with Azure services.

 

The Azure SDK team has recently started to put together a C SDK that specifically targets embedded and constrained devices. It provides generic, platform-independent infrastructure for manipulating buffers, logging, JSON serialization/deserialization, and more. On top of this lightweight infrastructure, client libraries for e.g. Azure Storage or Azure IoT have been developed.

 

You can read more on the Azure IoT client library here, but in a nutshell, here’s what I had to implement in order to use it on the Wio Terminal:

 

  • As the sample uses symmetric keys to authenticate, we need to be able to generate a security token.
    • The token needs to have an expiration date (typically set to a few hours in the future), so we need to know the current date and time. We use an NTP library to get the current time from a time server.
    • The token includes an HMAC-SHA256 signature string that needs to be base64-encoded. Luckily, the recommended WiFi+TLS stack for the Wio Terminal already includes Mbed TLS, making it relatively simple to compute HMAC signatures (e.g. mbedtls_md_hmac_starts) and perform base64 encoding (e.g. mbedtls_base64_encode); see the sketch after this list.
  • The Azure IoT client library helps with crafting MQTT topics that follow the Azure IoT conventions. However, you still need to provide your own MQTT implementation. In fact, this is a major difference from the historical Azure IoT C SDK, in which the MQTT implementation was baked in. Since it is widely supported and just worked out of the box, the sample application uses the PubSubClient MQTT library from Nick O’Leary.
  • And of course, one must implement their own application logic. In the context of the sample application, this meant using the Wio Terminal’s IMU driver to get acceleration data every 2 seconds, and hooking up the ringBuzzer command to actual embedded code that… rings the buzzer.
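
To give an idea of what the token-signing step involves, here is a minimal sketch built on the Mbed TLS primitives mentioned above; the helper name and error handling are mine, not the sample’s:

#include "mbedtls/md.h"
#include "mbedtls/base64.h"

// Hypothetical helper: HMAC-SHA256-sign `payload` with `key`, then
// base64-encode the raw 32-byte signature into `out`.
// Returns the number of bytes written to `out`, or 0 on error.
static size_t sign_and_encode(const unsigned char *key, size_t key_len,
                              const unsigned char *payload, size_t payload_len,
                              unsigned char *out, size_t out_size)
{
    unsigned char hmac[32]; // SHA-256 digest size
    mbedtls_md_context_t ctx;

    mbedtls_md_init(&ctx);
    if (mbedtls_md_setup(&ctx, mbedtls_md_info_from_type(MBEDTLS_MD_SHA256), 1) != 0) {
        mbedtls_md_free(&ctx);
        return 0;
    }
    mbedtls_md_hmac_starts(&ctx, key, key_len);
    mbedtls_md_hmac_update(&ctx, payload, payload_len);
    mbedtls_md_hmac_finish(&ctx, hmac);
    mbedtls_md_free(&ctx);

    size_t olen = 0;
    if (mbedtls_base64_encode(out, out_size, &olen, hmac, sizeof(hmac)) != 0) {
        return 0;
    }
    return olen;
}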

Conclusion

I hope you found this post useful! I will soon publish additional articles that go beyond the simple “Hey, my Wio Terminal can send accelerometer data to the cloud!” to more advanced use cases such as remote firmware upgrade. Stay tuned! :)

 

Let me know in the comments what you’ve done (or will be doing!) with your Wio Terminal, and also don’t hesitate to ask any burning question you may have. Of course, you can also always find me on Twitter.