by Scott Muniz | Jul 15, 2020 | Uncategorized
This article is contributed. See the original author and article here.
The Microsoft Biz Apps for Women community continues to grow in Asia Pacific – pandemic or not.
Originally launched two years ago in Australia’s largest city, The Sydney Microsoft Business Applications Women Meetup has since grown to more than 300 members across multiple locations. Inspired by these efforts, The ASEAN Microsoft BizApps UG Women in Tech event launched in June to further support women in the region.
BizApps MVP and Sydney group organizer Olena Grischenko said she wanted to create a safe place for women to grow and openly discuss ideas. “I wanted to see more women actively engaged in the community and started with myself. I wanted to see more women being recognised for the work they do and simply more women connected and visible in the space,” Olena said.
The purpose of the community is to empower, promote, and support women involved with Microsoft Business Applications to speak at conferences and other events. They also run study groups to help people to prepare for certifications.
The group – whose committee includes Linda Do (Microsoft Product Marketing Manager), Abby (Mi) Kong (Dynamics 365 Applications Specialist), Gill Walker (Founder and Managing Director at Opsis), Karen Scott Davie (Microsoft MTC and Senior Consultant), and Katherine Woods (Dynamics 365 Consultant) – has since expanded to Melbourne and Auckland, New Zealand.

Further, the group was recently one of only four community groups invited to set up a booth at Microsoft Ignite in February.
Usually, the group can be found on the first Thursday of every month at the Microsoft Reactor in Sydney. The monthly meetup, however, has recently been using Office 365 and Microsoft Teams to stay connected during the coronavirus pandemic. Recent virtual topics covered by the group include “The Seven Deadly Sins of Dynamics 365 Training” and “Tactics to Surviving Corona Disruptions and Staying Connected.”
“Although we miss our live events, I have to admit there are some good things which happened to us, too,” Olena said. “The most important thing is that we got closer to our New Zealand peers. We share the audience and actively interact with each other, especially on the leadership level.”
Support for Asia Pacific women in BizApps grew even further thanks to the efforts of another group in Singapore. Manonmani VS, Geetha Chockalingam, and Jeevarajan Kumar – in conjunction with Panjaporn Vittayalerdpun from Microsoft and other ASEAN BizApps UG leads – organized the ASEAN Microsoft BizApps UG Women in Tech event on June 13 to empower all women in the ecosystem to use the platform effectively for a better tomorrow.
More than 180 people participated in the Women in Tech event via Microsoft Teams, learning practical information about Power Platform and Dynamics 365, hearing career stories from women in leadership, and exploring use cases.

Co-organizer and MVP Jeevarajan said the team wanted to inspire and encourage all women interested in BizApps to be “part of us, ride, and rise with us.”
“There has been much less participation from women in all past community meetups. This meetup had more new women participants and all women speakers highlighted their journey – which inspired not only women but also men to know more about the platform and other Microsoft initiatives around this platform,” Jeevarajan said.
Eleven women speakers, including four MVPs and four Microsoft employees, spoke on a variety of topics with regional relevance. For example, New Zealand MVP Elaiza Benitez presented “Tackling an NZ use case with AI Builder Forms Processing”. Australian MVP Ee Lane Yu spoke about building trivia using “PVA Bot, Adaptive Cards and Teams”, Indian MVP Roohi Shaikh delved into moving Power Apps business logic to the cloud for better results, and Sri Lankan Microsoft Azure MVP Hansamali Gamage demonstrated how to create a payment reminder using Power Apps, Excel, and Azure Functions.
Hansamali said she was excited to take part in an event which focused on inspiring women to improve their day-to-day work. “Since it’s all about women speakers, I was very thrilled to be part of it,” she said.
Check out the Microsoft User Group Singapore YouTube channel to see presentations from the day. For more information on the Sydney Microsoft Business Applications Women community, visit their Meetup page.
by Scott Muniz | Jul 15, 2020 | Azure, Microsoft, Technology, Uncategorized
Initial Update: Wednesday, 15 July 2020 19:49 UTC
We are aware of issues within Application Insights and are actively investigating. Globally, up to 17% of customers may not be able to access the Live Metrics stream in the Azure portal.
- Work Around: None
- Next Update: Before 07/15 22:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sindhu
by Scott Muniz | Jul 15, 2020 | Uncategorized

Grab your coffee or tea and join us for a half-day of conversations with developers from the community on pressing topics for frontend developers, such as building inclusive and accessible web applications, static sites, serverless, and much more.
This event will be packed with great content on static web apps and serverless applications, a discussion panel on working remotely, and finally a workshop on a few of the frameworks offered in Azure Static Web Apps! See the full event agenda at https://aka.ms/createfrontend.
We will have some fun stuff for kids, too! Don’t miss this FREE event!
by Scott Muniz | Jul 15, 2020 | Uncategorized
Microsoft Rocket, an open-source project from Microsoft Research, provides cascaded video pipelines that, combined with Live Video Analytics from Azure Media Services, make it easy and affordable for developers to build video analytics applications into their IoT solutions. Unprecedented advances in computer vision and machine learning have opened opportunities for video analytics applications of widespread interest to society, science, and business. While computer vision models have become more accurate and capable, they have also become resource-hungry and expensive for 24/7 analysis of video. As a result, live video analytics across multiple cameras also means a large computational footprint on premises, built with a good amount of expensive edge compute hardware (CPU, GPU, etc.).
Total cost of ownership (TCO) for video analytics is an important consideration and pain point for our customers. With that in mind, we integrated Live Video Analytics from Azure Media Services and Microsoft Rocket (from Microsoft Research) to enable an order-of-magnitude improvement in throughput per edge core (frame per second analyzed per CPU/GPU core), while maintaining the accuracy of the video analytics insights.
In a previous blog, we introduced Azure Live Video Analytics (LVA), a state-of-the-art platform to capture, record, and analyze live videos and publish the results (video and/or video analytics) to Azure services (in the cloud and/or on the edge). With Live Video Analytics’ flexible live video workflows, developers are now empowered to analyze video with a specialized AI model of their choice, and build truly hybrid (cloud + edge) applications that can analyze live video in the customer’s environment and combine video analytics on their camera streams with data from other IoT sensors and/or business data to build enterprise-grade solutions.
What is Microsoft Rocket?
Microsoft Rocket is a video analytics system that makes it easy and affordable for anyone with a video stream to obtain intelligent and actionable outputs from video analytics. Microsoft Rocket provides cascaded video pipelines for efficiently processing live video streams. In the cascaded video pipeline, a decoded frame is first analyzed with a relatively inexpensive light Deep Neural Network (DNN) like Tiny YOLO. A heavy DNN such as YOLOv3 is invoked only when the light DNN is not sufficiently confident, thus leading to efficient GPU usage. You can plug in any ONNX, TensorFlow, or Darknet DNN model in Microsoft Rocket. You can also augment the cascaded pipeline with a simpler CPU-based motion filter based on OpenCV background subtraction, as shown in the figure below.

In the cascaded video analytics pipeline in the above figure, decoded video frames are first filtered using background-subtraction-based motion detection focused on a line of interest in the frame, before even the low-cost light DNN detector is called. Frames requiring further processing are passed through a heavy DNN detector.
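The cascade logic described above can be sketched as follows. This is an illustrative outline of the control flow, not Rocket’s actual API: the motion filter and the two detectors are stubbed out, and the confidence threshold is an assumed value chosen for the example.

```python
LIGHT_CONF_THRESHOLD = 0.8  # assumed threshold; the real pipeline exposes similar tuning knobs

def has_motion(frame):
    # Stand-in for OpenCV background subtraction applied to the line of interest.
    return frame.get("motion", False)

def light_dnn(frame):
    # Stand-in for a light detector such as Tiny YOLO; returns (label, confidence).
    return frame.get("light", (None, 0.0))

def heavy_dnn(frame):
    # Stand-in for a heavy detector such as YOLOv3; returns (label, confidence).
    return frame.get("heavy", (None, 0.0))

def analyze(frame):
    """Return (label, stage), invoking only the cheapest sufficient stage."""
    if not has_motion(frame):
        return None, "bgs"              # filtered out by background subtraction
    label, conf = light_dnn(frame)
    if conf >= LIGHT_CONF_THRESHOLD:
        return label, "light"           # light DNN is confident enough; stop here
    label, _ = heavy_dnn(frame)         # otherwise fall back to the heavy DNN
    return label, "heavy"
```

Each frame exits the cascade at the earliest stage that can decide it, which is what keeps GPU usage low: the heavy DNN runs only on the small fraction of frames where the light DNN is uncertain.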
Rocket plus Live Video Analytics’ cascaded video analytics pipelines lead to considerable efficiency when processing video streams (as we quantify shortly). By filtering out frames with little relevant information and being judicious about invoking resource-intensive operations on the GPU, the pipeline not only keeps up with the frame rate of the incoming video stream, but also packs the analysis of more video analytics pipelines onto an edge compute box.
Up to 17X improvement in efficiency!
We benchmarked our performance against naïve video analytics pipelines that execute the YOLOv3 DNN on every frame of the video. As shown in the graph below, the Rocket plus Live Video Analytics cascaded pipeline is up to nearly 17X more efficient, raising the video analytics pipeline’s processing rate from 10 frames/second with the YOLOv3 DNN to over 200 frames/second. Benchmark results across the NVIDIA T4 (available in Azure Stack Edge), K80, and Tesla P100 GPUs show that the efficiency gains hold across different GPU classes. Further, by carefully tuning simple knobs for downsizing and sampling frames, the video analytics rate goes up to nearly 700 frames/second in our benchmarks with little loss in accuracy (as shown in the tables below).
As a result of these efficiency improvements, an edge compute box that can process only three video streams in parallel when the YOLOv3 object detection model is executed on every frame can instead process 17 video streams in parallel with Rocket plus Live Video Analytics’ cascaded pipelines. (This assumes a processing rate of at least 10 frames/second per stream for acceptable accuracy, a requirement we have seen in our prior engagements on traffic video analytics.)
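The two tuning knobs mentioned above – sampling (analyze only every Nth frame) and downsizing (shrink the resolution before inference) – can be sketched as below. The function names and numbers are illustrative, not Rocket’s actual API; frames are represented as simple (width, height) pairs.

```python
def sample_frames(frames, every_nth):
    """Keep only every Nth frame; the rest are skipped entirely."""
    return frames[::every_nth]

def downsize(frame, scale):
    """Shrink a (width, height) frame descriptor by `scale` before inference."""
    w, h = frame
    return (int(w * scale), int(h * scale))

# Ten 1080p frames: sampling every 2nd frame halves the inference load,
# and downsizing to half resolution quarters the pixels per inference.
frames = [(1920, 1080)] * 10
kept = [downsize(f, 0.5) for f in sample_frames(frames, 2)]
```

Both knobs trade a small amount of detection accuracy for large throughput gains, which is how the benchmarked rate climbs from ~200 to nearly 700 frames/second.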

Benchmark results of Live Video Analytics and Rocket containers, with improvement factors marked alongside each bar. Live Video Analytics plus Rocket achieves ~17X higher processing rates (measured in frames/second) compared to naïve solutions that run the YOLOv3 DNN on every frame of the video. We measure the performance of two cascaded modes of Rocket after background subtraction (BGS): “Full Cascade” (BGS → Light DNN → Heavy DNN) and “Heavy Cascade” (BGS → Heavy DNN). In fact, when counting cars on a freeway, BGS alone suffices, as the freeway lanes are unlikely to contain any other objects (and thus no confirmation from the DNNs that the objects are cars is needed). The choice of pipeline and parameters is video-specific.

Impact of varying the frame resolution and frame sampling on the processing rates. Note that for all the experiments in the table above, there was no drop in the accuracy of the video analytics.
Check out the code and take it for a spin!
You can now take advantage of the open-source reference app with extensive instructions to build and deploy Live Video Analytics and Rocket containers for highly efficient live video analytics applications on the edge (as shown in the architecture figure below). The project repository contains helpful Jupyter notebooks as well as a sample video file for easy testing of an object counter for various object categories crossing a line of interest in the video.

The above figure shows the architecture where the Azure Live Video Analytics container works in tandem with Microsoft Rocket’s container (with cascaded video analytics).
We also open-source Microsoft Rocket’s cascaded pipeline for video analytics. This should allow you to bring in your own DNN models, add efficiency improvements, and expand the video analytics capabilities beyond the (line-of-interest based) object counter included in the code.
Check out the source code of Microsoft Rocket, make your custom modifications for video analytics, and let us know your feedback! We have already tested it with many real-world use cases (e.g., for road safety and efficiency), and look forward to hearing about your deployment experiences!
Contributors: Ganesh Ananthanarayanan, Yuanchao Shu, Mustafa Kasap, Avi Kewalramani, Milan Gada, Victor Bahl
by Scott Muniz | Jul 15, 2020 | Azure, Microsoft, Technology, Uncategorized
Machine learning is a complex and task-heavy art, be it cleaning data, creating new models, deploying models, managing a model repository, or automating the entire CI/CD pipeline for machine learning.
As more companies embark on the journey of machine learning in everything they do, Microsoft Azure Machine Learning provides them with enterprise-grade capabilities to accelerate the machine learning lifecycle, empowering developers and data scientists of all skill levels to build, train, deploy, and manage models responsibly and at scale.
Azure Machine Learning studio is the web user interface of Azure Machine Learning, enabling data scientists to complete their end-to-end machine learning lifecycle, from cleaning and labeling data, to training and deploying models using cloud scalable compute, in a single enterprise-ready tool.
We are excited to announce that Azure Machine Learning studio is now generally available worldwide, supporting 18 languages and over 30 locales!
Azure Machine Learning studio caters to all skill levels, with authoring tools such as the automated machine learning user interface, which trains and deploys models at the click of a button, and the drag-and-drop designer for creating ML pipelines in a visual interface. All resources and assets created during the ML process – notebooks, models, pipelines – are available for team collaboration under one roof.
With this release, studio is even more comprehensive and easy to use:
Notebooks: IntelliSense, checkpoints, tabs, editing without compute, updated file operations, improved kernel reliability, and much more. Read more about Azure Machine Learning studio notebooks here.
Notebooks are integrated into Azure Machine Learning studio
Experimentation: Compare multiple runs graphically using an improved charting visualization experience, including chart smoothing, aggregated data display, and more.
Charts and metrics for tracking and analyzing runs
Security: Granular role-based access controls (RBAC) are now supported (in preview) out of the box for the most common actions in your studio workspace. Specific actions or controls are now hidden automatically based on the role assignments set up by your IT admins.
Compute: Compute instance includes many improvements in quality, reliability, availability, provisioning latency, and user experience.
New enterprise readiness and administrator capabilities:
- REST API and CLI support to help automate creation and management of compute instances
- ARM template support for provisioning compute instances, with a sample template documented and downloadable from the UI
- Ability for admins to create a compute instance on behalf of other users and assign it to them through ARM templates and the REST API. Data scientists do not need create/delete RBAC permissions; they can access Jupyter, JupyterLab, and RStudio, use the compute instance from integrated notebooks, and start/stop/restart it (in preview).
- Validation of user subnet NSG rules in the virtual network for more reliable compute instance creation
- Encryption in transit using TLS 1.2
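As a rough illustration of the REST automation mentioned above, the sketch below builds the kind of JSON body one might send in a PUT request to the Azure Machine Learning computes endpoint to create a compute instance, including assigning it to another user. The field names follow Azure REST conventions but should be verified against the official API reference before use; the VM size and user identifiers here are placeholder examples.

```python
def compute_instance_body(location, vm_size, assigned_user_oid=None, tenant_id=None):
    """Build a hypothetical request body for creating a compute instance.

    Field names are based on Azure REST conventions (assumption); check the
    official Azure Machine Learning REST API reference for the exact schema.
    """
    props = {"vmSize": vm_size}
    if assigned_user_oid and tenant_id:
        # Admins can create an instance on behalf of another user (preview).
        props["personalComputeInstanceSettings"] = {
            "assignedUser": {"objectId": assigned_user_oid, "tenantId": tenant_id}
        }
    return {
        "location": location,
        "properties": {"computeType": "ComputeInstance", "properties": props},
    }

# Example: a body for a DS3_v2 instance in East US, assigned to another user.
body = compute_instance_body("eastus", "STANDARD_DS3_V2",
                             assigned_user_oid="00000000-0000-0000-0000-000000000000",
                             tenant_id="11111111-1111-1111-1111-111111111111")
```

The same body could also be dropped into an ARM template’s resource properties, which is how the downloadable sample template approach works.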
More information available in the updated compute creation panel
Designer (preview): Improved performance and reliability, plus user experience updates and new features:
- New graph engine with new-style modules. Modules have colored side bars to show their status and can be resized.
- New asset library that splits Datasets, Modules, and Models into three tabs.
- Output settings that enable users to set module output datastores.
- New modules:
  - Computer Vision: supports image dataset preprocessing, training PyTorch models (ResNet/DenseNet), and scoring for image classification
  - Recommendation: supports the Wide & Deep recommender
New style to Modules in the drag-and-drop Designer
Data Labeling: Create, manage, and monitor labeling projects directly inside the studio web experience. Coordinate data, labels, and team members to efficiently manage labeling tasks. Supports image classification, either multi-label or multi-class, and object identification with bounding boxes.
The machine learning assisted labeling feature (Preview) lets you trigger automatic machine learning models to accelerate the labeling task.
Learn more about Azure Machine Learning data labeling in this blog post.
Data labeling updated style and machine learning assisted labeling
Fairlearn (preview): Azure Machine Learning manages the artifacts in your model training and deployment process.
With the new fairness capabilities, users can store and track their models’ fairness (disparity) insights in Azure Machine Learning studio and easily share their models’ fairness learnings among different stakeholders. Beyond logging fairness insights within Azure Machine Learning run history, users can load Fairlearn’s visualization dashboard in studio to interact with mitigated or original models’ predictions and fairness insights, select a suitable model, and register and deploy it for scoring.
Fairlearn visualization now available as preview in the studio
Automated machine learning user interface (preview): Automated machine learning automates the time-consuming, iterative tasks of machine learning model development, enabling people who are not data scientists to operationalize machine learning models.
The new Data Guardrails feature alerts users to potential data issues and helps fix them. The model details tab includes key information about the best model and the run. There is more control over which visualizations are generated: choose a metric of interest and visualizations pertaining to that metric will be displayed.
Data guardrails in automated machine learning will alert for issues in the data and even fix some of them
Continuing the journey together
Our customers inspire us to continue the journey, building experiences together that make machine learning easier to use, more productive, and fun!
Send us your feedback
Use the feedback panel to share your thoughts with us
Feedback panel to share your thoughts