Use Azure IR to Tune ADF and Synapse Data Flows

Azure Integration Runtimes are ADF and Synapse entities that define the amount of compute you wish to apply to your data flows, as well as other resources. Here are some tips on how to tune data flows with proper Azure IR settings.
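For reference, these data flow compute settings live on the Azure IR definition itself. The Python sketch below shows the rough shape of a Managed Azure IR payload whose dataFlowProperties set the compute type and core count; the property names and values are recalled from the ADF ARM/REST schema rather than taken from this article, so verify them against the current reference before deploying.

```python
import json

# Rough shape of a Managed (Azure) Integration Runtime definition whose
# dataFlowProperties control the Spark compute used by data flows.
# Property names ("computeType", "coreCount", "timeToLive") are assumptions
# recalled from the ADF ARM/REST schema; verify them before deploying.
azure_ir = {
    "name": "DataFlowsMemoryOptimizedIR",  # hypothetical IR name
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "MemoryOptimized",  # or "General", "ComputeOptimized"
                    "coreCount": 80,                   # total cores (driver + workers)
                    "timeToLive": 10                   # minutes to keep the cluster warm
                },
            }
        }
    },
}

print(json.dumps(azure_ir, indent=2))
```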

 

In all 3 of these examples, I tested my data flows with a demo set of mocked-up loans data in a CSV file located in my Blob Store container. There were 887k rows with 74 columns. In each case, I read from the file and duplicated the data into 2 separate streams: the first stream aggregated a row count, and the second masked PII data with a one-way hash. I then loaded the data into a destination blob store folder sink.

 

perf4.png

 

Each of the executions below was run from an ADF pipeline using the “Debug > Use Activity Runtime” setting so that I could manually adjust the number of cores and the compute type for each run. This means that I am not using the warmed-up debug cluster session. The average start-up time for the Databricks cluster was 4.5 minutes. I also left all optimization settings in the transformations at their defaults (use current partitioning), which allowed data flows to rely fully on Spark’s best guess for partitioning my file-based data.

Compute Optimized

First, I ran the pipeline using the lowest-cost option, Compute Optimized. For very memory-intensive ETL pipelines, we do not generally recommend this category because it has the lowest RAM-to-core ratio for the underlying VMs. But it can be useful for cost savings and for pipelines that are not acting on very large data and do not require many joins or lookups. In this case, I chose 16 cores: 8 cores for the driver node and 8 cores for the worker nodes.

Results

  • Sink IO writing: 20s
  • Transformation time: 35s
  • Sink post-processing time: 40s
  • Data Flows used 8 Spark partitions, based on my 8 worker cores.

perfco1.png

 

General Purpose

Next, I tried the exact same pipeline using General Purpose with the small 8-core (4+4) option, which gives you 1 driver node and 1 worker node, each with 4 cores. This is the small default debug cluster provided with the default AutoResolve Azure Integration Runtime. General Purpose is a very good middle option for data flows, with a better RAM-to-CPU ratio than Compute Optimized. But I would highly recommend much higher core counts than I used in this test. I am only using the default 4+4 here to demonstrate that the default 8-core total is fine for small debugging runs, but not good for operationalized pipelines.

Results

  • Sink IO writing: 46s
  • Transformation time: 42s
  • Sink post-processing time: 45s
  • Data Flows partitioned the file data into 4 parts because, in this case, I scaled back to only 4 worker cores.

perfgp1.png

Memory Optimized

This is the most expensive option and has the highest RAM-to-CPU ratio, making it very good for large workloads that you’ve operationalized in triggered pipelines. I gave it 80 cores (64 for workers, 16 for the driver) and naturally had the best individual stage timings with this option. The Databricks cluster took the longest to start up in this configuration, and the larger number of partitions led to a slightly higher post-processing time as the additional partitions were coalesced. I ended up with 64 partitions, one for each worker core.

Results

  • Sink IO writing: 19s
  • Transformation time: 17s
  • Sink post-processing time: 40s

perfmo.png
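To recap the three runs, the short Python sketch below simply restates the numbers reported above so the relationship between worker cores and the default Spark partition count is easy to see; the driver/worker splits are the ones quoted in this article, not a general rule.

```python
# Results quoted in the three runs above (timings in seconds).
# With default/current partitioning, the partition count matched the worker cores.
runs = {
    "Compute Optimized, 16 cores (8 driver + 8 worker)":  {"sink_io": 20, "transform": 35, "post": 40, "partitions": 8},
    "General Purpose, 8 cores (4 driver + 4 worker)":     {"sink_io": 46, "transform": 42, "post": 45, "partitions": 4},
    "Memory Optimized, 80 cores (16 driver + 64 worker)": {"sink_io": 19, "transform": 17, "post": 40, "partitions": 64},
}

for name, r in runs.items():
    total = r["sink_io"] + r["transform"] + r["post"]
    print(f"{name}: {total}s across stages, {r['partitions']} Spark partitions")
```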

Azure portal September 2020 update

General
  • Increased auto-refresh rate options on dashboards
  • Improvements to the ARM template deployment experience in the Azure portal
  • Deployment of templates at the tenant, management group and subscription scopes

Database

  • Configuration of Always On availability groups for SQL Server virtual machines

Management + Governance > Resource Graph Explorer

  • New features in the Resource Graph Explorer

Storage > Storage account

  • Azure blob storage updates
    • Object replication for blobs now generally available
    • Blob versioning now generally available
    • Change feed support for blob storage now generally available
    • Soft delete for containers now in public preview
    • Point-in-time restore for block blobs now in public preview

Azure mobile app

  • Azure alerts visualization

Intune

  • Updates to Microsoft Intune

 

Let’s look at each of these updates in greater detail.

           

General

Increased Auto-Refresh Rate Options on Dashboards

We’ve updated the auto-refresh rate options to include 5, 10, and 15 minutes.  

 

  1. Go to the left navigation and choose “Dashboard”

    autorefresh 1.png

  2. Select the auto-refresh rate you’d like and click “Apply”. Now your dashboard will refresh at the interval you selected.

    autorefresh 2.png

  3. You can check when your dashboard was last refreshed at the top right of your dashboard.

    autorefresh 3.png

This Azure portal “how to” video will show you how to set up auto-refresh rates.    

 

General

Improvements to the ARM template deployment experience in the Azure portal

The custom template deployment experience in the Azure portal allows customers to deploy an ARM template. This experience has been updated with the following improvements:

  1. Easier navigation – There are now multiple tabs, which allow users to go back and select a different template without reloading the entire page.

    ARM template.png

  2. Review + create tab – The popular “Review + create” tab has been added to the custom deployment experience, allowing users to review parameters before starting the deployment.
  3. Multi-line JSON and comment support – The Edit template view now supports multi-line JSON and JSON with comments, in accordance with ARM’s capabilities (see the sketch below).
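As a small illustration of item 3, here is a hedged Python sketch that strips // line comments and /* */ block comments so a commented template can still be loaded by Python’s strict json parser in local tooling. The portal’s editor needs no such step, and the regexes are deliberately naive (they only handle whole-line // comments and do not guard against comment markers inside string values).

```python
import json
import re

def load_arm_template_with_comments(path: str) -> dict:
    """Load an ARM template that may contain comments.

    Sketch only: strips /* */ block comments and whole-line // comments so the
    standard json module can parse the file. Not a hardened JSONC parser.
    """
    with open(path, encoding="utf-8") as handle:
        text = handle.read()
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.DOTALL)      # block comments
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)   # whole-line comments
    return json.loads(text)

# Example with a hypothetical file name:
# template = load_arm_template_with_comments("azuredeploy.jsonc")
```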

 

General

Deployment of templates at the tenant, management group and subscription scopes  

The custom template deployment experience in the Azure portal now supports deploying templates at the tenant, management group, and subscription scopes. The Azure portal looks at the schema of the ARM template to infer the scope of the deployment. The correct deployment template schema for deployments at different scopes can be found here. The Deployment scope section of the Basics tab will automatically update to reflect the scope inferred from the deployment template.
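As a rough illustration of that inference, the Python sketch below keys off the well-known schema file names (tenantDeploymentTemplate, managementGroupDeploymentTemplate, subscriptionDeploymentTemplate); it mirrors the portal behavior described above only in spirit, so treat it as a sketch rather than the portal’s actual logic.

```python
def infer_deployment_scope(template: dict) -> str:
    """Guess the deployment scope from an ARM template's $schema value.

    The substring checks rely on the well-known schema file names; anything
    else falls back to resource group scope.
    """
    schema = template.get("$schema", "").lower()
    if "tenantdeploymenttemplate" in schema:
        return "tenant"
    if "managementgroupdeploymenttemplate" in schema:
        return "management group"
    if "subscriptiondeploymenttemplate" in schema:
        return "subscription"
    return "resource group"

# Example:
# infer_deployment_scope({"$schema": "https://schema.management.azure.com/"
#                         "schemas/2019-08-01/tenantDeploymentTemplate.json#"})
# -> "tenant"
```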

 

deployment dir.png

Deployment scope section of the Basics tab when the schema of the template indicates that it is a deployment at tenant scope.

 

deployment mgmt.png

Deployment scope section of the Basics tab when the schema of the template indicates that it is a deployment at management group scope.

 

deploy sub.png

Deployment scope section of the Basics tab when the schema of the template indicates that it is a deployment at subscription scope.

 

Steps:

  1. Navigate to the custom template deployment experience in the Azure portal.
  2. Choose the “Build your own template in the editor” option in the “Select a template” tab.
  3. Author an ARM deployment template that deploys at the tenant, management group or subscription scope and save.
  4. Complete the Basics tab by providing values for deployment parameters.
  5. Review the parameters and trigger a deployment in the Review + create tab.

 

Databases > SQL Virtual Machines

Configuration of Always On availability groups for SQL Server virtual machines

It is now possible to set up an Always On availability group (AG) for SQL Server on your Azure Virtual Machines (VMs) from the Azure portal. Also available through an ARM template and the Azure SQL VM CLI, this new experience simplifies the process of manually configuring availability groups into a few simple steps.

 

You can find this experience in any of your existing SQL virtual machines as long as they are registered with the SQL VM Resource Provider and are running SQL Server 2016 Enterprise or higher. To get started, the experience outlines the prerequisites that must be completed outside the portal, including joining the VMs to the same domain. After meeting the prerequisites, you can create and manage the Windows Server Failover Cluster, the availability groups, and the listener, as well as manage the set of VMs in the cluster and the AGs, all from the same portal experience.

 

Follow the detailed step-by-step documentation to set up availability groups. Here we will outline a few of the steps to access this capability:

  1. Sign in to the Azure portal.
  2. In the top search bar, search for “Azure SQL” and select “Azure SQL” under the list of Services.
  3. In the Azure SQL browse list, find and select the SQL VM (running SQL Server 2016 Enterprise or higher) that you would like to use as your primary VM.
  4. Select “High availability” in the left-side menu of the SQL VM resource.
  5. Make sure the VM is domain-joined, then select “New Windows Server Failover Cluster” at the top of the page.

    always 1.png

  6. Once you have created or onboarded to a Windows Server Failover Cluster, you can create your first availability group, where you will name the availability group, configure a listener, and select from a list of viable VMs to include in the AG.

    always 2.png

  7. From there you can add databases and other VMs to the availability group, create more availability groups, and manage the configurations of all related settings.

    always 3.png

Management + Governance > Resource Graph Explorer

New features in the Resource Graph Explorer

We’ve added three features to the Resource Graph Explorer:

  1. Keyboard shortcuts are now available, such as Shift+Enter to run a query. A list of the shortcuts can be found here.
  2. You can now see more than 1000 results by paging through the list.

    resource.png

  3. “Download as CSV” will now save up to 5000 results.

 

Storage > Storage account

Azure blob storage updates

Object Replication for Blobs now generally available

Object replication is a new capability for block blobs that lets you replicate your data from a blob container in one storage account to a container in another storage account anywhere in Azure. This feature unblocks a new set of common replication scenarios:

  • Minimize latency – Users consume the data locally rather than issuing cross-region read requests.
  • Increase efficiency – Compute clusters process the same set of objects locally in different regions.
  • Optimize data distribution – Data is consolidated in a single location for processing/analytics and resulting dashboards are then distributed to your offices worldwide.
  • Minimize cost – Tier down your data to archive upon replication completion using lifecycle management policies to minimize the cost.

Learn more

 

blob updates.png

View any existing replication rules on your account by navigating to the Object replication resource menu item in your storage account.  Here you can edit or delete existing rules or download them to share with others.  This is also the entry point to either create a new rule or upload an existing rule that’s been shared with you.

 

blob  updates 2.png

Create a replication rule by specifying the destination and source.

 

blob updates 3.png

Upload a replication rule that has been shared with you.

 

Blob Versioning now generally available

Blob Versioning for Azure Storage automatically maintains previous versions of an object and identifies them with version IDs. You can list both the current blob and previous versions using version ID timestamps. You can also access and restore previous versions as the most recent version of your data if they were erroneously modified or deleted by an application or other users. 

 

Together with our existing data protection features, Azure Blob storage provides the most complete set of user-configurable settings to protect your business-critical data.

Enabling versioning is free, but when versions are created, there will be costs associated with additional data storage being used. 
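For a programmatic angle, here is a minimal Python sketch using the azure-storage-blob SDK (version 12.4 or later) that lists a blob’s versions and restores an older one by copying it over the current blob. The connection string, container, blob name, and version ID are placeholders, and the “versionid” query parameter is recalled from the Blob REST API, so double-check the details against the SDK samples.

```python
from azure.storage.blob import BlobServiceClient

# Assumptions: versioning is enabled on the account, and azure-storage-blob >= 12.4.
CONN_STR = "<your-connection-string>"                  # placeholder
service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("data")       # hypothetical container name

# List the current blob and its previous versions.
for blob in container.list_blobs(name_starts_with="report.csv", include=["versions"]):
    print(blob.name, blob.version_id, getattr(blob, "is_current_version", None))

# Restore an older version by copying it over the current blob.
# ("versionid" is recalled as the REST query parameter for a specific version.)
current = container.get_blob_client("report.csv")
old_version_url = f"{current.url}?versionid=<version-id-to-restore>"  # placeholder id
current.start_copy_from_url(old_version_url)
```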

 

Learn more by viewing the documentation and the “how to” video.

blob vers.png

Enable versioning while creating your storage account. Versioning can also be enabled or disabled after creation, under the Data Protection resource menu item for your storage account.

 

blob vers 2.png

View and manage blob versions.

 

Change feed support for blob storage now generally available

Change feed provides a guaranteed, ordered, durable, read-only log of all the creation, modification, and deletion change events that occur to the blobs in your storage account.

Change feed is the ideal solution for bulk handling of large volumes of blob changes in your storage account, as opposed to periodically listing and manually comparing for changes. It enables cost-efficient recording and processing by providing programmatic access such that event-driven applications can simply consume the change feed log and process change events from the last checkpoint.
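If you just want to peek at the raw log, the short Python sketch below lists the contents of the special $blobchangefeed container (called out in the note further down) with the regular azure-storage-blob SDK; for real consumption, the dedicated change feed client library is the better fit, and the connection string here is a placeholder.

```python
from azure.storage.blob import BlobServiceClient

# Change feed records land in the special "$blobchangefeed" container.
# This only lists the log segments; it does not parse the Avro records.
CONN_STR = "<your-connection-string>"   # placeholder
service = BlobServiceClient.from_connection_string(CONN_STR)
changefeed = service.get_container_client("$blobchangefeed")

for blob in changefeed.list_blobs():
    print(blob.name)   # segment index and log files
```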

Learn more

change feed.png

Enable change feed while creating your storage account. Change feed can also be enabled or disabled after creation, under the Data Protection resource menu item for your storage account.

 

change feed 2.png

Change feed logs are written to the $blobchangefeed container in your storage account. See the change feed documentation to understand how the logs are organized.

 

 

Soft delete for containers now in public preview 

Soft delete for containers protects your data from being accidentally or erroneously modified or deleted. When container soft delete is enabled for a storage account, any deleted containers and their contents are retained in Azure Storage for the period that you specify. During the retention period, you can restore previously deleted containers and any blobs within them by calling the Undelete Container operation.
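A hedged sketch of what a restore could look like with the Python SDK follows; the include_deleted flag and undelete_container call are recalled from the preview-era azure-storage-blob package, so treat the exact names as assumptions and confirm them against the current SDK reference.

```python
from azure.storage.blob import BlobServiceClient

# Sketch only: the include_deleted flag and undelete_container call are recalled
# from the preview-era SDK; confirm the names against the current azure-storage-blob docs.
CONN_STR = "<your-connection-string>"   # placeholder
service = BlobServiceClient.from_connection_string(CONN_STR)

for container in service.list_containers(include_deleted=True):
    if getattr(container, "deleted", False):
        print("restoring", container.name)
        service.undelete_container(container.name, container.version)
```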

Learn more

softdelete.png

Enable soft delete for containers while creating your storage account. Soft delete can also be enabled or disabled after creation, under the Data Protection resource menu item for your storage account.

 

softdelete 2.png

View and restore deleted containers.

 

 

Point-in-time restore for block blobs now in public preview 

Point-in-time restore provides protection against accidental deletion or corruption by enabling you to restore block blob data to an earlier state. This feature is useful in scenarios where a user or application accidentally deletes data or where an application error corrupts data. Point-in-time restore also enables testing scenarios that require reverting a data set to a known state before running further tests.

 

Point-in-time restore requires the following features to be enabled (a quick check sketch follows the list):

  • Soft delete
  • Change feed
  • Blob versioning
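As a rough check of those three prerequisites, the sketch below reads the blob service properties with the azure-mgmt-storage management SDK. The property names (is_versioning_enabled, change_feed, delete_retention_policy) are recalled from the SDK models and should be verified, and the subscription, resource group, and account values are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholders: substitute your own identifiers.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT = "<storage-account>"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
props = client.blob_services.get_service_properties(RESOURCE_GROUP, ACCOUNT)

# Attribute names are recalled from the azure-mgmt-storage models; verify them.
print("soft delete enabled:", getattr(props.delete_retention_policy, "enabled", None))
print("change feed enabled:", getattr(props.change_feed, "enabled", None))
print("versioning enabled:", props.is_versioning_enabled)
```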

 

Point-in-time restore is currently supported in preview for general-purpose v2 storage accounts in the following regions:

  • Canada Central
  • Canada East
  • France Central

Learn more.

point.png

Enable point-in-time restore while creating your storage account. Point-in-time restore can also be enabled or disabled after creation, under the Data Protection resource menu item for your storage account.

 

point2.png

Kick off a point-in-time restoration and roll back containers to a specified time and date.

 

 

Azure mobile app

Azure alerts visualization

The Azure mobile app now has a chart visualization for Azure alerts in the Home view. You can now choose between rendering a list view of your fired Azure alerts or displaying them as a chart. The chart view arranges the alerts by severity so you can quickly check the alerts that are active on your Azure environment.

 

In order to choose the Azure alerts chart visualization in the mobile app:

  • On Android: Turn on the “Chart” toggle in the Latest alerts card in the Home view.
  • On iOS: Tap the “Chart” tab in the Latest alerts card in the Home view.

mobile.png

 

Intune

Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates as well. You can find the full list of updates to Intune on the What’s new in Microsoft Intune page, including changes that affect your experience using Intune.

 

Azure portal “how to” video series

Have you checked out our Azure portal “how to” video series yet? The videos highlight specific aspects of the portal so you can be more efficient and productive while deploying your cloud workloads. Check out our most recently published videos.

 

 

Next steps

The Azure portal has a large team of engineers that wants to hear from you, so please keep providing us your feedback in the comments section below or on Twitter @AzurePortal.

 

Sign in to the Azure portal now and see for yourself everything that’s new. Download the Azure mobile app to stay connected to your Azure resources anytime, anywhere.  See you next month!

Released: Public preview of Azure Arc enabled SQL Server

Azure Arc enabled SQL Server is now in public preview. It extends Azure services to SQL Server instances deployed outside of Azure in the customer’s datacenter, on the edge or in a multi-cloud environment.

 

The preview includes the following features:

  • Use the Azure portal to register and track the global inventory of your SQL instances across different hosting infrastructures. You can register an individual SQL instance or register a set of servers at scale using the same auto-generated script.
  • Use Azure Security Center to produce a comprehensive report of vulnerabilities in SQL Servers and get advanced, real-time security alerts for threats to SQL Servers and the OS.
  • Investigate threats in SQL Servers using Azure Sentinel.
  • Periodically check the health of the SQL Server configurations, and provide comprehensive reports and remediation recommendations, using the power of Azure Log Analytics.

 

The following diagram illustrates the architecture of Azure Arc enabled SQL Server:

pubic-preview-architecture.png

 

SQL Server can be installed on a virtual or physical machine running Windows or Linux that is connected to Azure Arc via the Connected Machine agent. The agent is installed and the machine is registered automatically as part of the SQL Server instance registration. The agent maintains secure communication with Azure Arc over outbound port 443, either directly or via an HTTP proxy. Any SQL Server instance of version 2012 or higher can be registered with Azure Arc.
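Since the only network requirement called out here is outbound TCP 443, a quick reachability sketch in Python might look like the following. The endpoint list is illustrative only, so substitute the actual Azure Arc endpoints from the official documentation, and note that this simple check ignores any HTTP proxy in the path.

```python
import socket

# Quick check for outbound TCP 443 connectivity, which the Connected Machine
# agent needs (directly or via an HTTP proxy). The hosts below are illustrative
# placeholders; use the Azure Arc endpoint list from the official docs.
ENDPOINTS = ["management.azure.com", "login.microsoftonline.com"]

for host in ENDPOINTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable")
    except OSError as err:
        print(f"{host}:443 NOT reachable: {err}")
```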

 

How to get started

Check out the Azure Arc enabled SQL Server documentation for more details on how to register and manage your SQL Server instances using Azure Arc.

 

Sidecar Pattern in Action

 

The Sidecar pattern enables the extension and enhancement of an existing process or application container without affecting or modifying the original process or container, thus preserving the single responsibility of the existing process or container. The problem context, solution, considerations, and use cases for this pattern are comprehensively documented here.

Exploring the usage and reviewing the implementation of this pattern in real-world projects is an effective way to gain a practical perspective as you start putting the pattern into practice. I have chosen two open source projects that implement the sidecar pattern as part of their core architecture: the first is Dapr (Distributed Application Runtime), and the second is Open Service Mesh.

 

Dapr

Dapr is an event-driven, portable runtime for building microservices in the cloud and on the edge.

Dapr facilitates the development of microservices by providing independent building blocks such as state management, pub/sub, observability, secrets, and more, which are encapsulated as the Dapr API in a platform-agnostic and language-agnostic manner. The independent nature of the building blocks lets you cherry-pick the set of features you want for your application, and the platform-agnostic aspect gives you a wide spectrum of platforms to operate in, ranging from lean, resource-starved devices on the edge to power-packed, full-fledged Azure Kubernetes Service clusters in the cloud.

Sidecar implementation in Dapr architecture

 

The key advantage of using Dapr is that the application utilizing it need not include any Dapr runtime code. This is achieved through the sidecar pattern: the core Dapr API itself is exposed as a sidecar, either a sidecar process or a sidecar container. If it is a sidecar process, the service code calls the Dapr API via HTTP/gRPC. In the world of containers orchestrated by Kubernetes, the Dapr API is itself a sidecar container, utilized by the application container within the same pod.
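To show what this looks like from the application’s side, here is a minimal Python sketch calling the Dapr sidecar’s HTTP state API. It assumes the default sidecar HTTP port of 3500 (or the DAPR_HTTP_PORT value provided to the app) and a state store component named “statestore”, both of which depend on your Dapr configuration.

```python
import os
import requests

# The application talks to its local Dapr sidecar over HTTP; no Dapr runtime
# code is linked into the app. Assumptions: default HTTP port 3500 (or the
# DAPR_HTTP_PORT the sidecar provides) and a state store named "statestore".
DAPR_PORT = os.getenv("DAPR_HTTP_PORT", "3500")
STATE_URL = f"http://localhost:{DAPR_PORT}/v1.0/state/statestore"

# Save state through the sidecar...
requests.post(STATE_URL, json=[{"key": "order-1", "value": {"status": "shipped"}}])

# ...and read it back.
print(requests.get(f"{STATE_URL}/order-1").json())
```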

Diagrams from the Dapr project’s documentation are shown below:


Image 1 – Self hosted

 

Image 2 – Kubernetes hosted

So, in Dapr’s case, the use case is one where a primary application uses a heterogeneous set of languages and frameworks, and the component (the Dapr API), located in a sidecar service or container, can be consumed by applications written in different languages using different frameworks.

Open Service Mesh (OSM)

OSM is a lightweight and extensible cloud-native service mesh that runs on Kubernetes. The project was initiated by Microsoft and has now been donated to the Cloud Native Computing Foundation (CNCF), where it is a Sandbox project at the time of this writing.

Sidecar implementation in OSM architecture

OSM onboards applications to the mesh by enabling automatic sidecar injection of the Envoy proxy. The diagram below (source: OSM design doc) shows the Envoy proxy as a sidecar container within the Kubernetes (k8s) pod.
Image 3 – OSM Components

To elaborate on the automatic sidecar injection: OSM uses a Kubernetes feature called the MutatingAdmissionWebhook admission controller to intercept the k8s API server request after pod creation is initiated, but before the pod is actually persisted and created, and augments the request to include an Envoy proxy sidecar. The created pod thus has the Envoy proxy sidecar automatically.
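To make the mechanism concrete, here is a conceptual Python sketch of what a mutating webhook does: it receives an AdmissionReview for the pod and returns a base64-encoded RFC 6902 JSON patch that appends a proxy sidecar container. This is not OSM’s actual injector (which is written in Go and also handles init containers, certificates, and the Envoy bootstrap configuration); the container name, image tag, and port are illustrative.

```python
import base64
import json

def build_sidecar_patch(pod: dict) -> list:
    """Return an RFC 6902 JSON patch that appends a proxy sidecar container,
    unless a container with the same name is already present."""
    sidecar = {
        "name": "envoy-sidecar",              # illustrative container name
        "image": "envoyproxy/envoy:v1.15.0",  # illustrative image tag
        "ports": [{"containerPort": 15000}],  # illustrative port
    }
    existing = [c["name"] for c in pod["spec"]["containers"]]
    if sidecar["name"] in existing:
        return []
    return [{"op": "add", "path": "/spec/containers/-", "value": sidecar}]

def admission_response(review: dict) -> dict:
    """Wrap the patch in the AdmissionReview response shape the API server expects."""
    patch = build_sidecar_patch(review["request"]["object"])
    return {
        "apiVersion": review["apiVersion"],
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```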

This means that the powerful features of Envoy, such as advanced load balancing, observability, and rate limiting, are readily available to each instance of the application, allowing the services in the mesh to communicate with each other.

In the case of Open Service Mesh, the sidecar pattern achieves the separation of concerns between the application’s core functionality and Envoy’s functionality, while still keeping Envoy’s features in close proximity to the application.

I hope this has given you a bit more insight into the Sidecar pattern and has also piqued your interest in both the Dapr and Open Service Mesh projects!

References:
1. Sidecar pattern – https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar
2. Dapr – https://dapr.io/
3. Sidecar implementation in Dapr – https://github.com/dapr/docs/tree/master/overview#sidecar-architecture
4. Open Service Mesh(OSM) – https://openservicemesh.io/
5. OSM component and interactions – https://github.com/openservicemesh/osm/blob/main/DESIGN.md#osm-components–interactions
6. Kubernetes Admission Controllers – https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
7. OSM MutatingWebhookConfiguration – https://github.com/openservicemesh/osm/blob/release-v0.3/charts/osm/templates/mutatingwebhook.yaml
Note: Images 1, 2, and 3 in this article are from the respective projects’ documentation, linked in the references above.

Sign up now: Microsoft Endpoint Manager post-Ignite activities


We hope you loved our Microsoft Endpoint Manager sessions at Microsoft Ignite 2020, the digital edition, and have had time to check out the dozens of videos we put on the Video Hub on TechCommunity. We couldn’t possibly squeeze everything into 48 hours, especially when it came to connecting you, my fellow Microsoft Endpoint Manager enthusiasts, with my colleagues in engineering and on the product team. So, whether you’re new to Microsoft Endpoint Manager or a long-time fan looking to know more, I’m happy to announce that we are keeping the connection going through October and beyond—and already looking ahead to Microsoft Ignite in March 2021!

Free Microsoft Endpoint Manager post-day: September 29th

While there are many sessions from Microsoft Ignite 2020 now available on demand, we are kicking off a special post-conference experience and offering a 24-hour, around-the-world training marathon: Manage, Configure, and Secure Devices with Microsoft Endpoint Manager. Starting at 10:00 a.m. Brisbane time (AEST) on September 29, 2020, we’ll kick off our post-day with four hours of Windows content, specifically:

  • Hour 1 – Get Your Windows Devices to Microsoft Endpoint Manager
  • Hour 2 – Configure your Windows Devices
  • Hour 3 – Secure your Windows Devices
  • Hour 4 – Improve the End User Experience on Your Windows Devices

We’ll then proceed to a four-hour block of mobile device management training:

  • Hour 1 – Get Your Mobile Devices to Microsoft Endpoint Manager
  • Hour 2 – Secure Your Mobile Devices with Microsoft Endpoint Manager
  • Hour 3 – Manage Your macOS Devices with Microsoft Endpoint Manager
  • Hour 4 – Manage Shared Devices for Firstline Workers

We’ll then repeat both sets of sessions twice, wrapping up at 6:00 p.m. Seattle time (PST). Each four-hour block will contain the same content, so attend whichever session times work best for you. You can do them in any order and jump between the repeats. Depending on where you are in the world, some sessions may start on Monday, September 28th, or continue into Wednesday, September 30th.

How do I sign up for the post-day?

Simply register at https://aka.ms/MEMPostDay and you’ll have access to calendar invitations for the times that work best for you. And yes, we’ll record it all, but attending live means you get to ask questions in real time in our exclusive Q&A sessions on Teams Live Events!

Section           Start time (UTC)             End time (UTC)

Windows Q&A 1     Tuesday, Sept 29, 00:00      Tuesday, Sept 29, 04:00
Mobility Q&A 1    Tuesday, Sept 29, 04:30      Tuesday, Sept 29, 08:30
Windows Q&A 2     Tuesday, Sept 29, 07:30      Tuesday, Sept 29, 11:30
Mobility Q&A 2    Tuesday, Sept 29, 12:00      Tuesday, Sept 29, 16:00
Windows Q&A 3     Tuesday, Sept 29, 16:30      Tuesday, Sept 29, 20:30
Mobility Q&A 3    Tuesday, Sept 29, 21:00      Wednesday, Sept 30, 01:00

Additional Q&A opportunities

Ask the ConfigMgr Experts: Endpoint Manager ATE series

We had some fantastic Q&A during our Microsoft Ignite digital breakouts and Ask the Expert sessions, but we know you probably have more questions. We also know that the Q&A panel with the Microsoft Endpoint Configuration Manager team continues to be one of our most popular sessions at Microsoft Ignite, so we’re now bringing it to you digitally in Teams Live Events on Wednesday, September 30th. Click the desired time below to add them to your calendar.

Tuesdays with Microsoft Endpoint Manager: Ask the Experts

We’ll continue offering live Q&A through October with more Ask the Experts sessions on Tuesdays.

The Microsoft Endpoint Manager AMA series continues

We’ve been running Ask Microsoft Anything (AMA) events in the Microsoft Tech Community for a few months now, and we aren’t stopping! In fact, we’re pumping them up in October and November to help answer questions you may have from any of the announcements and new features or capabilities we announced at Microsoft Ignite.

To keep tabs on announcements about future AMA events, to attend an AMA, or to see recaps of previous AMAs, bookmark https://aka.ms/AMA/MEM.

Microsoft Endpoint Manager 1:1 consultations

We know not every question is suitable to be asked in a public environment. And, you can’t pull someone aside at a Microsoft Ignite booth this year to ask something privately. As a result, we are opening up some 1:1 consultations with Microsoft Endpoint Manager engineers so you can still dig in where you need to. You’ll be able to start signing up for half-hour slots beginning Monday, September 28th and appointments will be available on weekdays from Thursday, October 1st to Thursday October 8th at different times of the day around the world. Slots will be limited so check back on September 28th to reserve one. (Note, for now, this link points to the Microsoft Endpoint Manager community on Tech Community. It will be redirected to the 1:1 signup tool on September 28th.)

Request a meeting with the Microsoft Endpoint Management leadership team

If you need to have a strategic conversation, possibly including your leadership team, you can submit your request here. When filling out the request form, please provide us a few possible days/times to meet and tell us who else from your organization you’d like to attend. Please also describe what you want to talk about in as much detail as possible so we can find the member of our leadership team that will be the most appropriate for you to speak with.

Become an Endpoint Management Insider

I’d like to conclude this post by announcing our own Insider community for Endpoint Management. We’ll use this community to host special events that we’ll reserve for our truest fans. We’ll also include fun things like community-driven roundtables and games to help connect you with your peers in the field of endpoint management. Sign up to be an Endpoint Management Insider today and watch your email for an invitation to our Insider site.

The September 25th Weekly Roundup is Posted!


Microsoft Ignite is officially wrapped, with big news coming from several products:

 

SharePoint Syntex, the first product from Project Cortex, was announced.

 

Microsoft Teams announced several new capabilities to help people stay connected, collaborate and build solutions in Teams.

 

Endpoint Analytics is now generally available in Microsoft Endpoint Manager.

 

@alexandertuvstrom is our Member of the Week, and a great contributor across multiple communities like Windows 10. 

 

View the Weekly Roundup for Sept 21-25th in Sway and the attached PDF document.