The finance department is the heart of the organization, juggling a myriad of critical yet complex tasks—from quote-to-cash processes like credit and collections to risk management and compliance. Financial teams are not only responsible for these mandatory, labor-intensive operations, but are increasingly tasked with providing real-time insights into business performance and recommendations for future growth initiatives. In fact, 80% of finance leaders and teams face challenges in taking on more strategic work beyond the operational portions of their roles.¹ On the one hand, teams are poised and ready to play a larger role in driving business growth strategy. On the other hand, they can’t walk away from maintaining a critical and mandatory set of responsibilities.
Microsoft is introducing a solution to help finance teams reclaim time and stay on top of the critical decisions that can impact business performance. Microsoft Copilot for Finance is a new Copilot experience for Microsoft 365 that unlocks AI-assisted competencies for financial professionals, right from within the productivity applications they use every day. Now available in public preview, Copilot for Finance connects to the organization’s financial systems, including Dynamics 365 and SAP, to provide role-specific workflow automation, guided actions, and recommendations in Microsoft Outlook, Excel, Microsoft Teams, and other Microsoft 365 applications—helping finance professionals save time and focus on what truly matters: navigating the company to success.
Copilot for Finance
By harnessing AI, it automates time-consuming tasks, allowing you to focus on what truly matters.
Leveraging innovation to accelerate fiscal stewardship
Finance teams play a critical role in innovating processes to improve efficiency across the organization. As teams look to evolve and improve how time is spent to support more strategic work, it’s evident that elements of operational tasks are mundane, repetitive, and manually intensive. Instead of spending the majority of their day on analysis or cross-team collaboration, 62% of finance professionals are stuck in the drudgery of data entry and review cycles.² While some of these tasks are critical and can’t be automated—like compliance and tax reporting—we also hear from a majority of finance leaders that they lack the automation tools and technology they need to transform these processes and free up time.¹
With the pace of business accelerating every day, becoming a disruptor requires investing in technology that will drive innovation and support the bottom line. In the next three to five years, 68% of CFOs anticipate revenue growth from generative AI (GenAI).³ By implementing next-generation AI to deliver insight and automate costly and time-intensive operational tasks, teams can reinvest that time to accelerate their impact as financial stewards and strategists.
Microsoft Copilot for Finance: Accomplish more with less
Copilot for Finance provides AI-powered assistance while working in Microsoft 365 applications, making financial processes more streamlined and automated. Copilot for Finance can streamline audits by pulling and reconciling data with a simple prompt, simplify collections by automating communication and payment plans, and accelerate financial reporting by detecting variances with ease. The potential time and cost savings are substantial, transforming not just how financial professionals work, but how they drive impact within the organization.
Users can interact with Copilot for Finance in multiple ways. It both suggests actions in the flow of work and enables users to ask questions by typing a prompt in natural language. For example, a user can prompt Copilot to “help me understand forecast to actuals variance data.” In moments, Copilot for Finance will generate insights and pull data directly from across the ERP and financial systems, suggesting actions to take and providing a head start by generating contextualized text and attaching relevant files. Like other Copilot experiences, users can easily check source data to ensure transparency before using Copilot to take any actions.
Copilot for Finance connects to existing financial systems, including Dynamics 365 and SAP, as well as thousands more with Microsoft Copilot Studio. With the ability to both pull insight from and update actions back to existing sources, Copilot for Finance empowers users to stay in the flow of work and complete tasks more efficiently.
Built for finance professionals
Copilot for Finance is well versed in the critical and often time-consuming tasks and processes across a finance professional’s workday, providing a simple way to ask questions about data, surface insights, and automate processes—helping to reduce the time spent on repetitive actions. While today’s modern finance team is responsible for a litany of tasks, let’s explore three scenarios that Copilot for Finance supports at public preview.
Audits of a company’s financial statements are critical to ensuring accuracy and mitigating risk. Traditionally, accounts receivable managers were required to pull account data from ERP records, reconcile it in Excel, and search for inaccuracies, all by hand. With Copilot for Finance, these critical steps are done with a single prompt, allowing AR managers to act on any inconsistencies and delinquencies found, using Copilot-suggested copy and relevant invoices.
“Finance organizations need to be utilizing generative AI to help blend structured and unstructured datasets. Copilot for Finance is a solution that aggressively targets this challenge. Microsoft continues to push the boundary of business applications by providing AI-driven solutions for common business problems. Copilot for Finance is another powerful example of this effort. Copilot for Finance has potential to help finance professionals at organizations of all sizes accelerate impact and possibly even reduce financial operation costs.”
—Kevin Permenter, IDC research director, financial applications
The collections process is another critical responsibility as it affects company cash flow, profitability, and customer relationships. Collection coordinators spend their time reviewing outstanding accounts and attempting to reconcile them in a timely manner. This often means phone calls, emails, and negotiating payment plans. With Copilot for Finance, collection coordinators can focus their time on more meaningful client-facing interactions by leaving the busy work to Copilot. Copilot for Finance supports the collections process end-to-end by suggesting priority accounts, summarizing conversations to record back to ERP, and providing customized payment plans for customers.
Copilot for Finance can also help financial analysts to reduce the risk of reporting errors and missing unidentified variances. Rather than manually reviewing large financial data sets for unusual patterns, users can prompt Copilot to detect outliers and highlight variances for investigation. Copilot for Finance streamlines variance identification with reusable natural language instructions in the enterprise context. A financial analyst can direct Copilot to identify answers for variances, and Copilot will gather supporting data autonomously.
Copilot will suggest contacts with relevant financial context and provide automatic summaries for streamlined tracking of action items and follow-ups. Copilot for Finance can generate fine-tuned financial commentary, PowerPoint presentations, and emails to report to key stakeholders.
Our journey with Microsoft Finance
Microsoft employs thousands across its finance team to manage and drive countless processes and systems as well as identify opportunities for company growth and strategy. Who better to pilot the latest innovation in finance? For the first phase, we worked closely with a Treasury team focused on accounts receivable as well as a team in financial planning and analysis—who need to reconcile data as a part of their workflow before conducting further analysis. After trialing the data reconciliation capabilities in Copilot for Finance, the initial value and potential for scale for these teams was clear.
“Financial analysts today spend, on average, one to two hours reconciling data per week. With Copilot for Finance, that is down to 10 minutes. Functionality like data reconciliation will be a huge time saver for an organization as complex as Microsoft.”
—Sarper Baysal, Microsoft Commercial Revenue Planning Lead
“The accounts receivable reconciliation capabilities help us to eliminate the time it takes to compare data across sources, saving an average 20 minutes per account. Based on pilot usage, this translates to an average of 22% cost savings in average handling time.”
—Gladys Jin, Senior Director Microsoft Finance Global Treasury and Financial Services
Microsoft Copilot for Finance availability
Ready to take the next step? Microsoft Copilot for Finance is available for public preview today. Explore the public preview demo and stay tuned for additional announcements by following us on social.
Configuration analyzer in Microsoft Defender for Office 365 helps you find and fix security policies that are less secure than the recommended settings. It allows you to compare your current policies with the standard or strict preset policies, lets you apply recommendations to improve your security posture, and view historical changes to your policies.
We are excited to announce several updates to Configuration analyzer. This update includes:
New recommendations covering more scenarios.
New flyout which adds more context around the recommendations.
New export button which lets you easily export recommendations to share with your partners.
Clicking on a recommendation will now open a flyout with brief details about why we are making the recommendation, as well as targeted links to documentation where you can learn more.
Exporting the Recommendations:
A new Export button appears when you select one or more recommendations. Clicking the Export button downloads the selected recommendations as a CSV file, which can be shared with external partners who might not have access to your environment.
If you have other questions or feedback about Microsoft Defender for Office 365, engage with the community and Microsoft experts in the Defender for Office 365 forum.
Azure HDInsight Spark 5.0 to HDI 5.1 Migration
A new version, HDInsight 5.1, has been released with Spark 3.3.1. This release improves join query performance via Bloom filters, increases the Pandas API coverage with support for popular Pandas features such as datetime.timedelta and merge_asof, and simplifies migration from traditional data warehouses by improving ANSI compliance and supporting dozens of new built-in functions.
In this article we will discuss the migration of user applications from HDInsight 5.0 (Spark 3.1) to HDInsight 5.1 (Spark 3.3). The sections include:
1. Changes that are compatible or require only minor changes
2. Changes in Spark that require application changes
Application Changes with Backport
The changes below are part of the HDI 5.1 release. If these functions are used in applications, the steps given can be taken to avoid changes to application code.
Since Spark 3.3, the histogram_numeric function in Spark SQL returns an output type of an array of structs (x, y), where the type of the ‘x’ field in the return value is propagated from the input values consumed in the aggregate function. In Spark 3.2 or earlier, ‘x’ always had double type. Optionally, since Spark 3.3, use the configuration spark.sql.legacy.histogramNumericPropagateInputType to revert to the previous behavior.
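A minimal sketch in pyspark, assuming a shell where spark is the active SparkSession (the sample values are illustrative, and the assumption here is that the legacy flag defaults to true on Spark 3.3):

spark.sql("SELECT histogram_numeric(col, 2) FROM VALUES (1), (2), (10) AS tab(col)").printSchema()
# Spark 3.3: the 'x' field is int, propagated from the input type; Spark 3.1: 'x' is always double
spark.conf.set("spark.sql.legacy.histogramNumericPropagateInputType", "false")  # revert to double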
In Spark 3.3, the timestamps subtraction expression such as timestamp ‘2021-03-31 23:48:00’ – timestamp ‘2021-01-01 00:00:00’ returns values of DayTimeIntervalType. In Spark 3.1 and earlier, the type of the same expression is CalendarIntervalType. To restore the behavior before Spark 3.3, you can set spark.sql.legacy.interval.enabled to true.
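A quick way to observe the type change in pyspark, assuming spark is the active SparkSession:

df = spark.sql("SELECT timestamp'2021-03-31 23:48:00' - timestamp'2021-01-01 00:00:00' AS diff")
df.printSchema()  # Spark 3.3: interval day to second; Spark 3.1: interval (CalendarIntervalType)
spark.conf.set("spark.sql.legacy.interval.enabled", "true")  # restores the pre-3.3 behavior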
Since Spark 3.3, the functions lpad and rpad have been overloaded to support byte sequences. When the first argument is a byte sequence, the optional padding pattern must also be a byte sequence and the result is a BINARY value. The default padding pattern in this case is the zero byte. To restore the legacy behavior of always returning string types, set spark.sql.legacy.lpadRpadAlwaysReturnString to true.
> SELECT hex(lpad(x'1020', 5, x'05'));
0505051020
> SELECT hex(rpad(x'1020', 5, x'05'));
1020050505
Since Spark 3.3, Spark turns a non-nullable schema into nullable for API DataFrameReader.schema(schema: StructType).json(jsonDataset: Dataset[String]) and DataFrameReader.schema(schema: StructType).csv(csvDataset: Dataset[String]) when the schema is specified by the user and contains non-nullable fields. To restore the legacy behavior of respecting the nullability, set spark.sql.legacy.respectNullabilityInTextDatasetConversion to true.
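The note above refers to the Scala Dataset[String] API; the closest pyspark analogue reads from an RDD of JSON strings. A minimal sketch, assuming spark is the active SparkSession:

from pyspark.sql.types import StructType, StructField, StringType
schema = StructType([StructField("name", StringType(), nullable=False)])
rdd = spark.sparkContext.parallelize(['{"name": "a"}'])
spark.read.schema(schema).json(rdd).printSchema()
# Spark 3.3: the field comes back as nullable = true despite the user-given schema
spark.conf.set("spark.sql.legacy.respectNullabilityInTextDatasetConversion", "true")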
Since Spark 3.3, nulls are written as empty strings in the CSV data source by default. In Spark 3.2 or earlier, nulls were written as quoted empty strings, "". To restore the previous behavior, set nullValue to "", or set the configuration spark.sql.legacy.nullValueWrittenAsQuotedEmptyStringCsv to true.
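A sketch of the difference, with a hypothetical output path:

df = spark.createDataFrame([("a", None), ("b", "x")], ["c1", "c2"])
df.write.mode("overwrite").csv("/tmp/csv_null_demo")  # hypothetical path
# Spark 3.3 writes the null row as:  a,     (unquoted empty string)
# Spark 3.1 writes the null row as:  a,""   (quoted empty string)
df.write.mode("overwrite").option("nullValue", '""').csv("/tmp/csv_null_demo")  # restores quoting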
Since Spark 3.3, Spark will try to use built-in data source writer instead of Hive serde in INSERT OVERWRITE DIRECTORY. This behavior is effective only if spark.sql.hive.convertMetastoreParquet or spark.sql.hive.convertMetastoreOrc is enabled respectively for Parquet and ORC formats. To restore the behavior before Spark 3.3, you can set spark.sql.hive.convertMetastoreInsertDir to false.
Spark logs:
INFO ParquetOutputFormat [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)]: ParquetRecordWriter [block size: 134217728b, row group padding size: 8388608b, validating: false]
INFO ParquetWriteSupport [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)]: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "fname", "type" : "string", "nullable" : true, "metadata" : { } }, {
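Something like the following reproduces the log above; this is a sketch assuming a Hive-enabled session and a hypothetical directory path:

spark.sql("INSERT OVERWRITE DIRECTORY '/tmp/iod_demo' STORED AS PARQUET SELECT 'a' AS fname")
# Spark 3.3 uses the built-in Parquet writer (hence ParquetWriteSupport in the log above)
spark.conf.set("spark.sql.hive.convertMetastoreInsertDir", "false")  # fall back to Hive serde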
Since Spark 3.3.1 and 3.2.3, for SELECT … GROUP BY a GROUPING SETS (b)-style SQL statements, grouping__id returns different values from Apache Spark 3.2.0, 3.2.1, 3.2.2, and 3.3.0. It computes based on user-given group-by expressions plus grouping set columns. To restore the behavior before 3.3.1 and 3.2.3, you can set spark.sql.legacy.groupingIdWithAppendedUserGroupBy to true.
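An illustrative query, assuming spark is the active SparkSession; the grouping__id value it returns differs between 3.3.0 and 3.3.1:

spark.sql("SELECT grouping__id FROM VALUES (1, 2) AS t(a, b) GROUP BY a GROUPING SETS (b)").show()
spark.conf.set("spark.sql.legacy.groupingIdWithAppendedUserGroupBy", "true")  # pre-3.3.1 values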
In Spark 3.3, spark.sql.adaptive.enabled is enabled by default; in Spark 3.1, it was disabled by default. To restore the behavior before Spark 3.3, you can set spark.sql.adaptive.enabled to false.
Adaptive Query Execution (AQE) is an optimization technique in Spark SQL that makes use of runtime statistics to choose the most efficient query execution plan, and it is enabled by default since Apache Spark 3.3.0. Spark SQL can turn AQE on and off via spark.sql.adaptive.enabled as an umbrella configuration. As of Spark 3.0, there are three major features in AQE: coalescing post-shuffle partitions, converting sort-merge join to broadcast join, and skew join optimization.
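Checking and toggling AQE from pyspark:

print(spark.conf.get("spark.sql.adaptive.enabled"))  # 'true' on Spark 3.3, 'false' on Spark 3.1
spark.conf.set("spark.sql.adaptive.enabled", "false")  # restore the Spark 3.1 default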
In Spark 3.3, the output schema of SHOW TABLES becomes namespace: string, tableName: string, isTemporary: boolean. In Spark 3.1 or earlier, the namespace field was named database for the builtin catalog, and there is no isTemporary field for v2 catalogs. To restore the old schema with the builtin catalog, you can set spark.sql.legacy.keepCommandOutputSchema to true.
In Spark 3.1, the field is named database. In Spark 3.3, the field is named namespace. We can restore the old behavior by setting spark.sql.legacy.keepCommandOutputSchema to true.
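A minimal check, assuming spark is the active SparkSession:

spark.sql("SHOW TABLES").printSchema()
# Spark 3.3: namespace, tableName, isTemporary; Spark 3.1: database, tableName, isTemporary
spark.conf.set("spark.sql.legacy.keepCommandOutputSchema", "true")  # restore the old schema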
In Spark 3.3, the output schema of SHOW TABLE EXTENDED becomes namespace: string, tableName: string, isTemporary: boolean, information: string. In Spark 3.1 or earlier, the namespace field was named database for the builtin catalog, and no change for the v2 catalogs. To restore the old schema with the builtin catalog, you can set spark.sql.legacy.keepCommandOutputSchema to true.
The spark-sql shell shows the same pattern for SHOW TABLE EXTENDED on both versions: in Spark 3.1 the field is named database, in Spark 3.3 it is named namespace, and the old schema can be restored by setting spark.sql.legacy.keepCommandOutputSchema to true.
In Spark 3.3, CREATE TABLE AS SELECT with non-empty LOCATION will throw AnalysisException. To restore the behavior before Spark 3.2, you can set spark.sql.legacy.allowNonEmptyLocationInCTAS to true.
In Spark 3.3, after setting this property, we are able to run CTAS with a non-empty location. In Spark 3.1, such tables can be created without any property change.
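A sketch; the table name and location are hypothetical, and the location must already contain files for the exception to trigger:

spark.conf.set("spark.sql.legacy.allowNonEmptyLocationInCTAS", "true")
spark.sql("CREATE TABLE ctas_demo USING parquet LOCATION '/tmp/nonempty_dir' AS SELECT 1 AS id")
# Without the flag, Spark 3.3 raises AnalysisException when '/tmp/nonempty_dir' is not empty;
# Spark 3.1 succeeds either way.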
In Spark 3.3, special datetime values such as epoch, today, yesterday, tomorrow, and now are supported in typed literals or in cast of foldable strings only, for instance, select timestamp’now’ or select cast(‘today’ as date). In Spark 3.1 and 3.0, such special values are supported in any casts of strings to dates/timestamps. To keep these special values as dates/timestamps in Spark 3.1 and 3.0, you should replace them manually, e.g. if (c in (‘now’, ‘today’), current_date(), cast(c as date)).
In Spark 3.3 and 3.1, the code below works exactly the same.
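Both statements use typed literals or casts of foldable strings, so they behave identically on both versions; the second query mirrors the manual replacement suggested above for string columns:

spark.sql("SELECT timestamp'now', cast('today' AS date)").show()
spark.sql("SELECT if(c IN ('now', 'today'), current_date(), cast(c AS date)) AS d FROM VALUES ('today') AS t(c)").show()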
Application Changes Expected
There are some changes to Spark functions between HDI 5.0 and 5.1. Whether applications are affected depends on whether they use the functionality and APIs below.
Since Spark 3.3, DESCRIBE FUNCTION fails if the function does not exist. In Spark 3.2 or earlier, DESCRIBE FUNCTION can still run and print “Function: func_name not found”.
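A sketch with a hypothetical function name:

try:
    spark.sql("DESCRIBE FUNCTION no_such_func").show()
except Exception as e:
    print(e)  # Spark 3.3 fails here; Spark 3.1 prints "Function: no_such_func not found"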
Since Spark 3.3, DROP FUNCTION fails if the function name matches one of the built-in functions’ name and is not qualified. In Spark 3.2 or earlier, DROP FUNCTION can still drop a persistent function even if the name is not qualified and is the same as a built-in function’s name.
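For example, dropping by an unqualified built-in name now fails, while a qualified name still targets the persistent function (the database name is illustrative):

try:
    spark.sql("DROP FUNCTION abs")  # unqualified, matches the built-in 'abs'
except Exception as e:
    print(e)  # fails on Spark 3.3
spark.sql("DROP FUNCTION IF EXISTS default.abs")  # qualified names still work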
Since Spark 3.3, when reading values from a JSON attribute defined as FloatType or DoubleType, the strings “+Infinity”, “+INF”, and “-INF” are now parsed to the appropriate values, in addition to the already supported “Infinity” and “-Infinity” variations. This change was made to improve consistency with Jackson’s parsing of the unquoted versions of these values. Also, the allowNonNumericNumbers option is now respected so these strings will now be considered invalid if this option is disabled.
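A sketch in pyspark, assuming spark is the active SparkSession:

from pyspark.sql.types import StructType, StructField, DoubleType
schema = StructType([StructField("v", DoubleType())])
rdd = spark.sparkContext.parallelize(['{"v": "+INF"}', '{"v": "-Infinity"}'])
spark.read.schema(schema).json(rdd).show()
# Spark 3.3: both rows parse to Infinity / -Infinity; Spark 3.1: "+INF" parses to null
spark.read.schema(schema).option("allowNonNumericNumbers", "false").json(rdd).show()  # invalid in 3.3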
Spark 3.3 introduced error-handling functions like the below (see the example after this list):
TRY_SUBTRACT – behaves like the “-” operator but returns null in case of an error.
TRY_MULTIPLY – is a safe representation of the “*” operator.
TRY_SUM – is an error-handling implementation of the sum operation.
TRY_AVG – is an error-handling implementation of the average operation.
TRY_TO_BINARY – converts an input value to a binary value, returning null if the conversion fails.
For example, when a correct value is given to try_to_binary for base64 decoding, it returns the decoded value; when a wrong value is given, it returns null rather than throwing an error.
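In pyspark, assuming spark is the active SparkSession:

spark.sql("SELECT try_to_binary('YWJj', 'base64') AS ok").show()          # decodes to 'abc'
spark.sql("SELECT try_to_binary('not base64!', 'base64') AS bad").show()  # returns null, no error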
Since Spark 3.3, ADD FILE/JAR/ARCHIVE commands require each path to be enclosed in double or single quotes if the path contains whitespace. In Spark 3.1, adding multiple JARs in one command did not work; only one JAR could be added at a time.
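A sketch; the jar paths are hypothetical and must exist for the commands to succeed:

spark.sql("ADD JAR '/tmp/jars/my lib.jar'")               # Spark 3.3: quotes required for whitespace
spark.sql("ADD JAR '/tmp/jars/a.jar' '/tmp/jars/b.jar'")  # Spark 3.3 accepts multiple paths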
In Spark 3.3, the following meta-characters are escaped in the show() action. In Spark 3.1 or earlier, they are output as-is.
\n (new line)
\r (carriage return)
\t (horizontal tab)
\f (form feed)
\b (backspace)
\u000B (vertical tab)
\u0007 (bell)
In Spark 3.3, the meta-characters are escaped in the show() output. In Spark 3.1, they are rendered as the control characters they represent.
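A quick demonstration:

df = spark.createDataFrame([("line1\nline2\tend",)], ["s"])
df.show(truncate=False)
# Spark 3.3 prints the escaped text: line1\nline2\tend
# Spark 3.1 prints a literal line break and tab inside the cell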
In Spark 3.3, the output schema of DESCRIBE NAMESPACE becomes info_name: string, info_value: string. In Spark 3.1 or earlier, the info_name field was named database_description_item and the info_value field was named database_description_value for the builtin catalog. To restore the old schema with the builtin catalog, you can set spark.sql.legacy.keepCommandOutputSchema to true.
In Spark 3.1, the output shows the database_description_item and database_description_value headers by default. In Spark 3.3, it shows info_name and info_value until the legacy property is set to true.
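A minimal check:

spark.sql("DESCRIBE NAMESPACE default").show()
# Spark 3.3 columns: info_name, info_value
# Spark 3.1 columns: database_description_item, database_description_value
spark.conf.set("spark.sql.legacy.keepCommandOutputSchema", "true")  # restore the old headers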
In Spark 3.3, DataFrameNaFunctions.replace() no longer uses exact string matching for the input column names, in order to match the SQL syntax and support qualified column names. An input column name containing a dot (not nested) needs to be escaped with backticks. It now throws AnalysisException if the column is not found in the data frame schema, and IllegalArgumentException if the input column name refers to a nested column. In Spark 3.1 and earlier, invalid input column names and nested column names were ignored.
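A sketch of the new escaping requirement; the column name and values are illustrative:

df = spark.createDataFrame([("a",)], ["my.col"])
df.na.replace("a", "b", subset=["`my.col`"]).show()  # Spark 3.3: backticks required for a dotted name
# Spark 3.1 accepted subset=["my.col"] as an exact string match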
In Spark 3.3, the CREATE TABLE .. LIKE .. command cannot use reserved properties. You need their specific clauses to specify them, for example, CREATE TABLE test1 LIKE test LOCATION ‘some path’. You can set spark.sql.legacy.notReserveProperties to true to ignore the ParseException; in this case, these properties will be silently removed, for example: TBLPROPERTIES(‘owner’=’yao’) will have no effect. In Spark version 3.1 and below, the reserved properties can be used in the CREATE TABLE .. LIKE .. command but have no side effects; for example, TBLPROPERTIES(‘location’=’/tmp’) does not change the location of the table but only creates a headless property, just like ‘a’=’b’.
In Spark 3.3, we got the ParseException described above; after setting the property, we were able to create the table. In Spark 3.1, we didn’t get any exceptions or errors.
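A sketch; the table names and location are hypothetical:

spark.sql("CREATE TABLE base_tbl (id INT) USING parquet")
try:
    spark.sql("CREATE TABLE like_tbl LIKE base_tbl TBLPROPERTIES('location'='/tmp/x')")
except Exception as e:
    print(e)  # Spark 3.3: ParseException for the reserved property
spark.sql("CREATE TABLE like_tbl LIKE base_tbl LOCATION '/tmp/like_tbl'")  # use the clause instead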
In Spark 3.3, the unit-to-unit interval literals like INTERVAL ‘1-1’ YEAR TO MONTH and the unit list interval literals like INTERVAL ‘3’ DAYS ‘1’ HOUR are converted to ANSI interval types: YearMonthIntervalType or DayTimeIntervalType. In Spark 3.1 and earlier, such interval literals are converted to CalendarIntervalType. To restore the behavior before Spark 3.3, you can set spark.sql.legacy.interval.enabled to true.
In Spark 3.3, these literals are converted to the ANSI interval types (YearMonthIntervalType or DayTimeIntervalType) by default; setting spark.sql.legacy.interval.enabled to true converts them back to CalendarIntervalType. In Spark 3.1, changing the property makes no difference, since CalendarIntervalType is already the default.
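Observing the types, assuming spark is the active SparkSession:

spark.sql("SELECT INTERVAL '1-1' YEAR TO MONTH AS ym, INTERVAL '3' DAY AS d").printSchema()
# Spark 3.3: interval year to month / interval day; Spark 3.1: calendarinterval for both
spark.conf.set("spark.sql.legacy.interval.enabled", "true")  # restores CalendarIntervalType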
In Spark 3.3, the TRANSFORM operator can’t support alias in inputs. In Spark 3.1 and earlier, we can write script transform like SELECT TRANSFORM(a AS c1, b AS c2) USING ‘cat’ FROM TBL.
In Spark 3.1, we are able to use aliases directly inside TRANSFORM, but in Spark 3.3 this is prohibited; the workaround below can be used instead.
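The workaround moves the aliases into a subquery. A sketch, assuming a table tbl with columns a and b and a POSIX cat command available on the executors:

spark.sql("""
SELECT TRANSFORM(c1, c2) USING 'cat' AS (c1, c2)
FROM (SELECT a AS c1, b AS c2 FROM tbl) t
""").show()
# Spark 3.1 also accepted: SELECT TRANSFORM(a AS c1, b AS c2) USING 'cat' FROM tbl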
Dynamics 365 Commerce is a comprehensive omnichannel solution that empowers retailers to deliver personalized, seamless, and differentiated shopping experiences across physical and digital channels. In the 2024 release Wave 1, Dynamics 365 Commerce continues to innovate and enhance its capabilities to improve store associate productivity and meet the evolving needs of customers and businesses. Here are some of the highlights of the new features coming soon:
Copilot in Site builder is going global and multi-lingual
Copilot in Site builder is a generative AI assistant that helps users create engaging and relevant content for their e-commerce sites. Copilot uses the product information and the user’s input to generate product enrichment content that is crafted using brand tone and tailored for targeted customer segments.
Image: Copilot Site Builder
In the 2024 release wave 1, Copilot in Site builder is expanding its language support to 23 additional locales, including German, French, and Spanish. This feature demonstrates Microsoft’s commitment to making Copilot accessible globally and empowering users to create multilingual content with ease and efficiency.
Strengthening our dedication to creating a comprehensive B2B solution for Digital Commerce by supporting B2B indirect commerce
Dynamics 365 Commerce supports both B2C and B2B commerce scenarios, enabling retailers to sell directly to consumers and businesses. In the 2024 release wave 1, Dynamics 365 Commerce fortifies its B2B investments by introducing support for B2B indirect commerce, which enables manufacturers selling through a network of distributors to get complete visibility into their sales and inventory.
Image: New distributor capabilities
New distributor capabilities enable manufacturers to provide a self-service platform that simplifies distributor operations and builds meaningful, long-lasting business relationships through efficient and transparent transactions. Distributors can access product catalogs and pricing specific to their partner agreements, manufacturers can place orders on behalf of their customers with a specific distributor, and outlets can track order status and history.
Dynamics 365 Commerce also streamlines multi-outlet ordering, enabling business buyers that are associated with more than one outlet organization to buy for all of them. Commerce provides the ability to seamlessly buy for multiple organizations using the same email account, enabling buyers to be more efficient.
Image: Order for Organizations
Additionally, Dynamics 365 Commerce supports advance ordering, which is a common practice in some businesses to order products in advance to ensure they have adequate stock when needed. This feature enables customers to specify the desired delivery date and include additional order notes.
Dynamics 365 Commerce is also introducing support for a promotions page on an e-commerce site that serves as a hub to showcase various deals and promotions that shoppers can take advantage of. The promotions page can display active and upcoming promotions.
Image: Promotions Page
Adyen Tap to Pay is coming to Store Commerce app on iOS
The Store Commerce app is a mobile point of sale (POS) solution that enables store associates to complete transactions through a mobile device on the sales floor, pop-up store, or remote location. The Store Commerce app supports various payment methods, such as cash, card, gift card, and loyalty points.
Image: Adyen Tap to Pay
In the 2024 release wave 1, Dynamics 365 Commerce is introducing Adyen Tap to Pay capabilities into the Store Commerce app for iOS, so that retailers everywhere can accept payments directly on Apple iPhones. Adyen Tap to Pay enhances the utility and versatility of the Store Commerce app, as it eliminates the need for additional hardware or peripherals to process payments. It also enables retailers to offer a more customer-centric and engaging in-store retail experience, as store associates can interact with customers anywhere in the store and complete transactions on the spot.
Speed up your checkout process with simplified and consistent payment workflows for different payment methods on Store Commerce app
Efficiency and predictability are key to the smooth operation of a point of sale (POS) system, especially when it comes to payment processing. When store associates can process customer payments across a variety of payment types with minimal friction, customers spend less time waiting and more time shopping.
In the 2024 release wave 1, Dynamics 365 Commerce is improving the POS payment processing user experience to create more consistent workflows across payment types. The new user experience simplifies the payment selection and confirmation process, reduces the number of clicks and screens, and provides clear feedback and guidance to the store associate. The new user experience also supports split tendering, which allows customers to pay with multiple payment methods in a single transaction.
Image: Check out process
The improved POS payment processing user experience will contribute to efficiencies in the checkout process and more satisfied customers. It will also reduce the training time and effort for store associates, as they can easily learn and master the payment workflows.
Enabling retailers to effectively monitor and track inventory of Batch-controlled products via Store Commerce app
Batch-controlled products are products that are manufactured in batches and associated with unique identifiers for quality control and traceability. Batch-controlled products are commonly used in food, chemical, and electronics industries, where the quality and safety of the products are critical.
Image: Batch Control Products
In the 2024 release wave 1, Dynamics 365 Commerce enhances the Store Commerce app to support batch-controlled products. This feature enables store associates to scan or enter the batch number of the products during the sales or return transactions and validate the batch information against the inventory records. This feature also enables store associates to view the batch details of the products, such as the expiration date, manufacture date, and lot number.
With these new features, Dynamics 365 Commerce aims to provide you with the best tools and solutions to grow your business and delight your customers. Whether you want to create engaging and relevant content for your e-commerce site, automate and integrate your order management workflows, expand your B2B commerce opportunities, or improve your payment processing and inventory management, Dynamics 365 Commerce has something new for you.
To learn more about Dynamics 365 Commerce:
Learn more about additional investments and their timelines in the release plans.
While onboarding customers to Azure, they often ask what permissions they need to assign to their IT Ops teams or to partners. I’ve also seen customers get confused when we ask for an Azure AD permission for some task: they tell us they’ve provided Owner access on the Azure subscription, so why is an Azure AD permission required, and how are the two related? So I thought of writing this blog to share how many permission domains there are when you use Azure.
We will talk about these RBAC domains:
Classic Roles
Azure RBAC Roles
Azure AD Roles
EA RBAC
MCA RBAC
Reserved Instance RBAC
Classic Roles
So let us talk about RBAC first. When I used to work in the Azure Classic portal, there were fewer roles: mostly Account Admin, Co-Admin, and Service Admin. The person who created the subscription would become the Service Admin, and if that person wanted to share the admin privilege, they would assign the Co-Administrator role to someone else.
When you go to the Subscription -> IAM blade, you’ll still see this. I have seen customers trying to provide Owner access attempt to use this Add co-administrator button instead. Now you know the difference: this is not meant for providing someone access to ARM resources.
Azure RBAC
Let us talk about ARM RBAC now. When we moved from classic to Azure RBAC, we started with more fine-grained access control. Each service has its own roles, e.g. Virtual Machine Contributor for managing VMs, Network Contributor for managing networks, and so on. The user gets stored in Azure AD itself, but the permissions are maintained at the subscription, resource group, management group, or resource level.
Each RBAC role has Actions, which define what the role is allowed to perform.
These actions are part of the control plane, which gives you access to manage the service and its settings or configurations. We also have data plane actions, which provide access to the actual data. Let us take the example of Azure Blob Storage: if you get the Reader role, you will be able to see the resource itself but not the actual data in Blob Storage when you authenticate via Azure AD. If you want to see the actual data, you can have the Storage Blob Data Contributor role assigned to your ID. Similarly, there are other services that expose data actions, e.g. Azure Key Vault and Service Bus.
Where these RBAC roles can be assigned (at the resource, resource group, or management group level) is another discussion, which I will cover in another blog post.
Azure AD Roles
This is used when you deal with Azure AD itself or with services whose roles are stored in Azure AD, like SharePoint, Exchange, or Dynamics 365. Dealing with Azure AD roles might be required in multiple instances, for example when using a service that creates service principals in the backend, like app registration. Azure Migrate, Site Recovery, etc. would require Azure AD permissions to be assigned to your ID.
This RBAC domain is separate from Azure RBAC; it is stored in Azure AD itself and managed centrally from the Roles and administrators blade.
The person who created the tenant gets the Global Administrator role, and from there we have fine-grained access based on the roles.
Though Azure AD roles are different from the Azure RBAC roles we assign to subscriptions, a Global Administrator can elevate themselves and get access to all the subscriptions in their tenant through a toggle.
Once you enable this toggle, you get the User Access Administrator role at the root scope, under which all the management groups get created. So eventually you can access all the subscriptions.
This is a rare and exceptional procedure that requires consultation with your internal team and a clear justification for its activation.
EA RBAC
If you are an enterprise customer and have signed up for an EA agreement with Microsoft, then in order to create subscriptions and manage billing you need to log on to the EA portal, which has now moved to the Azure portal. Hence we have a set of six RBAC roles which can be used from the Cost Management + Billing section in the Azure portal.
Enterprise administrator
EA purchaser
Department administrator
Account owner
Service administrator
Notification contact
Which set of permissions is assigned at which level of the hierarchy is explained through the image below, which is copied from the Microsoft Learn documentation mentioned below.
Below is a sample screenshot of what you see when you open Cost Management + Billing in the Azure portal. Here you will see accounts, departments, and subscriptions.
MCA RBAC
If you have purchased an MCA, then you get a hierarchy at which permissions can be assigned. Top-level permissions are assigned at the billing account scope and then at the billing profile level.
Billing account owner and Billing profile owner are the most common roles you will use. More roles are mentioned in the article below, which you can go through.
Reserved Instance RBAC
A common request I get from customers: “I have Contributor/Owner access to the subscription, but I still do not see the Reserved Instance purchased by my colleague.” A few years back, the person who purchased a reservation was the one who provided access to others by going to the individual reservation. This is still possible, but now you can get access to all reservations in the tenant.
A reservation can be seen and managed by the admin who purchased it, by an EA admin, or by a person with the Reservation Administrator role.
You can do this via PowerShell too; check this document for more information.
More information regarding who has access to RIs is mentioned in the article below.