In-App Notification PCF Control
Overview

A robust Power Apps Component Framework (PCF) control for Dataverse, designed to deliver, display, and manage in-app notifications. This control supports secure environment variable lookup, publisher-agnostic logic, recipient resolution, and integrates with Microsoft Graph for advanced scenarios. It is built for easy adoption by developers and customers.

Testing Status: This control has been tested for minimum positive-flow scenarios. Comprehensive testing for edge cases, error handling, and production readiness is recommended before deployment.

For more information about in-app notifications in Dataverse, see Microsoft Docs: Send in-app notifications within model-driven apps.

Build & Run

    npm install
    npm run build

Features

- Notification Delivery: Send notifications to users or groups in Dataverse.
- Recipient Resolution: Fetch and display recipient names using systemuser IDs.
- Environment Variable Lookup: Secure, publisher-agnostic configuration.
- Microsoft Graph Integration: Authenticate and fetch data from Microsoft Graph using MSAL.js.
- Robust UI: Modern, responsive React components with Fluent UI styling.
- Error Handling: Graceful fallback for missing recipients and robust client-side logic.

Project Structure

    NotificationControl/
      components/
        NotificationDetails.tsx   # Displays notification details and recipient info
        NotificationForm.tsx      # Main form for creating and sending notifications
        NotificationList.tsx      # Lists all notifications and handles navigation
        NotificationForm.css      # Styles for the notification form
        NotificationList.css      # Styles for the notification list
      context/
        NotificationContext.tsx   # React context for notification state
      hooks/
        useNotifications.ts       # Custom hook for notification logic
      utils/
        api.ts                    # Core notification logic, environment variable lookup, Graph API integration
        auth.ts                   # Authentication helpers for MSAL.js
      ControlManifest.Input.xml   # PCF control manifest (input)
      ControlManifest.xml         # PCF control manifest
      index.ts                    # Entry point for the control

Component Details & Usage

NotificationList.tsx
Purpose: Displays a list of notifications and allows navigation to the detail view. Notifications are grouped by title and body for efficiency.
Props:
- notifications: Array of notification objects.
- onSelect: Handler to select a notification for detail view.
Key Features:
- Grouping: Notifications are grouped by title and body because the same notification is sent separately to each recipient in Dataverse. This prevents duplicate entries in the list view.
- Lazy Loading: Recipients are loaded on demand when you click to view details, reducing initial load time and improving performance.
- Date Filtering: Only notifications from the last 7-14 days are loaded to keep the list manageable and avoid costly aggregation queries on large datasets.
Adoption: Use to provide users with an overview of all notifications. Passes context and props to child components. Sent notifications are grouped by title and body, showing count and recipient access.

NotificationDetails.tsx
Purpose: Displays the details of a notification, including title, body, icon, type, and recipient information.
Props:
- notification: The notification object to display.
- showSystemUsers: Whether to show recipient info.
- onBack: Handler to return to the notification list.
- context: Dataverse context for API calls.
Key Functions:
- fetchNames: Fetches recipient names from Dataverse using systemuser IDs. Handles missing recipients gracefully by showing "No recipient assigned" (a sketch follows below).
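The control's exact implementation is not shown here, but a minimal sketch of what fetchNames might look like using the PCF WebAPI follows. The systemuser table and fullname column are standard Dataverse names; the function shape itself is an assumption for illustration:

    // Hypothetical sketch of fetchNames: resolve systemuser IDs to display names
    // via the PCF WebAPI. The function signature is an assumption.
    async function fetchNames(
      context: ComponentFramework.Context<IInputs>,
      userIds: string[]
    ): Promise<string[]> {
      const names: string[] = [];
      for (const id of userIds) {
        try {
          // retrieveRecord throws if the user record no longer exists or is inaccessible.
          const user = await context.webAPI.retrieveRecord("systemuser", id, "?$select=fullname");
          names.push(user.fullname ?? "No recipient assigned");
        } catch {
          // Graceful fallback for missing or undefined recipients.
          names.push("No recipient assigned");
        }
      }
      return names;
    }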
Adoption: Use in the detail view to show notification info and recipient names. Handles all edge cases for missing or undefined recipients. Detail view with graceful fallback for missing recipients.

NotificationForm.tsx
Purpose: Main form for creating and sending notifications. Centralizes environment variable and authentication logic.
Props:
- context: Dataverse context for environment variable lookup and authentication.
Key Functions: Handles user input, MSAL authentication, and notification submission.
Features:
- Multiple recipient selection options: System Users, Teams, Queues, and Outlook DLs
- Required fields: Title and Body
- Optional settings: Icon Type (Info, Success, Error, Warning) and Notification Type (Toast, Banner)
Adoption: Use as the entry point for users to create and send notifications. Integrates with Microsoft Graph for advanced scenarios. Comprehensive form with multiple recipient selection options: System Users, Teams, Queues, and Outlook DLs.

NotificationContext.tsx
Purpose: Provides global notification state and actions to components via React context.
Adoption: Use to share notification state and actions across components.

useNotifications.ts
Purpose: Custom React hook for fetching, sending, and managing notifications.
Adoption: Use in components to access notification logic and state.

api.ts
Purpose: Contains core logic for notification delivery, environment variable lookup, recipient resolution, and Microsoft Graph API integration.
Key Functions: getDLMemberObjectIds, getSystemUserIdsByObjectIds, getSystemUserNamesByIds: utility functions for recipient resolution.
Adoption: Use for all backend logic and API calls related to notifications.

auth.ts
Purpose: Handles authentication logic using MSAL.js for Microsoft Graph API access.
Adoption: Use to authenticate users and obtain tokens for Graph API calls.

How to Adopt This Control

1. Import the control into your Dataverse environment.
2. Configure environment variables for publisher-agnostic setup.
3. Grant users read privileges on the appnotifications entity. Users receiving in-app notifications must have read privilege on the appnotifications table in Dataverse to view their notifications.
4. Use NotificationForm to create and send notifications (a delivery sketch follows below).
5. Display notifications using NotificationList and NotificationDetails.
6. Integrate with Microsoft Graph by configuring Azure AD app registration and MSAL.js.
7. Customize styles using the provided CSS files.
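Under the hood, delivering an in-app notification amounts to creating a row in the appnotification table for each recipient, which is the pattern Microsoft documents for model-driven apps. The helper below is a hedged sketch of that step, not the control's actual code; the title, body, and ownerid columns are documented, but verify option-set values (icon/toast types) against your environment:

    // Hypothetical helper: deliver one in-app notification per recipient by creating
    // appnotification rows via the PCF WebAPI. The recipient is the row's owner.
    async function sendNotification(
      context: ComponentFramework.Context<IInputs>,
      recipientIds: string[],
      title: string,
      body: string
    ): Promise<void> {
      for (const userId of recipientIds) {
        await context.webAPI.createRecord("appnotification", {
          title: title,
          body: body,
          // Bind the owner lookup to the recipient's systemuser row.
          "ownerid@odata.bind": `/systemusers(${userId})`,
        });
      }
    }

Note that this per-recipient loop is also why the list view groups notifications by title and body: each send produces one row per recipient.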
Prerequisites: Environment Variables & Attaching the Control

1. Environment Variables Setup

Important: This control uses Dataverse environment variables (not local .env files) to store configuration values like Azure AD app registration details for Microsoft Graph integration.

Required Environment Variables: You must create the following environment variables in your Dataverse organization:
- InAppNotif_App_Tenant_Id - Your Azure AD Tenant ID
- InAppNotif_App_Client_Id - Your Azure AD App Registration Client ID (for Microsoft Graph/MSAL authentication)

How to Create Environment Variables in Dataverse:
1. Go to Power Apps Maker Portal (make.powerapps.com)
2. Select Solutions from the left navigation
3. Open your solution (or create a new one)
4. Click New > More > Environment variable
5. Create each variable:
   - Display Name: InAppNotif App Tenant Id (or similar)
   - Name: InAppNotif_App_Tenant_Id
   - Data Type: Text
   - Default Value: Your Azure AD Tenant ID
6. Repeat for InAppNotif_App_Client_Id

How the Control Uses These Variables: The control reads these environment variables at runtime using the Dataverse WebAPI (a fuller sketch of the lookup appears at the end of this section):

    // Example from api.ts
    const clientId = await getEnvironmentVariable("InAppNotif_App_Client_Id", context);
    const tenantId = await getEnvironmentVariable("InAppNotif_App_Tenant_Id", context);

This approach makes the control publisher-agnostic and easy to configure across different environments without modifying code.

2. Attaching the Control to a Form

This PCF control can be attached to any field on any form in Dataverse. The control supports the following field types:
- Single Line of Text
- Email
- Phone
- URL
- Multiple Lines of Text
- Whole Number

Important: The field value itself is not used by the control; it only serves as a placeholder for the control's UI. You can place this control on any entity (User, Account, Contact, custom entities, etc.) based on your requirements.

Steps:
1. Navigate to the Form Editor
   - Go to Power Apps Maker Portal (make.powerapps.com)
   - Select Tables from the left navigation
   - Choose your desired table (e.g., User, Account, Contact)
   - Click on the Forms tab
   - Select and edit the form where you want to add notifications
2. Add or Select a Field
   - Find an existing field that matches one of the supported types (Text, Email, Phone, URL, Multiple Lines, Whole Number)
   - Or add a new field to the form if needed
   - Click on the field to select it
3. Add the Custom Control
   - With the field selected, click + Component in the right panel
   - Or right-click the field and select Properties > Components tab, then click + Component
   - Search for and select NotificationControl
   - The control will appear in the Components list
4. Configure Control Properties
   - Label: Set a custom label (e.g., "Notifications")
   - Hide label: Check this to hide the field label (recommended for a cleaner UI)
   - Form field width: Set to the desired column width (default: 1 column)
   - Show component on: Select platforms (Web, Mobile, Tablet)
   - Click Done
5. Set as Primary Control (Optional)
   - In the Components section, you'll see both the default field control and NotificationControl
   - You can make NotificationControl the primary control for better visibility
   - Or keep both and configure display preferences
6. Save and Publish
   - Click Save to save your changes
   - Click Publish to make the control available to users
   - The control will now appear on the form when users access it

Example: NotificationControl configured on the Primary Email field of the User form.

Result:
- The control will appear on the form, showing all in-app notifications for the current environment
- Users can view, create, and manage notifications directly from any form where the control is placed
- The control displays with a "New Notification" button and a list of existing notifications grouped by title/body
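As promised above, here is a minimal sketch of what getEnvironmentVariable could look like. It assumes the common pattern of querying the environmentvariabledefinition table by schema name and preferring the environment-specific current value over the default; the relationship name used in the $expand is the standard one seen in Microsoft samples, but the control's actual implementation in api.ts may differ:

    // Hypothetical sketch: read a Dataverse environment variable via the PCF WebAPI.
    // Returns the current value if one exists, otherwise the definition's default value.
    async function getEnvironmentVariable(
      schemaName: string,
      context: ComponentFramework.Context<IInputs>
    ): Promise<string | null> {
      const query =
        `?$select=defaultvalue` +
        `&$filter=schemaname eq '${schemaName}'` +
        `&$expand=environmentvariabledefinition_environmentvariablevalue($select=value)`;
      const result = await context.webAPI.retrieveMultipleRecords("environmentvariabledefinition", query);
      if (result.entities.length === 0) return null;
      const definition = result.entities[0];
      const values = definition.environmentvariabledefinition_environmentvariablevalue as Array<{ value: string }>;
      // Prefer the environment-specific current value; fall back to the default value.
      return values?.[0]?.value ?? definition.defaultvalue ?? null;
    }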
Prerequisites: Outlook DL Selection & Graph API Access

1. Outlook Distribution List (DL) Selection

To enable users to select Outlook Distribution Lists (DLs) for notifications, the control integrates with the Microsoft Graph API. This requires:
- An Azure AD app registration with delegated permissions to read DLs and users.
- Environment variable(s) to store the Azure AD Client ID and other config.

Steps:
1. Register an app in Azure AD (portal.azure.com > Azure Active Directory > App registrations).
2. Add delegated permissions for Group.Read.All, User.Read, and any other required Graph scopes.
3. Store the Client ID and other config in Dataverse environment variables.
4. Configure your control to use these variables for Graph API calls.

2. App Registration & Graph API Access

- The app registration must allow access to Microsoft Graph for reading users and groups.
- The redirect URI should be set for SPA (Single Page Application) and implicit grant enabled.
- The control uses MSAL.js to authenticate and acquire tokens for the Graph API.

Security Note: By default, the control stores the authentication token in the browser (local/session storage) for convenience and a seamless user experience. If customers want to avoid storing tokens in the browser:
- They can disable persistent authentication or use a different MSAL configuration.
- This may require users to re-authenticate more frequently and could impact usability.
- Document this option in your deployment guide and provide configuration instructions.

Example MSAL config:

    const msalConfig = {
      auth: {
        clientId: clientIdFromEnvVar,
        authority: "https://login.microsoftonline.com/common",
        redirectUri: window.location.origin
      },
      cache: {
        cacheLocation: "sessionStorage", // or "localStorage"
        storeAuthStateInCookie: false
      }
    };

Redirect URI Requirement for MSAL Silent Authentication

Why You Need a Redirect URI: For MSAL.js to perform silent authentication (acquiring tokens without user interaction), your Azure AD app registration must include a valid redirect URI. In Dataverse/Power Apps, this is often set to an empty HTML web resource.

Purpose: The redirect URI is where MSAL will redirect the browser after authentication. For silent authentication, MSAL uses an iframe, and the redirect URI must be a valid, accessible page in your environment. An empty HTML web resource is commonly used because it loads quickly and does not display content.

How to Set Up:
1. Create an empty HTML web resource in Dataverse:
   - Go to Power Apps > Solutions > Add > Web Resource.
   - Name it (e.g., msal-redirect.html).
   - Content can be just: <html><head></head><body></body></html>
   - Save and publish.
2. Add the web resource URL as a redirect URI in the Azure AD app registration:
   - Go to Azure Portal > Azure Active Directory > App registrations > Your app.
   - Under Authentication, add the web resource URL (e.g., https://<org>.crm.dynamics.com/WebResources/msal-redirect.html) as a redirect URI for SPA.
   - Enable Access tokens and ID tokens under Implicit grant.

Important: When adding your web resource redirect URI as a SPA in the Azure AD app registration, ensure you enable both Access token and ID token under Implicit grant. This allows MSAL.js to acquire tokens for authentication and API access, enabling a seamless user experience and secure integration with Microsoft Graph.

Example Redirect URI: https://yourorg.crm.dynamics.com/WebResources/msal-redirect.html

Why This Matters: Without a valid redirect URI, MSAL cannot complete silent authentication, and users may be prompted to sign in more often. The empty HTML web resource acts as a safe landing page for token acquisition.
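To illustrate how the control might use this configuration, here is a hedged sketch of silent token acquisition with MSAL.js (assuming @azure/msal-browser v3), falling back to an interactive popup when silent renewal fails. The function and variable names are illustrative, not the control's actual code:

    import {
      PublicClientApplication,
      InteractionRequiredAuthError,
    } from "@azure/msal-browser";

    // Illustrative sketch: msalConfig is the configuration shown above;
    // scopes are the delegated Graph permissions the app registration grants.
    const msalInstance = new PublicClientApplication(msalConfig);
    const scopes = ["Group.Read.All", "User.Read"];

    async function getGraphToken(): Promise<string> {
      await msalInstance.initialize();
      const account = msalInstance.getAllAccounts()[0];
      if (account) {
        try {
          // Silent renewal uses the hidden iframe and the SPA redirect URI configured above.
          const result = await msalInstance.acquireTokenSilent({ scopes, account });
          return result.accessToken;
        } catch (e) {
          if (!(e instanceof InteractionRequiredAuthError)) throw e;
        }
      }
      // First sign-in, or silent renewal failed: fall back to interactive authentication.
      const result = await msalInstance.loginPopup({ scopes });
      return result.accessToken;
    }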
Using Dataverse Teams for In-App Notifications

If your organization configures on-floor teams as Dataverse Teams (with members assigned in Dataverse), supervisors can leverage this feature to send targeted in-app notifications to their team members.

How It Works
- Dataverse Team: A group entity in Dataverse that can have multiple users as members. Teams can represent departments, on-floor groups, or any logical unit.
- Supervisor Use Case: If you are a supervisor and your team is set up as a Dataverse Team, you can select the team in the notification form and send in-app notifications to all its members at once.

Benefits
- Targeted Communication: Easily notify all team members about important updates, tasks, or alerts.
- Efficient Workflow: No need to select individual users; simply select the team and send the notification.
- Integration: The PCF control fetches team members from Dataverse and ensures notifications are delivered to each member (a member-resolution sketch follows below).

Example Scenario
A supervisor wants to notify their on-floor team about a shift change. The team is configured as a Dataverse Team. The supervisor selects the team in the notification form and sends the message. All team members receive the notification instantly in their Dataverse environment.

How to Set Up
1. Ensure your teams are created and configured in Dataverse (Power Apps > Teams).
2. Assign users as members to each team.
3. Use the notification form in the PCF control to select a team and send notifications.

This approach streamlines communication and ensures all relevant users are informed efficiently. For more details, see Microsoft Docs: Manage teams in Dataverse.

Using Queue Selection for Workstream Notifications

If your organization uses queues and workstreams (common in Customer Service or Omnichannel scenarios), you can leverage queue selection to send in-app notifications to all agents associated with a specific workstream.

How It Works
- Queue: A Dataverse entity that holds work items and is associated with agents who can work on those items.
- Workstream: A collection of queues and routing rules that define how work is distributed to agents.
- Agent Use Case: Supervisors or admins can select a queue in the notification form to send in-app notifications to all agents assigned to that queue's workstream.

Benefits
- Targeted Communication: Notify all agents working on a specific queue or workstream about important updates, new assignments, or urgent issues.
- Efficient Workflow: No need to manually identify and select agents; simply select the queue and the control will resolve all associated agents.
- Integration: The PCF control fetches queue members from Dataverse and ensures notifications are delivered to each agent.

Example Scenario
A supervisor wants to notify all agents working on the "Support Queue" about a critical system update. The supervisor selects the queue in the notification form and sends the message. All agents associated with that queue's workstream receive the notification instantly.

How to Set Up
1. Ensure your queues and workstreams are configured in Dataverse (Power Apps > Queues).
2. Assign agents to queues or workstreams.
3. Use the notification form in the PCF control to select a queue and send notifications to all associated agents.

This approach is especially useful for Customer Service and Omnichannel environments where agents are organized by workstreams. For more details, see Microsoft Docs: Manage queues in Dataverse.
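As a reference for the team scenario above, the sketch below shows one way to resolve a Dataverse Team selection into individual systemuser IDs via the teammembership intersect table, which is the standard team/user association in Dataverse. The control's real resolution logic (and the analogous queue-member variant) may differ:

    // Hypothetical sketch: expand a selected Dataverse Team into recipient IDs.
    async function getTeamMemberIds(
      context: ComponentFramework.Context<IInputs>,
      teamId: string
    ): Promise<string[]> {
      const result = await context.webAPI.retrieveMultipleRecords(
        "teammembership",
        `?$select=systemuserid&$filter=teamid eq ${teamId}`
      );
      // Each intersect row links one user to the team.
      return result.entities.map((row) => row.systemuserid as string);
    }

The returned IDs can then be fed to the per-recipient delivery loop sketched earlier in this post.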
For more details, see the Microsoft Docs: Use environment variables in Dataverse, Add PCF controls to forms, Register an app with Azure AD, and MSAL.js configuration options.

Developer Notes
- All components are documented with JSDoc comments for easy understanding.
- Error handling and fallback logic are implemented for a robust user experience.
- The codebase is modular and easy to extend for new notification types or integrations.

Contributing
Fork the repository and create a pull request for improvements or bug fixes. Please document new components and functions using JSDoc comments and update the README as needed.

Testing & Production Readiness
Current Testing Status: This control has been tested for minimum positive-flow scenarios only.
Before Production Deployment:
- Perform comprehensive testing including edge cases, error scenarios, and negative flows
- Conduct security audits and vulnerability assessments
- Test with your organization's specific Dataverse configuration and data
- Validate performance under expected load conditions
- Ensure compliance with your organization's governance and security policies

License
MIT

Disclaimer
USE AT YOUR OWN RISK. This control is provided as-is without any warranties, express or implied. Neither the author nor Microsoft Corporation is responsible for any issues, damages, or losses arising from the use, deployment, or adaptation of this control.
Important:
- This control has been tested for minimum positive-flow scenarios only
- Organizations must conduct their own thorough review and testing before deployment
- Ensure the control meets your organization's security, compliance, and design standards
- No support or maintenance guarantees are provided
Responsibility: By using this control, you acknowledge that you have reviewed the code, tested it in your environment, and accept full responsibility for its deployment and operation within your organization.

Seamless blend of ILogger with ITracingService inside D365 CE Plugins
👋 Introduction

This document introduces a practical approach for integrating Application Insights logging into Dynamics 365 plugins using the ILogger interface alongside the existing ITracingService. The key advantage of this method is its minimal impact on the existing codebase, allowing developers to adopt enhanced logging capabilities without the need to refactor or modify each plugin individually. By wrapping the tracing logic, this pattern promotes maintainability and simplifies the transition to modern observability practices with a single, centralized change.

🧱 The Classic Plugin Base Pattern

Most customers already use a base class pattern for plugins. This pattern wraps the IPlugin interface and provides a LocalPluginContext object that encapsulates services like IOrganizationService, ITracingService, and IPluginExecutionContext.

✅ Benefits:
- Reduces boilerplate
- Encourages separation of concerns
- Makes plugins easier to test and maintain

🧩 Base Class with try-catch in Execute

    public abstract class PluginBase : IPlugin
    {
        protected internal class LocalPluginContext
        {
            public IOrganizationService OrganizationService { get; }
            public ITracingService TracingService { get; }
            public IPluginExecutionContext PluginExecutionContext { get; }

            public LocalPluginContext(IServiceProvider serviceProvider)
            {
                PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
                TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
                OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)))
                    .CreateOrganizationService(PluginExecutionContext.UserId);
            }
        }

        public void Execute(IServiceProvider serviceProvider)
        {
            try
            {
                var localContext = new LocalPluginContext(serviceProvider);
                localContext.TracingService.Trace("Plugin execution started.");
                ExecutePlugin(localContext);
                localContext.TracingService.Trace("Plugin execution completed.");
            }
            catch (Exception ex)
            {
                var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
                tracingService.Trace($"Unhandled exception: {ex}");
                throw new InvalidPluginExecutionException("An error occurred in the plugin.", ex);
            }
        }

        protected abstract void ExecutePlugin(LocalPluginContext localContext);
    }

📈 Seamless Application Insights Logging with a Tracing Adapter

💡 The Problem
Many customers want to adopt Application Insights for plugin telemetry, but hesitate due to the need to refactor hundreds of TracingService.Trace(...) calls across their plugin codebase.

💡 The Innovation
The following decorator pattern wraps ITracingService and forwards trace messages to both the original tracing service and an ILogger implementation (e.g., Application Insights). The only change required is in the base class constructor; no changes to existing trace calls are needed.
🧩 Tracing Adapter

    public class LoggerTracingServiceDecorator : ITracingService
    {
        private readonly ITracingService _tracingService;
        private readonly ILogger _logger;

        public LoggerTracingServiceDecorator(ITracingService tracingService, ILogger logger)
        {
            _tracingService = tracingService;
            _logger = logger;
        }

        public void Trace(string format, params object[] args)
        {
            _tracingService.Trace(format, args);
            _logger?.LogInformation(format, args);
        }
    }

🧩 Updated Base Class Constructor

    public LocalPluginContext(IServiceProvider serviceProvider)
    {
        PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        var standardTracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var logger = (ILogger)serviceProvider.GetService(typeof(ILogger));
        TracingService = new LoggerTracingServiceDecorator(standardTracingService, logger);

        OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)))
            .CreateOrganizationService(PluginExecutionContext.UserId);
    }

This enables all trace calls to be logged both to Plugin Trace Logs and Application Insights. Application Insights traces allow for easier troubleshooting using Kusto Query Language and enable alerting based on custom trace messages.

🧩 Using ILogger Inside a Plugin Without a Base Class

This approach is exactly the same as the one implemented in plugins with a base class. However, in this case, we make the wrapping assignment once at the beginning of each plugin. This is far better than modifying every line that uses tracingService.Trace.

    public class ExistingPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var originalTracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var logger = (ILogger)serviceProvider.GetService(typeof(ILogger));

            // Wrap the original tracing service with the decorator
            var tracingService = new LoggerTracingServiceDecorator(originalTracingService, logger);

            tracingService.Trace("Plugin execution started.");
            try
            {
                // Existing plugin logic
                tracingService.Trace("Processing business logic...");
            }
            catch (Exception ex)
            {
                tracingService.Trace($"Exception occurred: {ex}");
                throw;
            }
            tracingService.Trace("Plugin execution completed.");
        }
    }

📣 Final Thoughts

Logging to both ITracingService and ILogger inside LoggerTracingServiceDecorator doubles the volume of traced text and therefore the chance of WorkerCommunication errors, which are usually a symptom of tracing large amounts of text from loop constructs inside custom plugin code. If that risk applies to you, it is a good idea to stick to ILogger alone inside your decorator's Trace method.

When Your CRM Plays Hide-and-Seek: The Mystery of Missing Columns in Dynamics 365
What Happened?

Personal views across multiple organizations started showing empty columns, even though the data was there. For businesses that rely on these views for daily decisions, this isn't a normal glitch: end users can easily assume that "no data" in the view means no data in the record, and make decisions based on an incomplete picture of customer information.

The Detective Work

Our investigation uncovered the culprit: a mismatch between two behind-the-scenes players, View XML and Fetch XML. Think of them as the blueprint and the builder. When they don't talk to each other properly, your view looks fine but can't fetch the data it needs.

Why It Matters

This isn't just a tech hiccup; it's a reminder of how small cracks in system design can ripple into big business headaches. It also highlights the need for smarter automation and better error detection in enterprise platforms.

The Fix (and the Frustration)

The good news? A manual fix was identified over a year ago. The bad news? It was manual, and many impacted users didn't even know that their views were bad. We needed a better solution, and now we have one. Starting in October 2025, Microsoft rolled out a behind-the-scenes fix option. Once it's turned on, any time a user opens a corrupted view, that view will be automatically updated to display the correct data. If the user has permission to edit the view, the updates will be saved so that the view is permanently fixed.

But there's still a catch. Microsoft doesn't want to enable a process that makes data changes (in this case, the data is the view definition) without your company's permission. If your organization is running into this issue, here's a quick test to ensure that the new Microsoft fix will work for you:

1. Identify views that you know are not rendering properly and that you will be able to access. If you create a copy of a corrupted view, the copy will also have the corruption, so you can create additional views to test.
2. In your browser URL bar, append the following to the end of your Dynamics URL: &flags=FCB.DataSetViewFixMissingFetchColumns=true
   This enables a "feature flag" that fixes the views issue, but only for the user that added the flag to the URL, and only until their session expires. Other users will not see the update.
   If your URL looks like this:
   https://org7909d641.crm.dynamics.com/main.aspx?appid=12345678-1234-1234-1234-123456789012
   ...make it look like this and hit Enter:
   https://org7909d641.crm.dynamics.com/main.aspx?appid=12345678-1234-1234-1234-123456789012&flags=FCB.DataSetViewFixMissingFetchColumns=true
3. Wait for Dynamics to reload.
4. Test opening the corrupted views. You'll see that they're magically working as expected.
5. When you are satisfied that this is working as desired, open a case with Microsoft Support and request that your organization be enabled for the DataSetViewFixMissingFetchColumns FCB. This will enable the fix for all users across your Dynamics organization.

Takeaway: If your CRM starts acting like a magician hiding data, don't panic. The data is still there; you just need to coax it back with the right fix. And now there's an option to make sure that this issue goes away for good.

Token Cache Service Inside D365 CE Plugin Base
Introduction

This document explores a performance optimization technique for Dynamics 365 plugins: centralized token caching using static variables in a plugin base class. Since plugin instances are recreated for every request, holding state in instance variables is not feasible. However, static variables (such as a ConcurrentDictionary) defined in a base class persist across executions, enabling efficient reuse of authentication tokens. This approach avoids tens or even hundreds of thousands of daily calls to identity management endpoints, which can overload those services. Additionally, plugin execution time can increase by 500 milliseconds to one second per request if authentication is performed repeatedly. If in-memory caching fails or is not viable, storing tokens in Dataverse and retrieving them as needed may be a better fallback than re-authenticating each time.

🔐 TokenService Implementation for Plugin Token Caching

To optimize authentication performance in Dynamics 365 plugins, a custom TokenService is implemented and injected into the plugin base architecture. This service enables centralized token caching using a static, read-only ConcurrentDictionary, which persists across plugin executions.

🧱 Design Overview

The TokenService exposes two methods:
- GetAccessToken(Guid key) – retrieves a cached token if it's still valid.
- SetAccessToken(Guid key, string token, DateTime expiry) – stores a new token with its expiry.

The core of the service is a static dictionary:

    private static readonly ConcurrentDictionary<Guid, CachedAccessToken> TokenCache = new();

This dictionary is shared across plugin executions because it's defined in the base class. This is crucial since plugin instances are recreated per request and cannot hold instance-level state.

🧩 Integration into LocalPluginContext

The TokenService is injected into the well-known LocalPluginContext alongside other services like IOrganizationService, ITracingService, and IPluginExecutionContext. This makes the token service available to all child plugins via the context object.

    public ITokenService TokenService { get; }

    public LocalPluginContext(IServiceProvider serviceProvider)
    {
        // Existing service setup...
        TokenService = new TokenService(TracingService);
    }

🔁 Token Retrieval Logic

GetAccessToken checks if a token exists and whether it's about to expire:

    public string GetAccessToken(Guid key)
    {
        if (TokenCache.TryGetValue(key, out var cachedToken))
        {
            var timeRemaining = (cachedToken.Expiry - DateTime.UtcNow).TotalMinutes;
            if (timeRemaining > 2)
            {
                _tracingService.Trace($"Using cached token. Expires in {timeRemaining} minutes.");
                return cachedToken.Token;
            }
        }
        return null;
    }

If the token is expired or missing, it returns null. It does not fetch a new token itself.

🔄 Token Refresh Responsibility

The responsibility to fetch a new token lies with the child plugin, because:
- It has access to secure configuration values (e.g., client ID, secret, tenant).
- It knows the context of the external service being called.

Once the child plugin fetches a new token, it calls:

    TokenService.SetAccessToken(key, token, expiry);

This updates the shared cache for future executions.
🧱 Classic Plugin Base Pattern (Preserved)

    public abstract class PluginBase : IPlugin
    {
        protected internal class LocalPluginContext
        {
            public IOrganizationService OrganizationService { get; }
            public ITracingService TracingService { get; }
            public IPluginExecutionContext PluginExecutionContext { get; }

            public LocalPluginContext(IServiceProvider serviceProvider)
            {
                PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
                TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
                OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)))
                    .CreateOrganizationService(PluginExecutionContext.UserId);
            }
        }

        public void Execute(IServiceProvider serviceProvider)
        {
            try
            {
                var localContext = new LocalPluginContext(serviceProvider);
                localContext.TracingService.Trace("Plugin execution started.");
                ExecutePlugin(localContext);
                localContext.TracingService.Trace("Plugin execution completed.");
            }
            catch (Exception ex)
            {
                var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
                tracingService.Trace($"Unhandled exception: {ex}");
                throw new InvalidPluginExecutionException("An error occurred in the plugin.", ex);
            }
        }

        protected abstract void ExecutePlugin(LocalPluginContext localContext);
    }

🧩 TokenService Implementation

    public interface ITokenService
    {
        string GetAccessToken(Guid key);
        void SetAccessToken(Guid key, string token, DateTime expiry);
    }

    public sealed class TokenService : ITokenService
    {
        private readonly ITracingService _tracingService;
        private static readonly ConcurrentDictionary<Guid, CachedAccessToken> TokenCache = new();

        public TokenService(ITracingService tracingService)
        {
            _tracingService = tracingService;
        }

        public string GetAccessToken(Guid key)
        {
            if (TokenCache.TryGetValue(key, out var cachedToken))
            {
                var timeRemaining = (cachedToken.Expiry - DateTime.UtcNow).TotalMinutes;
                if (timeRemaining > 2)
                {
                    _tracingService.Trace($"Using cached token. Expires in {timeRemaining} minutes.");
                    return cachedToken.Token;
                }
            }
            return null;
        }

        public void SetAccessToken(Guid key, string token, DateTime expiry)
        {
            TokenCache[key] = new CachedAccessToken(token, expiry);
            _tracingService.Trace($"Token stored for key {key} with expiry at {expiry}.");
        }

        private class CachedAccessToken
        {
            public string Token { get; }
            public DateTime Expiry { get; }

            public CachedAccessToken(string token, DateTime expiry)
            {
                Token = token;
                Expiry = expiry;
            }
        }
    }

🧩 Add TokenService to LocalPluginContext

    public ITokenService TokenService { get; }

    public LocalPluginContext(IServiceProvider serviceProvider)
    {
        // ... existing setup ...
        TokenService = new TokenService(TracingService);
    }

🧪 Full Child Plugin Example with Secure Config and Token Usage

    public class ExternalApiPlugin : PluginBase
    {
        private readonly SecureSettings _settings;

        public ExternalApiPlugin(string unsecureConfig, string secureConfig)
        {
            // Parse secure config into settings object
            _settings = JsonConvert.DeserializeObject<SecureSettings>(secureConfig);
        }

        protected override void ExecutePlugin(LocalPluginContext localContext)
        {
            localContext.TracingService.Trace("ExternalApiPlugin execution started.");

            // Get token
            string token = AccessTokenGenerator(_settings, localContext);

            // Use token to call external API
            CallExternalService(token, localContext.TracingService);

            localContext.TracingService.Trace("ExternalApiPlugin execution completed.");
        }

        private string AccessTokenGenerator(SecureSettings settings, LocalPluginContext localContext)
        {
            var token = localContext.TokenService.GetAccessToken(settings.TokenKeyGuid);
            if (!string.IsNullOrEmpty(token))
            {
                var payload = DecodeJwtPayload(token);
                var expiryUnix = long.Parse(payload["exp"]);
                var expiryDate = DateTimeOffset.FromUnixTimeSeconds(expiryUnix).UtcDateTime;
                if ((expiryDate - DateTime.UtcNow).TotalMinutes > 2)
                {
                    return token;
                }
            }

            // Fetch new token
            var newToken = FetchTokenFromOAuth(settings);
            var newPayload = DecodeJwtPayload(newToken);
            var newExpiry = DateTimeOffset.FromUnixTimeSeconds(long.Parse(newPayload["exp"])).UtcDateTime;
            localContext.TokenService.SetAccessToken(settings.TokenKeyGuid, newToken, newExpiry);
            return newToken;
        }

        private Dictionary<string, string> DecodeJwtPayload(string jwt)
        {
            var parts = jwt.Split('.');
            // JWTs use unpadded base64url: map URL-safe characters back before decoding.
            var payload = parts[1].Replace('-', '+').Replace('_', '/');
            // Pad to a multiple of 4 for Base64 decoding.
            payload = payload.PadRight(payload.Length + (4 - payload.Length % 4) % 4, '=');
            var bytes = Convert.FromBase64String(payload);
            var json = Encoding.UTF8.GetString(bytes);
            return JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
        }

        private string FetchTokenFromOAuth(SecureSettings settings)
        {
            // Simulated token fetch logic
            return "eyJhbGciOi..."; // JWT token
        }

        private void CallExternalService(string token, ITracingService tracingService)
        {
            tracingService.Trace("Calling external service with token...");
            // Simulated API call
        }
    }

🧩 Wrapping It All Together

By combining these patterns into the base class–child class structure, we get a plugin framework that is:
✅ Familiar and extensible
⚡️ Optimized for performance with token caching

🗣️ Final Thoughts

These patterns weren't invented in a vacuum; they were shaped by real customer needs and constraints. Whether you're modernizing legacy plugins or building new ones, I hope these ideas help you deliver more robust, scalable, and supportable solutions.

Demystifying SessionTrackingId and Request Correlation in D365 CE Integrations
Introduction

As a Cloud Solution Architect working with Dynamics 365 Customer Engagement (D365 CE) customers, I often see organizations building high-volume integrations using the ServiceClient SDK for .NET Core (or CrmServiceClient for the classic .NET Framework). While these SDKs make it easy to connect and interact with Dataverse, tracking and diagnosing issues (especially in production) can be challenging. One of the least-known but most powerful features for support and diagnostics is the SessionTrackingId property on ServiceClient. In this post, I'll explain what it is, how to use it, and why it's invaluable for working with Microsoft support and troubleshooting performance issues.

The Problem: Diagnosing Integration Performance

Many customers log the time before and after each Dataverse call, and if they see high latency, they often suspect the Dataverse service itself. However, in large-scale, high-volume integrations, the real culprit is frequently client-side resource contention or SNAT port exhaustion. This can lead to slow connection establishment, WCF binding timeouts, and misleadingly high client-side timings, even when the server-side execution is fast.

The Solution: SessionTrackingId and Request Correlation

With ServiceClient, you cannot manually set HTTP headers. You rely on the SessionTrackingId property, which internally maps to the x-ms-client-session-id header. Microsoft support can use this ID to trace your request end-to-end, from your client to the Dataverse backend. When using HttpClient, you can achieve similar tracking by using the User-Agent HTTP header to append a GUID, which helps Microsoft support correlate requests in telemetry for diagnostics and troubleshooting.

Sample: Using SessionTrackingId and Logging Request Timing

    using Microsoft.PowerPlatform.Dataverse.Client;
    using Microsoft.Extensions.Configuration;
    using System;
    using System.IO;
    using System.Diagnostics;
    using Microsoft.Crm.Sdk.Messages;

    namespace D365UserAgentDemo
    {
        class Program
        {
            static void Main(string[] args)
            {
                var config = new ConfigurationBuilder()
                    .SetBasePath(Directory.GetCurrentDirectory())
                    .AddJsonFile("appsettings.json", optional: false)
                    .Build();

                var d365Section = config.GetSection("D365");
                string appId = d365Section["AppId"];
                string clientSecret = d365Section["ClientSecret"];
                string orgUrl = d365Section["OrgUrl"];
                string tenantId = d365Section["TenantId"];

                string connectionString = $"AuthType=ClientSecret;Url={orgUrl};ClientId={appId};ClientSecret={clientSecret};TenantId={tenantId};";

                // 1. Call with ServiceClient
                var sessionTrackingId1 = Guid.NewGuid();
                Console.WriteLine($"\n[ServiceClient] Using SessionTrackingId: {sessionTrackingId1}");

                string? accessToken = null;
                using (var svc = new ServiceClient(connectionString))
                {
                    svc.SessionTrackingId = sessionTrackingId1;
                    try
                    {
                        var sw = Stopwatch.StartNew();
                        var response = svc.Execute(new WhoAmIRequest());
                        sw.Stop();

                        var userId = ((WhoAmIResponse)response).UserId;
                        Console.WriteLine($"[ServiceClient] WhoAmI UserId: {userId}");
                        Console.WriteLine($"[ServiceClient] Time taken: {sw.ElapsedMilliseconds} ms");
                        Console.WriteLine($"[ServiceClient] TrackingId: {sessionTrackingId1}");

                        accessToken = svc.CurrentAccessToken;
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine($"[ServiceClient] Error: {ex.Message}");
                    }
                }
                // 2. Call with HttpClient
                var trackingId2 = Guid.NewGuid();
                Console.WriteLine($"\n[HttpClient] Using User-Agent with embedded tracking ID: {trackingId2}");

                if (!string.IsNullOrWhiteSpace(accessToken))
                {
                    using (var httpClient = new System.Net.Http.HttpClient())
                    {
                        httpClient.DefaultRequestHeaders.Authorization =
                            new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);
                        httpClient.DefaultRequestHeaders.UserAgent.Clear();
                        httpClient.DefaultRequestHeaders.UserAgent.ParseAdd($"D365UserAgentDemo/{trackingId2}");
                        httpClient.DefaultRequestHeaders.Remove("x-ms-client-request-id");
                        httpClient.DefaultRequestHeaders.Add("x-ms-client-request-id", trackingId2.ToString());

                        var apiBase = orgUrl.TrimEnd('/');
                        var apiUrl = $"{apiBase}/api/data/v9.2/WhoAmI";
                        try
                        {
                            var sw = Stopwatch.StartNew();
                            var httpResponse = httpClient.GetAsync(apiUrl).GetAwaiter().GetResult();
                            sw.Stop();

                            var content = httpResponse.Content.ReadAsStringAsync().GetAwaiter().GetResult();
                            Console.WriteLine($"[HttpClient] WhoAmI response: {content}");
                            Console.WriteLine($"[HttpClient] Time taken: {sw.ElapsedMilliseconds} ms");
                            Console.WriteLine($"[HttpClient] TrackingId (User-Agent): {trackingId2}");
                        }
                        catch (Exception ex)
                        {
                            Console.WriteLine($"[HttpClient] Error: {ex.Message}");
                        }
                    }
                }
                else
                {
                    Console.WriteLine("No access token available from ServiceClient for HttpClient call.");
                }
            }
        }
    }

Why This Matters

- Supportability: When you open a support ticket, provide the SessionTrackingId (for the SDK). Microsoft support can trace your request through the entire stack.
- Root Cause Analysis: If you see high client-side latency, but Microsoft support shows fast server-side execution for your tracking ID, the issue is likely on the client (resource contention, SNAT exhaustion, etc.).

Bonus: Classic .NET Framework

If you're using the classic .NET Framework, use CrmServiceClient instead. The pattern is similar, and the property is also called SessionTrackingId.

Conclusion

The SessionTrackingId and embedded tracking IDs in User-Agent headers are powerful but underutilized features for D365 CE integrations. They enable precise request tracking, better support experiences, and faster root cause analysis. Although SessionTrackingId is documented as a session-level identifier, in practice it is best used to track individual requests, especially when diagnosing performance issues. Since ServiceClient does not expose a separate property for request-level correlation, assigning a unique SessionTrackingId per request is the most effective approach.

Takeaway

When client-side telemetry shows significantly higher latency than Power Platform telemetry, it's a strong indicator that your integration needs horizontal scaling. Logging request-level timing with a unique tracking ID helps pinpoint these bottlenecks and guides you toward the right architectural decisions.

🚀 Export D365 CE Dataverse Org Data to Cosmos DB via the Office365 Management Activity API
📘 Preface

This post demonstrates one method to export Dynamics 365 Customer Engagement (CE) Dataverse organization data using the Office 365 Management Activity API and Azure Functions. Customers can build a custom lake-house architecture on top of this feed, enabling advanced analytics, archiving, or ML/AI scenarios.

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference

🧭 When to Use This Custom Integration

While Microsoft offers powerful native integrations like Dataverse Synapse Link and Microsoft Fabric, this custom solution is seen in the field and remains relevant in the following scenarios:
- Third-party observability and security tools already use this approach. Solutions such as Splunk and other enterprise-grade platforms commonly implement integrations based on the Office 365 Management Activity API to ingest tenant-wide audit data. This makes it easier for customers to align with existing observability pipelines or security frameworks.
- Customers opt out of Synapse Link or Fabric. Whether due to architectural preferences, licensing constraints, or specific compliance requirements, some customers choose not to adopt Microsoft's native integrations. The Office Management API offers a viable alternative for building custom data export and monitoring solutions tailored to their needs.

🎯 Why Use the Office 365 Management Activity API?

- Tenant-wide Data Capture: Captures audit logs and activity data across all Dataverse orgs in a tenant.
- Integration Flexibility: Enables export to Cosmos DB, cold storage, or other platforms for analytics, compliance, or ML/AI.
- Third-party Compatibility: Many enterprise tools use similar mechanisms to ingest and archive activity data.

🏗️ Architecture Overview

- Azure Function App (.NET Isolated): Built as a webhook; processes notifications, fetches audit content, and stores filtered events in Cosmos DB.
- Cosmos DB: Stores audit events for further analysis or archiving.
- Application Insights: Captures logs and diagnostics for troubleshooting.

🛠️ Step-by-Step Implementation

https://learn.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis#build-your-app

1. Prerequisites
- Azure subscription
- Dynamics 365 CE environment (Dataverse)
- Azure Cosmos DB account (SQL API)
- Office 365 tenant admin rights
- Auditing enabled in the Dataverse org

2. Register an Azure AD App
- Go to Azure Portal > Azure Active Directory > App registrations > New registration
- Note the Application (client) ID and Directory (tenant) ID
- Create a client secret
- Grant API permissions: ActivityFeed.Read, ActivityFeed.ReadDlp, ServiceHealth.Read
- Grant admin consent

3. Set Up Cosmos DB
- Create a Cosmos DB account (SQL API)
- Create:
  - Database: officewebhook
  - Container: dynamicsevents
  - Partition key: /tenantId
- Note the endpoint URI and primary key

4. Create the Azure Function App
- Use Visual Studio or VS Code
- Create a new Azure Functions project (.NET 8 Isolated Worker)
- Add NuGet packages: Microsoft.Azure.Functions.Worker, Microsoft.Azure.Cosmos, Newtonsoft.Json
- Function Logic:
  - Webhook validation
  - Notification processing
  - Audit content fetching
  - Event filtering
  - Storage in Cosmos DB
5. Configure Environment Variables

    {
      "OfficeApiTenantId": "<your-tenant-id>",
      "OfficeApiClientId": "<your-client-id>",
      "OfficeApiClientSecret": "<your-client-secret>",
      "CrmOrganizationUniqueName": "<your-org-name>",
      "CosmosDbEndpoint": "<your-cosmos-endpoint>",
      "CosmosDbKey": "<your-cosmos-key>",
      "CosmosDbDatabaseId": "officewebhook",
      "CosmosDbContainerId": "dynamicsevents",
      "EntityOperationsFilter": {
        "incident": ["create", "update"],
        "account": ["create"]
      }
    }

6. Deploy the Function App
- Build and publish using Azure Functions Core Tools or Visual Studio
- Restart the Function App from the Azure Portal
- Monitor logs via Application Insights

🔔 How to Subscribe to the Office 365 Management Activity API for Audit Notifications

To receive audit notifications, you must first subscribe to the Office 365 Management Activity API. This is a two-step process:

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#start-a-subscription

1. Fetch an OAuth2 Token

Authenticate using your Azure AD app credentials to get a bearer token:

https://learn.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis

    # Define your Azure AD app credentials
    $tenantId = "<your-tenant-id>"
    $clientId = "<your-client-id>"
    $clientSecret = "<your-client-secret>"

    # Prepare the request body for token fetch
    $body = @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $clientSecret
        scope         = "https://manage.office.com/.default"
    }

    # Fetch the OAuth2 token
    $tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body
    $token = $tokenResponse.access_token

2. Subscribe to the Content Type

Use the token to subscribe to the desired content type (e.g., Audit.General):

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#working-with-the-office-365-management-activity-api

    $contentType = "Audit.General"
    $headers = @{
        Authorization  = "Bearer $token"
        "Content-Type" = "application/json"
    }
    $uri = "https://manage.office.com/api/v1.0/$tenantId/activity/feed/subscriptions/start?contentType=$contentType"
    $response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers
    $response

⚙️ How the Azure Function Works

🔸 Trigger
The Azure Function is triggered by notifications from the Office 365 Management Activity API. These notifications include audit events across your entire Azure tenant, not just Dynamics 365.

🔸 Filtering Logic
Each notification is evaluated against your business rules:
- Organization match
- Entity type (e.g., incident, account)
- Operation type (e.g., create, update)

These filters are defined in the EntityOperationsFilter environment variable:

    {
      "incident": ["create", "update"],
      "account": ["create"]
    }

🔸 Processing
If the event matches your filters, the function fetches the full audit data and stores it in Cosmos DB. If not, the event is ignored.

🔍 Code Explanation: The Run Method

1. Webhook Validation

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#webhook-validation

    string validationToken = query["validationToken"];
    if (!string.IsNullOrEmpty(validationToken))
    {
        await response.WriteStringAsync(validationToken);
        response.StatusCode = HttpStatusCode.OK;
        return response;
    }
2. Notification Handling

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#receiving-notifications

    var notifications = JsonConvert.DeserializeObject<dynamic[]>(requestBody);
    foreach (var notification in notifications)
    {
        if (notification.contentType == "Audit.General" && notification.contentUri != null)
        {
            // Process each notification
        }
    }

3. Bearer Token Fetch

    string bearerToken = await GetBearerTokenAsync(log);
    if (string.IsNullOrEmpty(bearerToken)) continue;

4. Fetch Audit Content

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#retrieve-content

    var requestMsg = new HttpRequestMessage(HttpMethod.Get, contentUri);
    requestMsg.Headers.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
    var result = await httpClient.SendAsync(requestMsg);
    if (!result.IsSuccessStatusCode) continue;
    var auditContentJson = await result.Content.ReadAsStringAsync();

5. Deserialize and Filter Audit Records

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#dynamics-365-schema

    var auditRecords = JsonConvert.DeserializeObject<dynamic[]>(auditContentJson);
    foreach (var eventData in auditRecords)
    {
        string orgName = eventData.CrmOrganizationUniqueName ?? "";
        string workload = eventData.Workload ?? "";
        string entityName = eventData.EntityName ?? "";
        string operation = eventData.Message ?? "";

        if (workload != "Dynamics 365" && workload != "CRM" && workload != "Power Platform") continue;
        if (!entityOpsFilter.ContainsKey(entityName)) continue;
        if (!entityOpsFilter[entityName].Contains(operation)) continue;

        // Store in Cosmos DB
    }

6. Store in Cosmos DB

    var cosmosDoc = new
    {
        id = Guid.NewGuid().ToString(),
        tenantId = notification.tenantId,
        raw = eventData
    };
    var partitionKey = (string)notification.tenantId;
    var resp = await cosmosContainer.CreateItemAsync(cosmosDoc, new PartitionKey(partitionKey));

7. Logging and Error Handling

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#errors

    try
    {
        // ... CreateItemAsync call from step 6 ...
        log.LogInformation($"Stored notification in Cosmos DB for contentUri: {notification.contentUri}, DocumentId: {cosmosDoc.id}");
    }
    catch (Exception dbEx)
    {
        log.LogError($"Error storing notification in Cosmos DB: {dbEx.Message}");
    }

🧠 Conclusion

This solution provides a robust, extensible pattern for exporting Dynamics 365 CE Dataverse org data to Cosmos DB using the Office 365 Management Activity API. Solution architects can use this as a reference for building or evaluating similar integrations, especially when working with third-party archiving or analytics solutions.

🚀 Scaling Dynamics 365 CRM Integrations in Azure: The Right Way to Use the SDK ServiceClient
This blog explores common pitfalls and presents a scalable pattern using the .Clone() method to ensure thread safety, avoid redundant authentication, and prevent SNAT port exhaustion.

🗺️ Connection Factory with Optimized Configuration

The first step to building a scalable integration is to configure your ServiceClient properly. Here's how to set up a connection factory that includes all the necessary performance optimizations:

    public static class CrmClientFactory
    {
        private static readonly ServiceClient _baseClient;

        static CrmClientFactory()
        {
            ThreadPool.SetMinThreads(100, 100);                 // Faster thread ramp-up
            ServicePointManager.DefaultConnectionLimit = 65000; // Avoid connection bottlenecks
            ServicePointManager.Expect100Continue = false;      // Reduce HTTP latency
            ServicePointManager.UseNagleAlgorithm = false;      // Improve responsiveness

            _baseClient = new ServiceClient(connectionString);
            _baseClient.EnableAffinityCookie = false;           // Distribute load across Dataverse web servers
        }

        public static ServiceClient GetClient() => _baseClient.Clone();
    }

❌ Anti-Pattern: One Static Client for All Operations

A common anti-pattern is to create a single static instance of ServiceClient and reuse it across all operations:

    public static class CrmClientFactory
    {
        private static readonly ServiceClient _client = new ServiceClient(connectionString);
        public static ServiceClient GetClient() => _client;
    }

This struggles under load due to thread contention, throttling, and unpredictable behavior.

⚠️ Misleading Fix: New Client Per Request

To avoid thread contention, some developers create a new ServiceClient per request. However, the code below does not truly create a separate connection unless the RequireNewInstance=True connection string parameter or the useUniqueInstance:true constructor parameter is used. These intricate details are often missed, causing the same connection to be shared across threads with high lock times, compounding the overall slowness.

    public async Task Run(HttpRequest req)
    {
        var client = new ServiceClient(connectionString);
        // Use client here
    }

Even with the above flags, there is a risk of auth failures and SNAT exhaustion in high-throughput service integration scenarios, due to repeated OAuth authentication every time a ServiceClient instance is created with the constructor.

✅ Best Practice: Clone Once, Reuse Per Request

The best practice is to create a single authenticated ServiceClient and use its .Clone() method to generate lightweight, thread-safe copies for each request:

    public static class CrmClientFactory
    {
        private static readonly ServiceClient _baseClient = new ServiceClient(connectionString);
        public static ServiceClient GetClient() => _baseClient.Clone();
    }

Then, in your Azure Function or App Service operation:

❗ Avoid calling the factory again inside helper methods. Clone once and pass the client down the call stack.

    public async Task HandleRequest()
    {
        var client = CrmClientFactory.GetClient(); // Clone once per request
        await DoSomething1(client);
        await DoSomething2(client);
    }

    public async Task DoSomething1(ServiceClient client)
    {
        await client.CreateAsync(entity); // Reuse the passed-down client as is; do not clone again here
    }

🧵 Parallel Processing with Batching

When working with large datasets, combining parallelism with batching using ExecuteMultiple can significantly improve throughput, if done correctly.

🔄 Common Mistake: Dynamic Batching Inside Parallel Loops

Many implementations dynamically batch records inside Parallel.ForEach, assuming consistent batch sizes.
But in practice, this leads to:
- Inconsistent batch sizes (1 to 100+)
- Unpredictable performance
- Difficult-to-analyze telemetry

✅ Fix: Chunk Before You Batch

    public static List<List<Entity>> ChunkRecords(List<Entity> records, int chunkSize)
    {
        return records
            .Select((record, index) => new { record, index })
            .GroupBy(x => x.index / chunkSize)
            .Select(g => g.Select(x => x.record).ToList())
            .ToList();
    }

    public static void ProcessBatches(List<Entity> records, ServiceClient serviceClient, int batchSize = 100, int maxParallelism = 5)
    {
        var batches = ChunkRecords(records, batchSize);

        Parallel.ForEach(batches, new ParallelOptions { MaxDegreeOfParallelism = maxParallelism }, batch =>
        {
            using var service = serviceClient.Clone(); // Clone once per thread

            var executeMultiple = new ExecuteMultipleRequest
            {
                Requests = new OrganizationRequestCollection(),
                Settings = new ExecuteMultipleSettings
                {
                    ContinueOnError = true,
                    ReturnResponses = false
                }
            };

            foreach (var record in batch)
            {
                executeMultiple.Requests.Add(new CreateRequest { Target = record });
            }

            service.Execute(executeMultiple);
        });
    }

🚫 Avoiding Throttling: Plan, Don't Just Retry

While it's possible to implement retry logic for HTTP 429 responses using the Retry-After header, the best approach is to avoid throttling altogether.

✅ Best Practices
- Control DOP and batch size: Keep them conservative and telemetry-driven.
- Use alternate app registrations: Distribute load across identities, but do not overload the Dataverse org.
- Avoid triggering sync plugins or real-time workflows: These amplify load.
- Address long-running queries: Optimize operations with Microsoft support's help before scaling.
- Relax time constraints: There's no need to finish a job in 1 hour if it can be done safely in 3.

🌐 When to Consider Horizontal Scaling

Even with all the right optimizations, your integration may still hit limits under the HTTP stack, such as:
- WCF binding timeouts
- SNAT port exhaustion
- Slowness not explained by Dataverse telemetry

In these cases, horizontal scaling becomes essential.
- App Services: Easily scale out using autoscale rules.
- Function Apps (service model): Scale well with HTTP or Service Bus triggers.
- Scheduled Functions: Require deduplication logic to avoid duplicate processing.
- On-Premises VM: When D365 SDK-based integrations are hosted on VM infrastructure, they need horizontal scaling by adding servers.

🧠 Final Thoughts

Scaling CRM integrations in Azure is about resilience, observability, and control. Follow these patterns:
- Clone once per thread
- Pre-chunk batches
- Tune with telemetry evidence
- Avoid overload when you can
- Scale horizontally when needed, but wisely

Build integrations that are fast, reliable, and future-proof.

Transform business process with agentic business applications (Asia)
Transform business process with agentic business applications (Asia)

October 1-2, 2025 | 7:30 AM – 10:30 AM ASIA (IST)

Overview

This bootcamp is designed to equip you with the AI skills and clarity needed to harness the power of Copilot Studio and AI Agents in Dynamics 365. Participants will learn what AI agents are, why they matter in Dynamics 365, and how to design and build agents that address customer needs today while preparing for the AI-native ERP and CRM future.

Building from the fundamentals of Copilot Studio and its integration across Dynamics 365 applications, we’ll explore how first-party agents are built, why Microsoft created them, and where their current limitations open opportunities for partner-led innovation. We’ll then expand into third-party agent design and extensibility, showing how partners can create differentiated solutions that deliver unique value. Finally, we will provide a forward-looking perspective on Microsoft’s strategy with Model Context Protocol (MCP), Agent-to-Agent (A2A) orchestration, and AI-native business applications - inspiring partners to create industry-specific agents that solve real customer pain points and showcase their expertise.

Join this virtual event to accelerate your technical readiness journey on AI agents in Dynamics 365. Register today and mark your calendars to gain valuable insights from our Microsoft SMEs. Don’t miss this opportunity to learn about the latest developments and elevate your partnership with Microsoft.

Event prerequisites

Participants should have some familiarity and work experience with the associated solutions. Additionally, we suggest having knowledge of the relevant role-based certification content (although passing the exam is not mandatory). You can find free self-paced learning content and technical documentation related to the workshop topics at Microsoft Learn.

Earn a digital badge

Attendees who participate in the live sessions of this workshop will earn a digital badge. These badges, which serve as a testament to your engagement and learning, can be conveniently accessed and shared through the Credly digital platform. Please note that accessing on-demand content does not meet the criteria for earning a badge.

REGISTER TODAY!
Transform business process with agentic business applications (Americas)

September 30 - October 1, 2025 | 7:00 AM – 10:00 AM AMER (PDT)

Overview

This bootcamp is designed to equip you with the AI skills and clarity needed to harness the power of Copilot Studio and AI Agents in Dynamics 365. Participants will learn what AI agents are, why they matter in Dynamics 365, and how to design and build agents that address customer needs today while preparing for the AI-native ERP and CRM future.

Building from the fundamentals of Copilot Studio and its integration across Dynamics 365 applications, we’ll explore how first-party agents are built, why Microsoft created them, and where their current limitations open opportunities for partner-led innovation. We’ll then expand into third-party agent design and extensibility, showing how partners can create differentiated solutions that deliver unique value. Finally, we will provide a forward-looking perspective on Microsoft’s strategy with Model Context Protocol (MCP), Agent-to-Agent (A2A) orchestration, and AI-native business applications - inspiring partners to create industry-specific agents that solve real customer pain points and showcase their expertise.

Join this virtual event to accelerate your technical readiness journey on AI agents in Dynamics 365. Register today and mark your calendars to gain valuable insights from our Microsoft SMEs. Don’t miss this opportunity to learn about the latest developments and elevate your partnership with Microsoft.

Event prerequisites

Participants should have some familiarity and work experience with the associated solutions. Additionally, we suggest having knowledge of the relevant role-based certification content (although passing the exam is not mandatory). You can find free self-paced learning content and technical documentation related to the workshop topics at Microsoft Learn.

Earn a digital badge

Attendees who participate in the live sessions of this workshop will earn a digital badge. These badges, which serve as a testament to your engagement and learning, can be conveniently accessed and shared through the Credly digital platform. Please note that accessing on-demand content does not meet the criteria for earning a badge.

REGISTER TODAY!
Drive demand for your offers with solution area campaigns in a box

Take your marketing campaigns further with campaigns in a box (CiaBs), collections of partner-ready, high-quality marketing assets designed to deepen customer engagement, simplify your marketing efforts, and grow revenue. Microsoft offers both new and refreshed campaigns for the following solution areas: Data & AI (Azure), Modern Work, Business Applications, Digital & App Innovation (Azure), Infrastructure (Azure), and Security.

Check out the latest CiaBs below and get started today by visiting the Partner Marketing Center, where you’ll find resources such as step-by-step execution guides, customizable nurture tactics, and assets including presentations, e-books, infographics, and more.

AI transformation

Generate interest in AI adoption among your customers. As AI technology grabs headlines and captures imaginations, use this campaign to illustrate your audience’s unique opportunity to harness the power of AI to deliver value faster. Learn more about the campaign:

AI Transformation (formerly Era of AI): Show audiences how to take advantage of the potential of AI to drive business value and showcase the usefulness of Microsoft AI solutions delivered and tailored by your organization.

Data & AI (Azure)

Our Data & AI campaigns demonstrate how your customers can win customers with AI-enabled differentiation. Show how they can transform their businesses with generative AI, a unified data estate, and solutions like Microsoft Fabric, Microsoft Power BI, and Azure Databricks. Campaigns include:

Unify your intelligent data - Banking: Help your banking customers understand how you can help them break down data silos, meet compliance demands, and deliver on rising customer expectations.
Innovate with the Azure AI platform: Help your customers understand the potential of generative AI solutions to differentiate themselves in the market—and inspire them to build GenAI solutions with the Azure AI platform.
Unify your intelligent data and analytics platform - ENT: Show enterprise audiences how unifying data and analytics on an open foundation can help streamline data transformation and business intelligence.
Unify your intelligent data and analytics platform - SMB: Create awareness and urgency for SMBs to adopt Microsoft Fabric as the AI-powered, unified data platform that will suit their analytics needs.

Modern Work

Our Modern Work campaigns help current and potential customers understand how they can effectively transform their businesses with AI capabilities. Campaigns include:

Connect and empower your frontline workers: Empower your customers' frontline workers with smart, AI-enhanced workflows built on Microsoft Teams and Microsoft 365 Copilot. Use this campaign to show your customers how they can make their frontline workers feel more connected, leading to improved productivity and efficiency.
Microsoft 365 Copilot SMB: Increase your audience's understanding of the larger potential of Microsoft 365 Copilot and how AI capabilities can accelerate growth and transform operations.
Smart workplace with Teams: Use this campaign to show your customers how to use AI to unlock smarter communication and collaboration with Microsoft Teams and Microsoft 365 Copilot. This campaign demonstrates to customers how you can help them seamlessly integrate meetings, calls, chat, and collaboration to break down silos, gain deeper insights, and focus on the work that matters.
Cloud endpoints: Help customers bring virtualized applications to the cloud by providing secure, AI-powered productivity and development on any device with Microsoft Intune Suite and Windows in the cloud solutions.

Business Applications

Nurture interest with audiences ready to modernize and transform their business operations with these BizApps go-to-market resources. Campaigns include:

AI-powered customer service: Highlight how AI-powered solutions like Microsoft Dynamics 365 are transforming customer service with more personalized experiences, smarter teamwork, and improved efficiency.
Migrate and modernize your ERP with Microsoft Dynamics 365: Position yourself as the right partner to modernize or replace your customers' legacy on-premises ERP systems with a Copilot-powered ERP.
Business Central for SMB: Offer customers Microsoft Dynamics 365 Business Central, a comprehensive business management solution that connects finance, sales, service, and operations teams with a single application to boost productivity and improve decision-making.
AI-powered CRM: Help your customers enhance their customer experiences and close more deals with Microsoft Dynamics 365 Sales by making data AI-ready, which empowers them to create effective marketing content with Microsoft 365 Copilot and pass qualified leads on to sales teams. Use this campaign to show audiences how Copilot and AI can supercharge their CRM to increase productivity and efficiency, ultimately leading to better customer outcomes.
Modernize at scale with AI and Microsoft Power Platform: This campaign is designed to introduce the business value unlocked with Microsoft Power Platform, show how low-code solutions can accelerate development and drive productivity, and position your company as a valuable asset in the deployment of these solutions.

Digital & App Innovation (Azure)

Position yourself as the strategic AI partner of choice and empower your customers to grow their businesses by helping them gain agility and build new AI applications faster with intelligent experiences. Campaigns include:

Build and modernize AI apps: Help customers building new AI-infused applications and modernizing their application estate take advantage of the Azure AI application platform.
Accelerate developer productivity: Help customers reimagine the developer experience with the world’s most-adopted AI-powered platform. Use this campaign to show customers how you can use Microsoft and GitHub tools to help streamline workflows, collaborate better, and deliver intelligent apps faster.

Infrastructure (Azure)

Help customers tap into the cloud to expand capabilities and boost their return on investment by transforming their digital operations. Campaigns include:

Modernize VDI to Azure Virtual Desktop - SMB: Show SMB customers how they can meet the challenges of virtual work with Azure Virtual Desktop and gain flexibility, reliability, and cost-effectiveness.
Migrate VMware workloads to Azure: Help customers capitalize on the partnership between VMware and Microsoft so they can migrate VMware workloads to Azure in an efficient and cost-effective manner.
Migrate and secure Windows Server and SQL Server and Linux - ENT: Showcase the high return on investment (ROI) of using an adaptive cloud purpose-built for AI workloads, and help customers understand the value of migrating to Azure at their own pace.
Modernize SAP on the Microsoft Cloud: Reach SAP customers before the 2027 end-of-support deadline that is driving SAP S/4HANA migrations to show them the importance of having a plan to migrate to the cloud.
This campaign also underscores the value of moving to Microsoft Azure in the era of AI.
Migrate and secure Windows Server and SQL Server and Linux estate - SMB: Use this campaign to increase understanding of the value gained by migrating from an on-premises environment to a hybrid or cloud environment. Show small and medium-sized businesses that they can grow their business, save money, improve security, and more when they move their workloads from Windows Server, SQL Server, and Linux to Microsoft Azure.

Security

Demonstrate the power of modern security solutions and help customers understand the importance of robust cybersecurity in today’s landscape. Campaigns include:

Defend against cybersecurity threats: Increase your audience's understanding of the powerful, AI-driven Microsoft unified security platform, which integrates Microsoft Sentinel, Microsoft Defender XDR, Security Exposure Management, Security Copilot, and Microsoft Threat Intelligence.
Data security: Show customers how Microsoft Purview can help fortify data security in a world facing increasing cybersecurity threats.
Modernize security operations: Use this campaign to sell Microsoft Sentinel, an industry-leading cloud-native SIEM that can help your customers stay protected and scale their security operations.