Token Cache Service Inside D365 CE Plugin Base
Introduction This document explores a performance optimization technique for Dynamics 365 plugins by implementing centralized token caching using static variables in a plugin base class. Since plugin instances are recreated for every request, holding variables in memory is not feasible. However, static variables—such as a ConcurrentDictionary—defined in a base class can persist across executions, enabling efficient reuse of authentication tokens. This approach avoids tens or even hundreds of thousands of daily calls to identity management endpoints, which can overload those services. Additionally, plugin execution time can increase by 500 milliseconds to one second per request if authentication is performed repeatedly. If in-memory caching fails or is not viable, storing tokens in Dataverse and retrieving them as needed may be a better fallback than re-authenticating each time. 🔐 TokenService Implementation for Plugin Token Caching To optimize authentication performance in Dynamics 365 plugins, a custom TokenService is implemented and injected into the plugin base architecture. This service enables centralized token caching using a **static, read-only **ConcurrentDictionary, which persists across plugin executions. 🧱 Design Overview The TokenService exposes two methods: GetAccessToken(Guid key) – retrieves a cached token if it's still valid. SetAccessToken(Guid key, string token, DateTime expiry) – stores a new token with its expiry. The core of the service is a static dictionary: private static readonly ConcurrentDictionary<Guid, CachedAccessToken> TokenCache = new(); This dictionary is shared across plugin executions because it's defined in the base class. This is crucial since plugin instances are recreated per request and cannot hold instance-level state. 🧩 Integration into LocalPluginContext The TokenService is injected into the well-known LocalPluginContext alongside other services like IOrganizationService, ITracingService, and IPluginExecutionContext. This makes the token service available to all child plugins via the context object. public ITokenService TokenService { get; } public LocalPluginContext(IServiceProvider serviceProvider) { // Existing service setup... TokenService = new TokenService(TracingService); } 🔁 Token Retrieval Logic GetAccessToken checks if a token exists and whether it’s about to expire: public string GetAccessToken(Guid key) { if (TokenCache.TryGetValue(key, out var cachedToken)) { var timeRemaining = (cachedToken.Expiry - DateTime.UtcNow).TotalMinutes; if (timeRemaining > 2) { _tracingService.Trace($"Using cached token. Expires in {timeRemaining} minutes."); return cachedToken.Token; } } return null; } If the token is expired or missing, it returns null. It does not fetch a new token itself. 🔄 Token Refresh Responsibility The responsibility to fetch a new token lies with the child plugin, because: It has access to secure configuration values (e.g., client ID, secret, tenant). It knows the context of the external service being called. Once the child plugin fetches a new token, it calls: TokenService.SetAccessToken(key, token, expiry); This updates the shared cache for future executions. 
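🗄️ Optional Fallback: Persisting Tokens in Dataverse The fallback mentioned in the introduction (storing tokens in Dataverse when in-memory caching is not viable) is not shown in the pattern above, so here is a minimal sketch of what it could look like. It assumes a custom table named new_tokencache with new_key, new_token, and new_expiry columns; the table, the column names, and the DataverseTokenStore class are illustrative only, and any token stored this way should be protected (for example with column-level security).

using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Illustrative fallback store; the table and column names are assumptions.
public class DataverseTokenStore
{
    private readonly IOrganizationService _service;

    public DataverseTokenStore(IOrganizationService service)
    {
        _service = service;
    }

    public string GetToken(Guid key)
    {
        var query = new QueryExpression("new_tokencache")
        {
            ColumnSet = new ColumnSet("new_token", "new_expiry"),
            TopCount = 1
        };
        query.Criteria.AddCondition("new_key", ConditionOperator.Equal, key.ToString());

        var row = _service.RetrieveMultiple(query).Entities.FirstOrDefault();
        if (row == null) return null;

        // Apply the same two-minute safety window used by the in-memory cache.
        var expiry = row.GetAttributeValue<DateTime>("new_expiry");
        return (expiry - DateTime.UtcNow).TotalMinutes > 2
            ? row.GetAttributeValue<string>("new_token")
            : null;
    }

    public void SaveToken(Guid key, string token, DateTime expiry)
    {
        var lookup = new QueryExpression("new_tokencache")
        {
            ColumnSet = new ColumnSet("new_tokencacheid"),
            TopCount = 1
        };
        lookup.Criteria.AddCondition("new_key", ConditionOperator.Equal, key.ToString());
        var existing = _service.RetrieveMultiple(lookup).Entities.FirstOrDefault();

        var row = new Entity("new_tokencache")
        {
            ["new_key"] = key.ToString(),
            ["new_token"] = token,
            ["new_expiry"] = expiry
        };

        if (existing == null)
        {
            _service.Create(row);
        }
        else
        {
            row.Id = existing.Id;
            _service.Update(row);
        }
    }
}

A child plugin would consult a store like this only when TokenService.GetAccessToken returns null, keeping the static dictionary as the primary cache and Dataverse as the slower fallback.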
🧱 Classic Plugin Base Pattern (Preserved) public abstract class PluginBase : IPlugin { protected internal class LocalPluginContext { public IOrganizationService OrganizationService { get; } public ITracingService TracingService { get; } public IPluginExecutionContext PluginExecutionContext { get; } public LocalPluginContext(IServiceProvider serviceProvider) { PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory))) .CreateOrganizationService(PluginExecutionContext.UserId); } } public void Execute(IServiceProvider serviceProvider) { try { var localContext = new LocalPluginContext(serviceProvider); localContext.TracingService.Trace("Plugin execution started."); ExecutePlugin(localContext); localContext.TracingService.Trace("Plugin execution completed."); } catch (Exception ex) { var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); tracingService.Trace($"Unhandled exception: {ex}"); throw new InvalidPluginExecutionException("An error occurred in the plugin.", ex); } } protected abstract void ExecutePlugin(LocalPluginContext localContext); } 🧩 TokenService Implementation public interface ITokenService { string GetAccessToken(Guid key); void SetAccessToken(Guid key, string token, DateTime expiry); } public sealed class TokenService : ITokenService { private readonly ITracingService _tracingService; private static readonly ConcurrentDictionary<Guid, CachedAccessToken> TokenCache = new(); public TokenService(ITracingService tracingService) { _tracingService = tracingService; } public string GetAccessToken(Guid key) { if (TokenCache.TryGetValue(key, out var cachedToken)) { var timeRemaining = (cachedToken.Expiry - DateTime.UtcNow).TotalMinutes; if (timeRemaining > 2) { _tracingService.Trace($"Using cached token. Expires in {timeRemaining} minutes."); return cachedToken.Token; } } return null; } public void SetAccessToken(Guid key, string token, DateTime expiry) { TokenCache[key] = new CachedAccessToken(token, expiry); _tracingService.Trace($"Token stored for key {key} with expiry at {expiry}."); } private class CachedAccessToken { public string Token { get; } public DateTime Expiry { get; } public CachedAccessToken(string token, DateTime expiry) { Token = token; Expiry = expiry; } } } 🧩 Add TokenService to LocalPluginContext public ITokenService TokenService { get; } public LocalPluginContext(IServiceProvider serviceProvider) { // ... existing setup ... 
TokenService = new TokenService(TracingService); } 🧪 Full Child Plugin Example with Secure Config and Token Usage public class ExternalApiPlugin : PluginBase { private readonly SecureSettings _settings; public ExternalApiPlugin(string unsecureConfig, string secureConfig) { // Parse secure config into settings object _settings = JsonConvert.DeserializeObject<SecureSettings>(secureConfig); } protected override void ExecutePlugin(LocalPluginContext localContext) { localContext.TracingService.Trace("ExternalApiPlugin execution started."); // Get token string token = AccessTokenGenerator(_settings, localContext); // Use token to call external API CallExternalService(token, localContext.TracingService); localContext.TracingService.Trace("ExternalApiPlugin execution completed."); } private string AccessTokenGenerator(SecureSettings settings, LocalPluginContext localContext) { var token = localContext.TokenService.GetAccessToken(settings.TokenKeyGuid); if (!string.IsNullOrEmpty(token)) { var payload = DecodeJwtPayload(token); var expiryUnix = long.Parse(payload["exp"]); var expiryDate = DateTimeOffset.FromUnixTimeSeconds(expiryUnix).UtcDateTime; if ((expiryDate - DateTime.UtcNow).TotalMinutes > 2) { return token; } } // Fetch new token var newToken = FetchTokenFromOAuth(settings); var newPayload = DecodeJwtPayload(newToken); var newExpiry = DateTimeOffset.FromUnixTimeSeconds(long.Parse(newPayload["exp"])).UtcDateTime; localContext.TokenService.SetAccessToken(settings.TokenKeyGuid, newToken, newExpiry); return newToken; } private Dictionary<string, string> DecodeJwtPayload(string jwt) { var parts = jwt.Split('.'); // JWT segments are base64url encoded: restore standard Base64 characters before padding var payload = parts[1].Replace('-', '+').Replace('_', '/'); payload = payload.PadRight(payload.Length + (4 - payload.Length % 4) % 4, '='); var bytes = Convert.FromBase64String(payload); var json = Encoding.UTF8.GetString(bytes); return JsonConvert.DeserializeObject<Dictionary<string, string>>(json); } private string FetchTokenFromOAuth(SecureSettings settings) { // Simulated token fetch logic return "eyJhbGciOi..."; // JWT token } private void CallExternalService(string token, ITracingService tracingService) { tracingService.Trace("Calling external service with token..."); // Simulated API call } } 🧩 Wrapping It All Together By combining these patterns into the base class–child class structure, we get a plugin framework that is: ✅ Familiar and extensible ⚡️ Optimized for performance with token caching 🗣️ Final Thoughts These patterns weren’t invented in a vacuum—they were shaped by real customer needs and constraints. Whether you're modernizing legacy plugins or building new ones, I hope these ideas help you deliver more robust, scalable, and supportable solutions.
Building a Custom Continuous Export Pipeline for Azure Application Insights
1. Introduction MonitorLiftApp is a modular Python application designed to export telemetry data from Azure Application Insights to custom sinks such as Azure Data Lake Storage Gen2 (ADLS). This solution is ideal when Event Hub integration is not feasible, providing a flexible, maintainable, and scalable alternative for organizations needing to move telemetry data for analytics, compliance, or integration scenarios. 2. Problem Statement Exporting telemetry from Azure Application Insights is commonly achieved via Event Hub streaming. However, there are scenarios where Event Hub integration is not possible due to cost, complexity, or architectural constraints. In such cases, organizations need a reliable, incremental, and customizable way to extract telemetry data and move it to a destination of their choice, such as ADLS, SQL, or REST APIs. 3. Investigation Why Not Event Hub? Event Hub integration may not be available in all environments. Some organizations have security or cost concerns. Custom sinks (like ADLS) may be required for downstream analytics or compliance. Alternative Approach Use the Azure Monitor Query SDK to periodically export data. Build a modular, pluggable Python app for maintainability and extensibility. 4. Solution 4.1 Architecture Overview MonitorLiftApp is structured into four main components: Configuration: Centralizes all settings and credentials. Main Application: Orchestrates the export process. Query Execution: Runs KQL queries and serializes results. Sink Abstraction: Allows easy swapping of data targets (e.g., ADLS, SQL). 4.2 Configuration (app_config.py) All configuration is centralized in app_config.py, making it easy to adapt the app to different environments. CONFIG = { "APPINSIGHTS_APP_ID": "<your-app-id>", "APPINSIGHTS_WORKSPACE_ID": "<your-workspace-id>", "STORAGE_ACCOUNT_URL": "<your-adls-url>", "CONTAINER_NAME": "<your-container>", "Dependencies_KQL": "dependencies \n limit 10000", "Exceptions_KQL": "exceptions \n limit 10000", "Pages_KQL": "pageViews \n limit 10000", "Requests_KQL": "requests \n limit 10000", "Traces_KQL": "traces \n limit 10000", "START_STOP_MINUTES": 5, "TIMER_MINUTES": 5, "CLIENT_ID": "<your-client-id>", "CLIENT_SECRET": "<your-client-secret>", "TENANT_ID": "<your-tenant-id>" } Explanation: This configuration file contains all the necessary parameters for connecting to Azure resources, defining KQL queries, and scheduling the export job. By centralizing these settings, the app becomes easy to maintain and adapt. 4.3 Main Application (main.py) The main application is the entry point and can be run as a Python console app. It loads the configuration, sets up credentials, and runs the export job on a schedule. 
from app_config import CONFIG from azure.identity import ClientSecretCredential from monitorlift.query_runner import run_all_queries from monitorlift.target_repository import ADLSTargetRepository def build_env(): env = {} keys = [ "APPINSIGHTS_WORKSPACE_ID", "APPINSIGHTS_APP_ID", "STORAGE_ACCOUNT_URL", "CONTAINER_NAME" ] for k in keys: env[k] = CONFIG[k] for k, v in CONFIG.items(): if k.endswith("KQL"): env[k] = v return env class MonitorLiftApp: def __init__(self, client_id, client_secret, tenant_id): self.env = build_env() self.credential = ClientSecretCredential(tenant_id, client_id, client_secret) self.target_repo = ADLSTargetRepository( account_url=self.env["STORAGE_ACCOUNT_URL"], container_name=self.env["CONTAINER_NAME"], cred=self.credential ) def run(self): run_all_queries(self.target_repo, self.credential, self.env) if __name__ == "__main__": import time client_id = CONFIG["CLIENT_ID"] client_secret = CONFIG["CLIENT_SECRET"] tenant_id = CONFIG["TENANT_ID"] app = MonitorLiftApp(client_id, client_secret, tenant_id) timer_interval = app.env.get("TIMER_MINUTES", 5) print(f"Starting continuous export job. Interval: {timer_interval} minutes.") while True: print("\n[INFO] Running export job at", time.strftime('%Y-%m-%d %H:%M:%S')) try: app.run() print("[INFO] Export complete.") except Exception as e: print(f"[ERROR] Export failed: {e}") print(f"[INFO] Sleeping for {timer_interval} minutes...") time.sleep(timer_interval * 60) Explanation: The app can be run from any machine with Python and the required libraries installed—locally or in the cloud (VM, container, etc.). No compilation is needed; just run as a Python script. Optionally, you can package it as an executable using tools like PyInstaller. The main loop schedules the export job at regular intervals. 4.4 Query Execution (query_runner.py) This module orchestrates KQL queries, runs them in parallel, and serializes results. import datetime import json from concurrent.futures import ThreadPoolExecutor, as_completed from azure.monitor.query import LogsQueryClient def run_query_for_kql_var(kql_var, target_repo, credential, env): query_name = kql_var[:-4] print(f"[START] run_query_for_kql_var: {kql_var}") query_template = env[kql_var] app_id = env["APPINSIGHTS_APP_ID"] workspace_id = env["APPINSIGHTS_WORKSPACE_ID"] try: latest_ts = target_repo.get_latest_timestamp(query_name) print(f"Latest timestamp for {query_name}: {latest_ts}") except Exception as e: print(f"Error getting latest timestamp for {query_name}: {e}") return start = latest_ts time_window = env.get("START_STOP_MINUTES", 5) end = start + datetime.timedelta(minutes=time_window) query = f"app('{app_id}')." 
+ query_template logs_client = LogsQueryClient(credential) try: response = logs_client.query_workspace(workspace_id, query, timespan=(start, end)) if response.tables and len(response.tables[0].rows) > 0: print(f"Query for {query_name} returned {len(response.tables[0].rows)} rows.") table = response.tables[0] rows = [ [v.isoformat() if isinstance(v, datetime.datetime) else v for v in row] for row in table.rows ] result_json = json.dumps({"columns": table.columns, "rows": rows}) target_repo.save_results(query_name, result_json, start, end) print(f"Saved results for {query_name}") except Exception as e: print(f"Error running query or saving results for {query_name}: {e}") def run_all_queries(target_repo, credential, env): print("[INFO] run_all_queries triggered.") kql_vars = [k for k in env if k.endswith('KQL') and not k.startswith('APPSETTING_')] print(f"Number of KQL queries to run: {len(kql_vars)}. KQL vars: {kql_vars}") with ThreadPoolExecutor(max_workers=len(kql_vars)) as executor: futures = { executor.submit(run_query_for_kql_var, kql_var, target_repo, credential, env): kql_var for kql_var in kql_vars } for future in as_completed(futures): kql_var = futures[future] try: future.result() except Exception as exc: print(f"[ERROR] Exception in query {kql_var}: {exc}") Explanation: Queries are executed in parallel for efficiency. Results are serialized and saved to the configured sink. Incremental export is achieved by tracking the latest timestamp for each query. 4.5 Sink Abstraction (target_repository.py) This module abstracts the sink implementation, allowing you to swap out ADLS for SQL, REST API, or other targets. from abc import ABC, abstractmethod import datetime from azure.storage.blob import BlobServiceClient class TargetRepository(ABC): @abstractmethod def get_latest_timestamp(self, query_name): pass @abstractmethod def save_results(self, query_name, data, start, end): pass class ADLSTargetRepository(TargetRepository): def __init__(self, account_url, container_name, cred): self.account_url = account_url self.container_name = container_name self.credential = cred self.blob_service_client = BlobServiceClient(account_url=account_url, credential=cred) def get_latest_timestamp(self, query_name, fallback_hours=3): blob_client = self.blob_service_client.get_blob_client(self.container_name, f"{query_name}/latest_timestamp.txt") try: timestamp_str = blob_client.download_blob().readall().decode() return datetime.datetime.fromisoformat(timestamp_str) except Exception as e: if hasattr(e, 'error_code') and e.error_code == 'BlobNotFound': print(f"[INFO] No timestamp blob for {query_name}, starting from {fallback_hours} hours ago.") else: print(f"[WARNING] Could not get latest timestamp for {query_name}: {type(e).__name__}: {e}") return datetime.datetime.utcnow() - datetime.timedelta(hours=fallback_hours) def save_results(self, query_name, data, start, end): filename = f"{query_name}/{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json" blob_client = self.blob_service_client.get_blob_client(self.container_name, filename) try: blob_client.upload_blob(data, overwrite=True) print(f"[SUCCESS] Saved results to blob for {query_name} from {start} to {end}") except Exception as e: print(f"[ERROR] Failed to save results to blob for {query_name}: {type(e).__name__}: {e}") ts_blob_client = self.blob_service_client.get_blob_client(self.container_name, f"{query_name}/latest_timestamp.txt") try: ts_blob_client.upload_blob(end.isoformat(), overwrite=True) except Exception as e: print(f"[ERROR] Failed to update latest 
timestamp for {query_name}: {type(e).__name__}: {e}") Explanation: The sink abstraction allows you to easily switch between different storage backends. The ADLS implementation saves both the results and the latest timestamp for incremental exports. 5. End-to-End Setup Guide Prerequisites Python 3.8+ Azure SDKs: azure-identity, azure-monitor-query, azure-storage-blob Access to Azure Application Insights and ADLS Service principal credentials with appropriate permissions Steps 1. Clone or Download the Repository: Place all code files in a working directory. 2. Configure app_config.py: Fill in your Azure resource IDs, credentials, and KQL queries. 3. Install Dependencies: pip install azure-identity azure-monitor-query azure-storage-blob 4. Run the Application: Locally: python monitorliftapp/main.py Deploy to a VM, Azure Container Instance, or as a scheduled job. 5. (Optional) Package as Executable: Use PyInstaller or similar tools if you want a standalone executable. 6. Verify Data Movement: Check your ADLS container for exported JSON files. 6. Screenshots: App Insights Logs; Exported Data in ADLS (screenshots omitted). 7. Final Thoughts MonitorLiftApp provides a robust, modular, and extensible solution for exporting telemetry from Azure Monitor to custom sinks. Its design supports parallel query execution, pluggable storage backends, and incremental data movement, making it suitable for a wide range of enterprise scenarios.
Seamless blend of ILogger with ITracingService inside D365 CE Plugins
👋 Introduction This document introduces a practical approach for integrating Application Insights logging into Dynamics 365 plugins using the ILogger interface alongside the existing ITracingService. The key advantage of this method is its minimal impact on the existing codebase, allowing developers to adopt enhanced logging capabilities without the need to refactor or modify each plugin individually. By wrapping the tracing logic, this pattern promotes maintainability and simplifies the transition to modern observability practices with a single, centralized change. 🧱 The Classic Plugin Base Pattern Most customers already use a base class pattern for plugins. This pattern wraps the IPlugin interface and provides a LocalPluginContext object that encapsulates services like IOrganizationService, ITracingService, and IPluginExecutionContext. ✅ Benefits: Reduces boilerplate Encourages separation of concerns Makes plugins easier to test and maintain 🧩 Base Class with try-catch in Execute public abstract class PluginBase : IPlugin { protected internal class LocalPluginContext { public IOrganizationService OrganizationService { get; } public ITracingService TracingService { get; } public IPluginExecutionContext PluginExecutionContext { get; } public LocalPluginContext(IServiceProvider serviceProvider) { PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory))) .CreateOrganizationService(PluginExecutionContext.UserId); } } public void Execute(IServiceProvider serviceProvider) { try { var localContext = new LocalPluginContext(serviceProvider); localContext.TracingService.Trace("Plugin execution started."); ExecutePlugin(localContext); localContext.TracingService.Trace("Plugin execution completed."); } catch (Exception ex) { var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); tracingService.Trace($"Unhandled exception: {ex}"); throw new InvalidPluginExecutionException("An error occurred in the plugin.", ex); } } protected abstract void ExecutePlugin(LocalPluginContext localContext); } 📈 Seamless Application Insights Logging with a Tracing Adapter 💡 The Problem Many customers want to adopt Application Insights for plugin telemetry—but hesitate due to the need to refactor hundreds of TracingService.Trace(...) calls across their plugin codebase. 💡 The Innovation The following decorator pattern wraps ITracingService and forwards trace messages to both the original tracing service and an ILogger implementation (e.g., Application Insights). The only change required is in the base class constructor—no changes to existing trace calls. 
🧩 Tracing Adapter public class LoggerTracingServiceDecorator : ITracingService { private readonly ITracingService _tracingService; private readonly ILogger _logger; public LoggerTracingServiceDecorator(ITracingService tracingService, ILogger logger) { _tracingService = tracingService; _logger = logger; } public void Trace(string format, params object[] args) { _tracingService.Trace(format, args); _logger?.LogInformation(format, args); } } 🧩 Updated Base Class Constructor public LocalPluginContext(IServiceProvider serviceProvider) { PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); var standardTracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); var logger = (ILogger)serviceProvider.GetService(typeof(ILogger)); TracingService = new LoggerTracingServiceDecorator(standardTracingService, logger); OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory))) .CreateOrganizationService(PluginExecutionContext.UserId); } This enables all trace calls to be logged both to Plugin Trace Logs and Application Insights. AppInsights Traces allows for easier troubleshooting using Kusto Query Language and enables alerting based on custom trace messages. 🧩 Using ILogger inside a plugin without base class This approach is exactly the same as implemented in plugins with a base class. However, in this case, we make the wrapping assignment once at the beginning of each plugin. This is far better compared to modifying every line that uses tracingService.Trace. public class ExistingPlugin : IPlugin { public void Execute(IServiceProvider serviceProvider) { var originalTracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); var logger = (ILogger)serviceProvider.GetService(typeof(ILogger)); // Wrap the original tracing service with the decorator var tracingService = new LoggerTracingServiceDecorator(originalTracingService, logger); tracingService.Trace("Plugin execution started."); try { // Existing plugin logic tracingService.Trace("Processing business logic..."); } catch (Exception ex) { tracingService.Trace($"Exception occurred: {ex}"); throw; } tracingService.Trace("Plugin execution completed."); } } 📣 Final Thoughts These patterns weren’t invented in a vacuum—they were shaped by real customer needs and constraints. Whether you're modernizing legacy plugins or building new ones, this approach helps you deliver supportable solutions with minimal disruption to your existing codebase.
🚀 Export D365 CE Dataverse Org Data to Cosmos DB via the Office365 Management Activity API
📘 Preface This post demonstrates one method to export Dynamics 365 Customer Engagement (CE) Dataverse organization data using the Office 365 Management Activity API and Azure Functions. It is feasible for customers to build a custom lake-house architecture with this feed, enabling advanced analytics, archiving, or ML/AI scenarios. https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference 🧭 When to Use This Custom Integration While Microsoft offers powerful native integrations like Dataverse Synapse Link and Microsoft Fabric, this custom solution is observed implemented and relevant in the following scenarios: Third-party observability and security tools already use this approach Solutions such as Splunk and other enterprise-grade platforms commonly implement integrations based on the Office 365 Management Activity API to ingest tenant-wide audit data. This makes it easier for customers to align with existing observability pipelines or security frameworks. Customers opt out of Synapse Link or Fabric Whether due to architectural preferences, licensing constraints, or specific compliance requirements, some customers choose not to adopt Microsoft’s native integrations. The Office Management API offers a viable alternative for building custom data export and monitoring solutions tailored to their needs. 🎯 Why Use the Office 365 Management Activity API? Tenant-wide Data Capture: Captures audit logs and activity data across all Dataverse orgs in a tenant. Integration Flexibility: Enables export to Cosmos DB, cold storage, or other platforms for analytics, compliance, or ML/AI. Third-party Compatibility: Many enterprise tools use similar mechanisms to ingest and archive activity data. 🏗️ Architecture Overview Azure Function App (.NET Isolated): Built as webhook, processes notifications, fetches audit content, and stores filtered events in Cosmos DB. Cosmos DB: Stores audit events for further analysis or archiving. Application Insights: Captures logs and diagnostics for troubleshooting. 🛠️ Step-by-Step Implementation https://learn.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis#build-your-app 1. Prerequisites Azure subscription Dynamics 365 CE environment (Dataverse) Azure Cosmos DB account (SQL API) Office 365 tenant admin rights Enable Auditing in Dataverse org 2. Register an Azure AD App Go to Azure Portal > Azure Active Directory > App registrations > New registration Note: Application (client) ID Directory (tenant) ID Create a client secret Grant API permissions: ActivityFeed.Read ActivityFeed.ReadDlp ServiceHealth.Read Grant admin consent 3. Set Up Cosmos DB Create a Cosmos DB account (SQL API) Create: Database: officewebhook Container: dynamicsevents Partition key: /tenantId Note endpoint URI and primary key 4. Create the Azure Function App Use Visual Studio or VS Code Create a new Azure Functions project (.NET 8 Isolated Worker) Add NuGet packages: Microsoft.Azure.Functions.Worker Microsoft.Azure.Cosmos Newtonsoft.Json Function Logic: Webhook validation Notification processing Audit content fetching Event filtering Storage in Cosmos DB 5. 
Configure Environment Variables { "OfficeApiTenantId": "<your-tenant-id>", "OfficeApiClientId": "<your-client-id>", "OfficeApiClientSecret": "<your-client-secret>", "CrmOrganizationUniqueName": "<your-org-name>", "CosmosDbEndpoint": "<your-cosmos-endpoint>", "CosmosDbKey": "<your-cosmos-key>", "CosmosDbDatabaseId": "officewebhook", "CosmosDbContainerId": "dynamicsevents", "EntityOperationsFilter": { "incident": ["create", "update"], "account": ["create"] } } 6. Deploy the Function App Build and publish using Azure Functions Core Tools or Visual Studio Restart the Function App from Azure Portal Monitor logs via Application Insights 🔔 How to Subscribe to the Office 365 Management Activity API for Audit Notifications To receive audit notifications, you must first subscribe to the Office 365 Management Activity API. This is a two-step process: https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#start-a-subscription 1. Fetch an OAuth2 Token Authenticate using your Azure AD app credentials to get a bearer token: https://learn.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis # Define your Azure AD app credentials $tenantId = "<your-tenant-id>" $clientId = "<your-client-id>" $clientSecret = "<your-client-secret>" # Prepare the request body for token fetch $body = @{ grant_type = "client_credentials" client_id = $clientId client_secret = $clientSecret scope = "https://manage.office.com/.default" } # Fetch the OAuth2 token $tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body $token = $tokenResponse.access_token 2. Subscribe to the Content Type Use the token to subscribe to the desired content type (e.g., Audit.General): https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#working-with-the-office-365-management-activity-api $contentType = "Audit.General" $headers = @{ Authorization = "Bearer $token" "Content-Type" = "application/json" } $uri = "https://manage.office.com/api/v1.0/$tenantId/activity/feed/subscriptions/start?contentType=$contentType" $response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers $response ⚙️ How the Azure Function Works 🔸 Trigger The Azure Function is triggered by notifications from the Office 365 Management Activity API. These notifications include audit events across your entire Azure tenant—not just Dynamics 365. 🔸 Filtering Logic Each notification is evaluated against your business rules: Organization match Entity type (e.g., incident, account) Operation type (e.g., create, update) These filters are defined in the EntityOperationsFilter environment variable: { "incident": ["create", "update"], "account": ["create"] } 🔸 Processing If the event matches your filters, the function fetches the full audit data and stores it in Cosmos DB. If not, the event is ignored. 🔍 Code Explanation: The Run Method 1. Webhook Validation https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#webhook-validation string validationToken = query["validationToken"]; if (!string.IsNullOrEmpty(validationToken)) { await response.WriteStringAsync(validationToken); response.StatusCode = HttpStatusCode.OK; return response; } 2. 
Notification Handling https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#receiving-notifications var notifications = JsonConvert.DeserializeObject<dynamic[]>(requestBody); foreach (var notification in notifications) { if (notification.contentType == "Audit.General" && notification.contentUri != null) { // Process each notification } } 3. Bearer Token Fetch string bearerToken = await GetBearerTokenAsync(log); if (string.IsNullOrEmpty(bearerToken)) continue; 4. Fetch Audit Content https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#retrieve-content var requestMsg = new HttpRequestMessage(HttpMethod.Get, contentUri); requestMsg.Headers.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken); var result = await httpClient.SendAsync(requestMsg); if (!result.IsSuccessStatusCode) continue; var auditContentJson = await result.Content.ReadAsStringAsync(); 5. Deserialize and Filter Audit Records https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#dynamics-365-schema var auditRecords = JsonConvert.DeserializeObject<dynamic[]>(auditContentJson); foreach (var eventData in auditRecords) { string orgName = eventData.CrmOrganizationUniqueName ?? ""; string workload = eventData.Workload ?? ""; string entityName = eventData.EntityName ?? ""; string operation = eventData.Message ?? ""; if (workload != "Dynamics 365" && workload != "CRM" && workload != "Power Platform") continue; if (!entityOpsFilter.ContainsKey(entityName)) continue; if (!entityOpsFilter[entityName].Contains(operation)) continue; // Store in Cosmos DB } 6. Store in Cosmos DB var cosmosDoc = new { id = Guid.NewGuid().ToString(), tenantId = notification.tenantId, raw = eventData }; var partitionKey = (string)notification.tenantId; var resp = await cosmosContainer.CreateItemAsync(cosmosDoc, new PartitionKey(partitionKey)); 7. Logging and Error Handling https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#errors log.LogInformation($"Stored notification in Cosmos DB for contentUri: {notification.contentUri}, DocumentId: {cosmosDoc.id}"); catch (Exception dbEx) { log.LogError($"Error storing notification in Cosmos DB: {dbEx.Message}"); } 🧠 Conclusion This solution provides a robust, extensible pattern for exporting Dynamics 365 CE Dataverse org data to Cosmos DB using the Office 365 Management Activity API. Solution architects can use this as a reference for building or evaluating similar integrations, especially when working with third-party archiving or analytics solutions.
🚀 Scaling Dynamics 365 CRM Integrations in Azure: The Right Way to Use the SDK ServiceClient
This blog explores common pitfalls and presents a scalable pattern using the .Clone() method to ensure thread safety, avoid redundant authentication, and prevent SNAT port exhaustion. 🗺️ Connection Factory with Optimized Configuration The first step to building a scalable integration is to configure your ServiceClient properly. Here's how to set up a connection factory that includes all the necessary performance optimizations: public static class CrmClientFactory { private static readonly ServiceClient _baseClient; static CrmClientFactory() { ThreadPool.SetMinThreads(100, 100); // Faster thread ramp-up ServicePointManager.DefaultConnectionLimit = 65000; // Avoid connection bottlenecks ServicePointManager.Expect100Continue = false; // Reduce HTTP latency ServicePointManager.UseNagleAlgorithm = false; // Improve responsiveness _baseClient = new ServiceClient(connectionString); _baseClient.EnableAffinityCookie = false; // Distribute load across Dataverse web servers } public static ServiceClient GetClient() => _baseClient.Clone(); } ❌ Anti-Pattern: One Static Client for All Operations A common anti-pattern is to create a single static instance of ServiceClient and reuse it across all operations: public static class CrmClientFactory { private static readonly ServiceClient _client = new ServiceClient(connectionString); public static ServiceClient GetClient() => _client; } This struggles under load due to thread contention, throttling, and unpredictable behavior. ⚠️ Misleading Fix: New Client Per Request To avoid thread contention, some developers create a new ServiceClient per request. However, the code below does not truly create a separate connection unless the RequireNewInstance=True connection string parameter or the useUniqueInstance:true constructor parameter is used. These details are often missed, so the same connection ends up shared across threads, with long lock waits compounding overall slowness. public async Task Run(HttpRequest req) { var client = new ServiceClient(connectionString); // Use client here } Even with those flags, there is still a risk of authentication failures and SNAT exhaustion in a high-throughput integration scenario, because OAuth authentication is repeated every time a ServiceClient instance is created through its constructor. ✅ Best Practice: Clone Once, Reuse Per Request The best practice is to create a single authenticated ServiceClient and use its .Clone() method to generate lightweight, thread-safe copies for each request: public static class CrmClientFactory { private static readonly ServiceClient _baseClient = new ServiceClient(connectionString); public static ServiceClient GetClient() => _baseClient.Clone(); } Then, in your Azure Function or App Service operation: ❗ Avoid calling the factory again inside helper methods. Clone once and pass the client down the call stack. public async Task HandleRequest() { var client = CrmClientFactory.GetClient(); // Clone once per request await DoSomething1(client); await DoSomething2(client); } public async Task DoSomething1(ServiceClient client) { // Use the passed-down client as is; avoid cloning another client here (illustrative create call) await client.CreateAsync(new Entity("account")); }
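🧩 Sketch: Registering the Factory with Dependency Injection When the integration runs as an Azure Functions app, the same pattern can be wired through dependency injection so the base client authenticates once per process and every invocation receives a clone. The sketch below is illustrative rather than part of the original guidance: it assumes the .NET isolated worker model, the Microsoft.PowerPlatform.Dataverse.Client and Microsoft.Azure.Functions.Worker packages, a CRM_CONNECTION_STRING app setting, and an ICrmClientFactory abstraction, all of which are naming assumptions.

using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.PowerPlatform.Dataverse.Client;

public interface ICrmClientFactory
{
    ServiceClient GetClient();
}

public sealed class DataverseClientFactory : ICrmClientFactory
{
    private readonly ServiceClient _baseClient;

    public DataverseClientFactory(string connectionString)
    {
        // Authenticate once per process; clones reuse the authenticated connection context.
        _baseClient = new ServiceClient(connectionString) { EnableAffinityCookie = false };
    }

    public ServiceClient GetClient() => _baseClient.Clone();
}

public static class Program
{
    public static void Main()
    {
        var host = new HostBuilder()
            .ConfigureFunctionsWorkerDefaults()
            .ConfigureServices(services =>
            {
                // One factory (and one base client) for the whole worker process.
                services.AddSingleton<ICrmClientFactory>(_ =>
                    new DataverseClientFactory(
                        Environment.GetEnvironmentVariable("CRM_CONNECTION_STRING")));
            })
            .Build();

        host.Run();
    }
}

A function class then takes ICrmClientFactory in its constructor, calls GetClient() once per invocation, and passes that clone down the call stack exactly as shown above.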
🧵 Parallel Processing with Batching When working with large datasets, combining parallelism with batching using ExecuteMultiple can significantly improve throughput—if done correctly. 🔄 Common Mistake: Dynamic Batching Inside Parallel Loops Many implementations dynamically batch records inside Parallel.ForEach, assuming consistent batch sizes. But in practice, this leads to: Inconsistent batch sizes (1 to 100+) Unpredictable performance Difficult-to-analyze telemetry ✅ Fix: Chunk Before You Batch public static List<List<Entity>> ChunkRecords(List<Entity> records, int chunkSize) { return records .Select((record, index) => new { record, index }) .GroupBy(x => x.index / chunkSize) .Select(g => g.Select(x => x.record).ToList()) .ToList(); } public static void ProcessBatches(List<Entity> records, ServiceClient serviceClient, int batchSize = 100, int maxParallelism = 5) { var batches = ChunkRecords(records, batchSize); Parallel.ForEach(batches, new ParallelOptions { MaxDegreeOfParallelism = maxParallelism }, batch => { using var service = serviceClient.Clone(); // Clone once per thread var executeMultiple = new ExecuteMultipleRequest { Requests = new OrganizationRequestCollection(), Settings = new ExecuteMultipleSettings { ContinueOnError = true, ReturnResponses = false } }; foreach (var record in batch) { executeMultiple.Requests.Add(new CreateRequest { Target = record }); } service.Execute(executeMultiple); }); } 🚫 Avoiding Throttling: Plan, Don’t Just Retry While it’s possible to implement retry logic for HTTP 429 responses using the Retry-After header (a minimal sketch appears at the end of this article), the best approach is to avoid throttling altogether. ✅ Best Practices Control the degree of parallelism (DOP) and batch size: Keep them conservative and telemetry-driven. Use alternate app registrations: Distribute load across identities but do not overload the Dataverse org. Avoid triggering sync plugins or real-time workflows: These amplify load. Address long-running queries: Optimize these operations, with help from Microsoft support if needed, before scaling. Relax time constraints: There’s no need to finish a job in 1 hour if it can be done safely in 3. 🌐 When to Consider Horizontal Scaling Even with all the right optimizations, your integration may still hit limits under the HTTP stack—such as: WCF binding timeouts SNAT port exhaustion Slowness not explained by Dataverse telemetry In these cases, horizontal scaling becomes essential. App Services: Easily scale out using auto scale rules. Function Apps (service model): Scale well with HTTP or service bus triggers. Scheduled Functions: Require deduplication logic to avoid duplicate processing. On-premises VMs: When D365 SDK-based integrations are hosted on VM infrastructure, scale horizontally by adding servers. 🧠 Final Thoughts Scaling CRM integrations in Azure is about resilience, observability, and control. Follow these patterns: Clone once per thread Pre-chunk batches Tune with telemetry evidence Avoid overload when you can Scale horizontally when needed—but wisely Build integrations that are fast, reliable, and future-proof.
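🧯 Sketch: Honoring Retry-After When Throttling Does Happen As noted in the throttling section above, retries are the fallback rather than the plan, but when a service protection limit is hit through the SDK the fault carries a Retry-After hint that should be honored instead of retrying immediately. The helper below is a minimal illustrative sketch (the ThrottleAwareExecutor and ExecuteWithRetry names are assumptions); the error codes and the Retry-After detail key follow Microsoft's documented Dataverse service protection limits.

using System;
using System.ServiceModel;
using System.Threading;
using Microsoft.Xrm.Sdk;
using Microsoft.PowerPlatform.Dataverse.Client;

public static class ThrottleAwareExecutor
{
    // Documented service protection fault codes: number of requests,
    // execution time, and concurrent requests.
    private static readonly int[] ServiceProtectionErrorCodes =
        { -2147015902, -2147015903, -2147015898 };

    public static OrganizationResponse ExecuteWithRetry(
        ServiceClient service, OrganizationRequest request, int maxRetries = 3)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                return service.Execute(request);
            }
            catch (FaultException<OrganizationServiceFault> ex)
                when (Array.IndexOf(ServiceProtectionErrorCodes, ex.Detail.ErrorCode) >= 0
                      && attempt < maxRetries)
            {
                // The fault includes the server-suggested pause as a TimeSpan.
                var retryAfter = ex.Detail.ErrorDetails.ContainsKey("Retry-After")
                    ? (TimeSpan)ex.Detail.ErrorDetails["Retry-After"]
                    : TimeSpan.FromSeconds(30);

                Thread.Sleep(retryAfter);
            }
        }
    }
}

Keep maxRetries small; if the same job trips service protection repeatedly, reduce the batch size or degree of parallelism instead of retrying harder.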
Transform business process with agentic business applications (Asia)
October 1-2, 2025 | 7:30 AM – 10:30 AM ASIA (IST) Overview This bootcamp is designed to equip you with the AI skills and clarity needed to harness the power of Copilot Studio and AI Agents in Dynamics 365. Participants will learn what AI agents are, why they matter in Dynamics 365, and how to design and build agents that address customer needs today while preparing for the AI-native ERP and CRM future. Building from the fundamentals of Copilot Studio and its integration across Dynamics 365 applications, we’ll explore how first-party agents are built, why Microsoft created them, and where their current limitations open opportunities for partner-led innovation. We’ll then expand into third-party agent design and extensibility, showing how partners can create differentiated solutions that deliver unique value. Finally, we will provide a forward-looking perspective on Microsoft’s strategy with Model Context Protocol (MCP), Agent-to-Agent (A2A) orchestration, and AI-native business applications - inspiring partners to create industry-specific agents that solve real customer pain points and showcase their expertise. Join this virtual event to accelerate your technical readiness journey on AI agents in Dynamics 365. Register today and mark your calendars to gain valuable insights from our Microsoft SMEs. Don’t miss this opportunity to learn about the latest developments and elevate your partnership with Microsoft. Event prerequisites Participants should have some familiarity and work experience with the associated solutions. Additionally, we suggest having knowledge of the relevant role-based certification content (although passing the exam is not mandatory). You can find free self-paced learning content and technical documentation related to the workshop topics at Microsoft Learn. Earn a digital badge Attendees who participate in the live sessions of this workshop will earn a digital badge. These badges, which serve as a testament to your engagement and learning, can be conveniently accessed and shared through the Credly digital platform. Please note that accessing on-demand content does not meet the criteria for earning a badge. REGISTER TODAY!
Transform business process with agentic business applications (Americas)
September 30 - October 1, 2025 | 7:00 AM – 10:00 AM AMER (PDT) Overview This bootcamp is designed to equip you with the AI skills and clarity needed to harness the power of Copilot Studio and AI Agents in Dynamics 365. Participants will learn what AI agents are, why they matter in Dynamics 365, and how to design and build agents that address customer needs today while preparing for the AI-native ERP and CRM future. Building from the fundamentals of Copilot Studio and its integration across Dynamics 365 applications, we’ll explore how first-party agents are built, why Microsoft created them, and where their current limitations open opportunities for partner-led innovation. We’ll then expand into third-party agent design and extensibility, showing how partners can create differentiated solutions that deliver unique value. Finally, we will provide a forward-looking perspective on Microsoft’s strategy with Model Context Protocol (MCP), Agent-to-Agent (A2A) orchestration, and AI-native business applications - inspiring partners to create industry-specific agents that solve real customer pain points and showcase their expertise. Join this virtual event to accelerate your technical readiness journey on AI agents in Dynamics 365. Register today and mark your calendars to gain valuable insights from our Microsoft SMEs. Don’t miss this opportunity to learn about the latest developments and elevate your partnership with Microsoft. Event prerequisites Participants should have some familiarity and work experience with the associated solutions. Additionally, we suggest having knowledge of the relevant role-based certification content (although passing the exam is not mandatory). You can find free self-paced learning content and technical documentation related to the workshop topics at Microsoft Learn. Earn a digital badge Attendees who participate in the live sessions of this workshop will earn a digital badge. These badges, which serve as a testament to your engagement and learning, can be conveniently accessed and shared through the Credly digital platform. Please note that accessing on-demand content does not meet the criteria for earning a badge. REGISTER TODAY!
Drive demand for your offers with solution area campaigns in a box
Take your marketing campaigns further with campaigns in a box (CiaBs), collections of partner-ready, high-quality marketing assets designed to deepen customer engagement, simplify your marketing efforts, and grow revenue. Microsoft offers both new and refreshed campaigns for the following solution areas: Data & AI (Azure), Modern Work, Business Applications, Digital & App Innovation (Azure), Infrastructure (Azure), and Security. Check out the latest CiaBs below and get started today by visiting the Partner Marketing Center, where you’ll find resources such as step-by-step execution guides, customizable nurture tactics, and assets including presentations, e-books, infographics, and more. AI transformation Generate interest in AI adoption among your customers. As AI technology grabs headlines and captures imaginations, use this campaign to illustrate your audience’s unique opportunity to harness the power of AI to deliver value faster. Learn more about the campaign: AI Transformation (formerly Era of AI): Show audiences how to take advantage of the potential of AI to drive business value and showcase the usefulness of Microsoft AI solutions delivered and tailored by your organization. Data & AI (Azure) Our Data & AI campaigns demonstrate how your customers can win customers with AI-enabled differentiation. Show how they can transform their businesses with generative AI, a unified data estate, and solutions like Microsoft Fabric, Microsoft Power BI, and Azure Databricks. Campaigns include: Unify your intelligent data - Banking: Help your banking customers understand how you can help them break down data silos, meet compliance demands, and deliver on rising customer expectations. Innovate with the Azure AI platform: Help your customers understand the potential of generative AI solutions to differentiate themselves in the market—and inspire them to build GenAI solutions with the Azure AI platform. Unify your intelligent data and analytics platform - ENT: Show enterprise audiences how unifying data and analytics on an open foundation can help streamline data transformation and business intelligence. Unify your intelligent data and analytics platform - SMB: Create awareness and urgency for SMBs to adopt Microsoft Fabric as the AI-powered, unified data platform that will suit their analytics needs. Modern Work Our Modern Work campaigns help current and potential customers understand how they can effectively transform their businesses with AI capabilities. Campaigns include: Connect and empower your frontline workers: Empower your customers' frontline workers with smart, AI-enhanced workflows with solutions based on Microsoft Teams and Microsoft 365 Copilot. Use this campaign to show your customers how they can make their frontline workers feel more connected, leading to improved productivity and efficiency. Microsoft 365 Copilot SMB: Increase your audience's understanding of the larger potential of Microsoft 365 Copilot and how AI capabilities can accelerate growth and transform operations. Smart workplace with Teams: Use this campaign to show your customers how to use AI to unlock smarter communication and collaboration with Microsoft Teams and Microsoft 365 Copilot. This campaign demonstrates to customers how you can help them seamlessly integrate meetings, calls, chat, and collaboration to break down silos, gain deeper insights, and focus on the work that matters. 
Cloud endpoints: Help customers bring virtualized applications to the cloud by providing secure AI-powered productivity and development on any device with Microsoft Intune Suite and Windows in the cloud solutions. Business Applications Nurture interest with audiences ready to modernize and transform their business operations with these BizApps go-to-market resources. Campaigns include: AI-powered customer service: Highlight how AI-powered solutions like Microsoft Dynamics 365 are transforming customer service with more personalized experiences, smarter teamwork, and improved efficiency. Migrate and modernize your ERP with Microsoft Dynamics 365: Position yourself as the right partner to modernize or replace your customers' legacy on-premises ERP systems with a Copilot-powered ERP. Business Central for SMB: Offer customers Microsoft Dynamics 365, a comprehensive business management solution that connects finance, sales, service, and operations teams with a single application to boost productivity and improve decision-making. AI-powered CRM: Help your customers enhance their customer experiences and close more deals with Microsoft 365 Dynamics Sales by making data AI-ready, which empowers them to create effective marketing content with Microsoft 365 Copilot and pass qualified leads on to sales teams. Use this campaign to show audiences how Copilot and AI can supercharge their CRM to increase productivity and efficiency, ultimately leading to better customer outcomes. Modernize at scale with AI and Microsoft Power Platform: This campaign is designed to introduce the business values unlocked with Microsoft Power Platform, show how low-code solutions can accelerate development and drive productivity, and position your company as a valuable asset in the deployment of these solutions. Digital & App Innovation (Azure) Position yourself as the strategic AI partner of choice and empower your customers to grow their businesses by helping them gain agility and build new AI applications faster with intelligent experiences. Campaigns include: Build and modernize AI apps: Help customers building new AI-infused applications and modernizing their application estate take advantage of the Azure AI application platform. Accelerate developer productivity: Help customers reimagine the developer experience with the world’s most-adopted AI-powered platform. Use this campaign to show customers how you can use Microsoft and GitHub tools to help streamline workflows, collaborate better, and deliver intelligent apps faster. Infrastructure (Azure) Help customers tap into the cloud to expand capabilities and boost their return on investment by transforming their digital operations. Campaigns include: Modernize VDI to Azure Virtual Desktop - SMB: Show SMB customers how they can meet the challenges of virtual work with Azure Virtual Desktop and gain flexibility, reliability, and cost-effectiveness. Migrate VMware workloads to Azure: Help customers capitalize on the partnership between VMware and Microsoft so they can migrate VMware workloads to Azure in an efficient and cost-effective manner. Migrate and secure Windows Server and SQL Server and Linux - ENT: Showcase the high return on investment (ROI) of using an adaptive cloud purpose-built for AI workloads, and help customers understand the value of migrating to Azure at their own pace. Modernize SAP on the Microsoft Cloud: Reach SAP customers before the 2027 end-of-support deadline for SAP S/4HANA to show them the importance of having a plan to migrate to the cloud. 
This campaign also underscores the value of moving to Microsoft Azure in the era of AI. Migrate and secure Windows Server and SQL Server and Linux estate - SMB: Use this campaign to increase understanding of the value gained by migrating from an on-premises environment to a hybrid or cloud environment. Show small and medium-sized businesses that they can grow their business, save money, improve security, and more when they move their workload from Windows Server, SQL Server, and Linux to Microsoft Azure. Security Demonstrate the power of modern security solutions and help customers understand the importance of robust cybersecurity in today’s landscape. Campaigns include: Defend against cybersecurity threats: Increase your audience's understanding of the powerful, AI-driven Microsoft unified security platform, which integrates Microsoft Sentinel, Microsoft Defender XDR, Security Exposure Management, Security Copilot, and Microsoft Threat Intelligence. Data security: Show customers how Microsoft Purview can help fortify data security in a world facing increasing cybersecurity threats. Modernize security operations: Use this campaign to sell Microsoft Sentinel, an industry-leading cloud-native SIEM that can help your customers stay protected and scale their security operations.
Drive customer demand and accelerate growth with Microsoft skilling opportunities
Developing cutting-edge skills isn’t just a priority—it’s a competitive necessity, as data from 2024 skilling trends revealed. According to TalentLMS, 71% of employees now seek more frequent skill updates, and 80% are calling on their organizations to prioritize upskilling and reskilling investments. Microsoft partners can turn this willingness to learn into an opportunity to drive innovation and growth: skilling trends in 2025 show growing demand for new, industry-specific skilling, particularly in AI and machine learning, cybersecurity, and data literacy and analytics. Studies also found a thirst for self-guided learning, bite-sized certifications, and career resilience. Microsoft is here to help meet that demand. We're dedicated to providing our partners with tools and opportunities to help your organization thrive in 2025 and beyond. We’ve developed skilling events around the world, such as bootcamps, workshops, and self-guided learning opportunities to strengthen sales and technical skills in the topics most critical to your organization. Explore upcoming online and in-person events in your region to discover how we can help you learn and grow. Microsoft AI Partner Training Days Join us in person to hear about the latest trends and technology in the era of Al, with guidance from Microsoft executives and industry leaders. Hear about lessons learned from real-world Al deployments, discuss sales best practices followed by Microsoft teams, get hands-on experience with the Microsoft Al platform, and discover go-to-market tools designed to build and expand your Al practice. Register today and share with your global teams. We’re excited to host you at one of our training days around the world: Americas January 28, 2025: New York, NY Asia February 25, 2025: Mumbai March 19, 2025: Seoul April 3, 2025: Tokyo Europe, Middle East, Africa January 22, 2025: Johannesburg March 4, 2025: London Partner Project Ready workshops Your organization’s developers, solution architects, and data scientists can register for the Microsoft Azure, Business Applications, Modern Work, or Microsoft Security Partner Project Ready workshops. These multiday skilling events help deepen the expertise needed to assess customer needs and seamlessly implement Azure solutions. Plus, participating in the live sessions of certain events gives you the opportunity to earn a digital badge that serves as a testament to your engagement and training. We are continually adding new events across the globe, so be sure to check the Microsoft Global Partner Skilling Hub for the latest events and registration links. 
In the meantime, you can register today for the following events to begin building the skills that will help you become project-ready: Azure Build and modernize AI Apps workshop February 17-20, 2025: IST, GMT, PST Learn more and register Power your AI transformation with Copilot and the Copilot stack January 22-24, 2025: IST, GMT Learn more and register Microsoft Fabric workshops Track 1: Fabric administration and governance Track 2: Data warehousing with Microsoft Fabric January 28-29, 2025: IST, GMT, PST Learn more and register Microsoft Fabric Workshops Track 1: Modern data engineering with Microsoft Fabric February 11-13, 2025: IST, GMT, PST Track 2: Data insights with AI in Microsoft Fabric February 11-12, 2025: IST, GMT, PST Learn more and register Microsoft Fabric workshops Coming soon: Track 1: Real-time intelligence with Microsoft Fabric Track 2: Data warehousing with Microsoft Fabric March 11-12, 2025: IST, GMT, PST Azure Databricks: migration and integration February 18-20, 2025: IST February 19-21, 2025: GMT, PST Learn more and register Build and extend your own agents using pro-code capabilities January 28-30, 2025: IST January 29-31, 2025: GMT, PST Learn more and register Fortify your data with Microsoft Purview February 4-6, 2025: IST, GMT, PST Learn more and register Build, extend, or buy? Driving customer conversations with Copilot and Copilot stack February 11-13, 2025: IST, GMT, PST Learn more and register Business Applications Power your AI transformation with Copilot and the Copilot stack January 22-24, 2025: IST, GMT Learn more and register Full contact center solution with Power Platform & Dynamics 365 January 28, 2025: PST January 30, 2025: PST February 4, 2025: PST February 6, 2025: PST February 11, 2025: PST February 13, 2025: PST February 18, 2025: PST February 20, 2025: PST Learn more and register Build and extend your own agents using pro-code capabilities January 28-30, 2025: IST January 29-31, 2025: GMT, PST Learn more and register Build, extend, or buy? Driving customer conversations with Copilot and Copilot stack February 11-13, 2025: IST, GMT, PST Learn more and register Innovate with Microsoft 365 Copilot and build your own agents February 18-20, 2025: IST, GMT, PST Learn more and register Modern Work Power your AI transformation with Copilot and the Copilot stack January 22-24, 2025: IST, GMT Learn more and register Build and extend your own agents using pro-code capabilities January 28-30, 2025: IST January 29-31, 2025: GMT, PST Learn more and register Build, extend, or buy? Driving customer conversations with Copilot and Copilot stack February 11-13, 2025: IST, GMT, PST Learn more and register Innovate with Microsoft 365 Copilot and build your own agents February 18-20, 2025: IST, GMT, PST Learn more and register Security Certification week for Microsoft AI Cloud Partner Program – Security and GitHub January 27-31, 2025: IST, GMT, PST Learn more and register Power your AI transformation with Copilot and the Copilot stack January 22-24, 2025: IST, GMT Learn more and register Fortify your data with Microsoft Purview February 4-6, 2025: IST, GMT, PST Learn more and register Build, extend, or buy? 
Driving customer conversations with Copilot and Copilot stack February 11-13, 2025: IST, GMT, PST Learn more and register Migrating your SIEM solution to Microsoft Sentinel February 18-20, 2025: IST, GMT, PST Learn more and register Innovate with Microsoft 365 Copilot and build your own agents February 18-20, 2025: IST, GMT, PST Learn more and register Develop new business with Sales and Pre-Sales Partner Skilling bootcamps To grow your business, you need to understand the critical points in the customer journey, identify business opportunities, and articulate the differentiated value of your AI and cloud solutions. We can help your sales team build their knowledge and techniques in Microsoft solution areas with training based on our industry expertise, market research, and decades of experience in the sales journey. Our on-demand, hybrid, and interactive Sales and Pre-Sales Skilling bootcamps provide scheduling flexibility for learners at every level of experience—so your team can learn wherever and whenever their calendar allows. Register today for one of our upcoming Sales and Pre-Sales Skilling bootcamps: Power your AI transformation with Copilot and the Copilot stack January 22-24, 2025: IST, GMT Learn more and register SMB sales bootcamp January 28-30, 2025: PST January 29-31, 2025: IST, GMT Learn more and register Low code + Copilot Studio sales bootcamp February 11-13, 2025: PST February 12-14, 2025: IST, GMT Learn more and register Explore our in-person technical workshops Our in-person regional partner skilling workshops are meticulously crafted for the AI era. These hands-on workshops provide the essential skills required in today's dynamic landscape, including: Build and extend your own agents using pro-code capabilities Build and extend Copilots to improve business productivity Technical workshop implementing Azure VMware solutions AI-powered threat protection with Microsoft Sentinel and Microsoft Security Copilot Fortify your data security with Microsoft Purview Empowering business growth: modernizing data and analytics with Microsoft Fabric in the AI era Find an exclusive workshop coming to a city near you: Build and extend Copilots to improve business productivity Empowering business growth: modernizing data and analytics with Microsoft Fabric in the AI era Technical workshop implementing Azure VMware Solution February 25, 2025: Tokyo Fortify your data security with Microsoft Purview Build and extend your own agents using pro-code capabilities Upcoming: February 11, 2025: Bogota February 19, 2025: Cologne April 16, 2025: Osaka Technical workshop implementing Azure VMware Solution February 17, 2025: Singapore Upcoming: February 13, 2025: Bogota Empowering business growth: modernizing data and analytics with Microsoft Fabric in the AI era February 11, 2025: Bengaluru February 20, 2025: Cologne March 6, 2025: Tokyo March 12, 2025: Prague April 1, 2025: Milan April 2, 2025: Melbourne April 24, 2025: Beijing May 8, 2025: Munich AI-powered threat protection with Microsoft Sentinel and Microsoft Security Copilot February, 12, 2025: Bogota Upcoming: February 18, 2025: Singapore February 18, 2025: Cologne February 19, 2025: Seoul March 7, 2025: Tokyo March 17, 2025: Gurgaon April 2, 2025: Milan Migrate to innovate March 13, 2025: Sydney We are continually adding new events that take place around the globe, so if you do not see an event near you, be sure to bookmark the following links and check back frequently. 
In the Americas: Americas Partner Workshops In EMEA: Partner Enablement Workshops In Asia: Asia Partner Workshops Grow your AI capabilities with our Industry AI Envisioning series Join one of the Business and Capability Envisioning webinars to explore AI’s impact on your industry and how you can harness it to drive more business value and better customer experiences. You’ll gain insights from industry-specific use case examples, learn how to strategically solve business problems using an envisioning framework, and better understand how to recognize AI-driven solution opportunities for your customers and organization. Register for an upcoming webinar for your industry: AI for Retail AI for Manufacturing AI for Healthcare AI for Sustainability AI for Financial Services