Seamless blend of ILogger with ITracingService inside D365 CE Plugins
👋 Introduction

This document introduces a practical approach for integrating Application Insights logging into Dynamics 365 plugins using the ILogger interface alongside the existing ITracingService. The key advantage of this method is its minimal impact on the existing codebase, allowing developers to adopt enhanced logging capabilities without refactoring or modifying each plugin individually. By wrapping the tracing logic, this pattern promotes maintainability and simplifies the transition to modern observability practices with a single, centralized change.

🧱 The Classic Plugin Base Pattern

Most customers already use a base class pattern for plugins. This pattern wraps the IPlugin interface and provides a LocalPluginContext object that encapsulates services like IOrganizationService, ITracingService, and IPluginExecutionContext.

✅ Benefits:

- Reduces boilerplate
- Encourages separation of concerns
- Makes plugins easier to test and maintain

🧩 Base Class with try-catch in Execute

```csharp
public abstract class PluginBase : IPlugin
{
    protected internal class LocalPluginContext
    {
        public IOrganizationService OrganizationService { get; }
        public ITracingService TracingService { get; }
        public IPluginExecutionContext PluginExecutionContext { get; }

        public LocalPluginContext(IServiceProvider serviceProvider)
        {
            PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)))
                .CreateOrganizationService(PluginExecutionContext.UserId);
        }
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        try
        {
            var localContext = new LocalPluginContext(serviceProvider);
            localContext.TracingService.Trace("Plugin execution started.");
            ExecutePlugin(localContext);
            localContext.TracingService.Trace("Plugin execution completed.");
        }
        catch (Exception ex)
        {
            var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            tracingService.Trace($"Unhandled exception: {ex}");
            throw new InvalidPluginExecutionException("An error occurred in the plugin.", ex);
        }
    }

    protected abstract void ExecutePlugin(LocalPluginContext localContext);
}
```

📈 Seamless Application Insights Logging with a Tracing Adapter

💡 The Problem

Many customers want to adopt Application Insights for plugin telemetry—but hesitate due to the need to refactor hundreds of TracingService.Trace(...) calls across their plugin codebase.

💡 The Innovation

The following decorator pattern wraps ITracingService and forwards trace messages to both the original tracing service and an ILogger implementation (e.g., Application Insights). The only change required is in the base class constructor—no changes to existing trace calls.
🧩 Tracing Adapter

```csharp
public class LoggerTracingServiceDecorator : ITracingService
{
    private readonly ITracingService _tracingService;
    private readonly ILogger _logger;

    public LoggerTracingServiceDecorator(ITracingService tracingService, ILogger logger)
    {
        _tracingService = tracingService;
        _logger = logger;
    }

    public void Trace(string format, params object[] args)
    {
        _tracingService.Trace(format, args);
        _logger?.LogInformation(format, args);
    }
}
```

🧩 Updated Base Class Constructor

```csharp
public LocalPluginContext(IServiceProvider serviceProvider)
{
    PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    var standardTracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
    var logger = (ILogger)serviceProvider.GetService(typeof(ILogger));
    TracingService = new LoggerTracingServiceDecorator(standardTracingService, logger);

    OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)))
        .CreateOrganizationService(PluginExecutionContext.UserId);
}
```

This routes every trace call to both the Plugin Trace Logs and Application Insights. The traces in Application Insights allow easier troubleshooting with Kusto Query Language (KQL) and enable alerting on custom trace messages.

🧩 Using ILogger inside a plugin without a base class

The approach is identical to the base class version; the only difference is that the wrapping assignment is made once at the beginning of each plugin. This is still far better than modifying every line that calls tracingService.Trace.

```csharp
public class ExistingPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var originalTracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var logger = (ILogger)serviceProvider.GetService(typeof(ILogger));

        // Wrap the original tracing service with the decorator
        var tracingService = new LoggerTracingServiceDecorator(originalTracingService, logger);

        tracingService.Trace("Plugin execution started.");
        try
        {
            // Existing plugin logic
            tracingService.Trace("Processing business logic...");
        }
        catch (Exception ex)
        {
            tracingService.Trace($"Exception occurred: {ex}");
            throw;
        }
        tracingService.Trace("Plugin execution completed.");
    }
}
```

📣 Final Thoughts

These patterns weren’t invented in a vacuum—they were shaped by real customer needs and constraints. Whether you're modernizing legacy plugins or building new ones, this approach helps you deliver supportable solutions with minimal disruption to your existing codebase.

🚀 Export D365 CE Dataverse Org Data to Cosmos DB via the Office 365 Management Activity API
📘 Preface

This post demonstrates one method to export Dynamics 365 Customer Engagement (CE) Dataverse organization data using the Office 365 Management Activity API and Azure Functions. Customers can build a custom lake-house architecture on top of this feed, enabling advanced analytics, archiving, or ML/AI scenarios.

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference

🧭 When to Use This Custom Integration

While Microsoft offers powerful native integrations like Dataverse Synapse Link and Microsoft Fabric, this custom approach is commonly implemented and remains relevant in the following scenarios:

- Third-party observability and security tools already use this approach. Solutions such as Splunk and other enterprise-grade platforms commonly implement integrations based on the Office 365 Management Activity API to ingest tenant-wide audit data. This makes it easier for customers to align with existing observability pipelines or security frameworks.
- Customers opt out of Synapse Link or Fabric. Whether due to architectural preferences, licensing constraints, or specific compliance requirements, some customers choose not to adopt Microsoft’s native integrations. The Office 365 Management API offers a viable alternative for building custom data export and monitoring solutions tailored to their needs.

🎯 Why Use the Office 365 Management Activity API?

- Tenant-wide data capture: Captures audit logs and activity data across all Dataverse orgs in a tenant.
- Integration flexibility: Enables export to Cosmos DB, cold storage, or other platforms for analytics, compliance, or ML/AI.
- Third-party compatibility: Many enterprise tools use similar mechanisms to ingest and archive activity data.

🏗️ Architecture Overview

- Azure Function App (.NET Isolated): Built as a webhook; processes notifications, fetches audit content, and stores filtered events in Cosmos DB.
- Cosmos DB: Stores audit events for further analysis or archiving.
- Application Insights: Captures logs and diagnostics for troubleshooting.

🛠️ Step-by-Step Implementation

https://learn.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis#build-your-app

1. Prerequisites

- Azure subscription
- Dynamics 365 CE environment (Dataverse)
- Azure Cosmos DB account (SQL API)
- Office 365 tenant admin rights
- Auditing enabled in the Dataverse org

2. Register an Azure AD App

- Go to Azure Portal > Azure Active Directory > App registrations > New registration
- Note the Application (client) ID and the Directory (tenant) ID
- Create a client secret
- Grant API permissions: ActivityFeed.Read, ActivityFeed.ReadDlp, ServiceHealth.Read
- Grant admin consent

3. Set Up Cosmos DB

- Create a Cosmos DB account (SQL API)
- Create a database (officewebhook) and a container (dynamicsevents) with partition key /tenantId
- Note the endpoint URI and primary key

4. Create the Azure Function App

- Use Visual Studio or VS Code
- Create a new Azure Functions project (.NET 8 Isolated Worker)
- Add NuGet packages: Microsoft.Azure.Functions.Worker, Microsoft.Azure.Cosmos, Newtonsoft.Json
- Function logic: webhook validation, notification processing, audit content fetching, event filtering, and storage in Cosmos DB (a minimal skeleton sketch follows)
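To make the function shape concrete before wiring up configuration, here is a minimal sketch of such a function in the isolated worker model. The class and function names are illustrative (not from the original post), and the notification processing is elided; the numbered walkthrough later in this post covers each step.

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class OfficeWebhookFunction
{
    [Function("OfficeWebhook")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);

        // Webhook validation handshake: echo the token back
        // (see step 1 of the code walkthrough below).
        var query = System.Web.HttpUtility.ParseQueryString(req.Url.Query);
        var validationToken = query["validationToken"];
        if (!string.IsNullOrEmpty(validationToken))
        {
            await response.WriteStringAsync(validationToken);
            return response;
        }

        // Normal path: deserialize notifications, fetch each contentUri with a
        // bearer token, apply EntityOperationsFilter, and store matches in Cosmos DB.
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        // ... processing elided; the walkthrough below covers each step ...
        return response;
    }
}
```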
5. Configure Environment Variables

```json
{
  "OfficeApiTenantId": "<your-tenant-id>",
  "OfficeApiClientId": "<your-client-id>",
  "OfficeApiClientSecret": "<your-client-secret>",
  "CrmOrganizationUniqueName": "<your-org-name>",
  "CosmosDbEndpoint": "<your-cosmos-endpoint>",
  "CosmosDbKey": "<your-cosmos-key>",
  "CosmosDbDatabaseId": "officewebhook",
  "CosmosDbContainerId": "dynamicsevents",
  "EntityOperationsFilter": { "incident": ["create", "update"], "account": ["create"] }
}
```

Note: app setting values are strings, so when deployed, EntityOperationsFilter should be stored as a serialized JSON string and deserialized inside the function.

6. Deploy the Function App

- Build and publish using Azure Functions Core Tools or Visual Studio
- Restart the Function App from the Azure Portal
- Monitor logs via Application Insights

🔔 How to Subscribe to the Office 365 Management Activity API for Audit Notifications

To receive audit notifications, you must first subscribe to the Office 365 Management Activity API. This is a two-step process:

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#start-a-subscription

1. Fetch an OAuth2 Token

Authenticate using your Azure AD app credentials to get a bearer token:

https://learn.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis

```powershell
# Define your Azure AD app credentials
$tenantId = "<your-tenant-id>"
$clientId = "<your-client-id>"
$clientSecret = "<your-client-secret>"

# Prepare the request body for the token fetch
$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = "https://manage.office.com/.default"
}

# Fetch the OAuth2 token
$tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body
$token = $tokenResponse.access_token
```

2. Subscribe to the Content Type

Use the token to subscribe to the desired content type (e.g., Audit.General):

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#working-with-the-office-365-management-activity-api

```powershell
$contentType = "Audit.General"
$headers = @{
    Authorization  = "Bearer $token"
    "Content-Type" = "application/json"
}
$uri = "https://manage.office.com/api/v1.0/$tenantId/activity/feed/subscriptions/start?contentType=$contentType"
$response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers
$response
```

⚙️ How the Azure Function Works

🔸 Trigger

The Azure Function is triggered by notifications from the Office 365 Management Activity API. These notifications include audit events across your entire Azure tenant—not just Dynamics 365.

🔸 Filtering Logic

Each notification is evaluated against your business rules:

- Organization match
- Entity type (e.g., incident, account)
- Operation type (e.g., create, update)

These filters are defined in the EntityOperationsFilter environment variable:

```json
{ "incident": ["create", "update"], "account": ["create"] }
```

🔸 Processing

If the event matches your filters, the function fetches the full audit data and stores it in Cosmos DB. If not, the event is ignored.

🔍 Code Explanation: The Run Method

1. Webhook Validation

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#webhook-validation

```csharp
string validationToken = query["validationToken"];
if (!string.IsNullOrEmpty(validationToken))
{
    await response.WriteStringAsync(validationToken);
    response.StatusCode = HttpStatusCode.OK;
    return response;
}
```
2. Notification Handling

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#receiving-notifications

```csharp
var notifications = JsonConvert.DeserializeObject<dynamic[]>(requestBody);
foreach (var notification in notifications)
{
    if (notification.contentType == "Audit.General" && notification.contentUri != null)
    {
        // Process each notification
    }
}
```

3. Bearer Token Fetch

```csharp
string bearerToken = await GetBearerTokenAsync(log);
if (string.IsNullOrEmpty(bearerToken)) continue;
```

4. Fetch Audit Content

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#retrieve-content

```csharp
var requestMsg = new HttpRequestMessage(HttpMethod.Get, contentUri);
requestMsg.Headers.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
var result = await httpClient.SendAsync(requestMsg);
if (!result.IsSuccessStatusCode) continue;
var auditContentJson = await result.Content.ReadAsStringAsync();
```

5. Deserialize and Filter Audit Records

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#dynamics-365-schema

```csharp
var auditRecords = JsonConvert.DeserializeObject<dynamic[]>(auditContentJson);
foreach (var eventData in auditRecords)
{
    string orgName = eventData.CrmOrganizationUniqueName ?? "";
    string workload = eventData.Workload ?? "";
    string entityName = eventData.EntityName ?? "";
    string operation = eventData.Message ?? "";

    if (workload != "Dynamics 365" && workload != "CRM" && workload != "Power Platform") continue;
    if (!entityOpsFilter.ContainsKey(entityName)) continue;
    if (!entityOpsFilter[entityName].Contains(operation)) continue;

    // Store in Cosmos DB
}
```

6. Store in Cosmos DB

```csharp
var cosmosDoc = new
{
    id = Guid.NewGuid().ToString(),
    tenantId = notification.tenantId,
    raw = eventData
};
var partitionKey = (string)notification.tenantId;
var resp = await cosmosContainer.CreateItemAsync(cosmosDoc, new PartitionKey(partitionKey));
```

7. Logging and Error Handling

https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#errors

```csharp
try
{
    // ... CreateItemAsync call from step 6 ...
    log.LogInformation($"Stored notification in Cosmos DB for contentUri: {notification.contentUri}, DocumentId: {cosmosDoc.id}");
}
catch (Exception dbEx)
{
    log.LogError($"Error storing notification in Cosmos DB: {dbEx.Message}");
}
```

🧠 Conclusion

This solution provides a robust, extensible pattern for exporting Dynamics 365 CE Dataverse org data to Cosmos DB using the Office 365 Management Activity API. Solution architects can use this as a reference for building or evaluating similar integrations, especially when working with third-party archiving or analytics solutions.

🚀 Scaling Dynamics 365 CRM Integrations in Azure: The Right Way to Use the SDK ServiceClient
This blog explores common pitfalls and presents a scalable pattern using the .Clone() method to ensure thread safety, avoid redundant authentication, and prevent SNAT port exhaustion.

🗺️ Connection Factory with Optimized Configuration

The first step to building a scalable integration is to configure your ServiceClient properly. Here's how to set up a connection factory that includes all the necessary performance optimizations:

```csharp
public static class CrmClientFactory
{
    private static readonly ServiceClient _baseClient;

    static CrmClientFactory()
    {
        ThreadPool.SetMinThreads(100, 100);                  // Faster thread ramp-up
        ServicePointManager.DefaultConnectionLimit = 65000;  // Avoid connection bottlenecks
        ServicePointManager.Expect100Continue = false;       // Reduce HTTP latency
        ServicePointManager.UseNagleAlgorithm = false;       // Improve responsiveness

        _baseClient = new ServiceClient(connectionString);
        _baseClient.EnableAffinityCookie = false;            // Distribute load across Dataverse web servers
    }

    public static ServiceClient GetClient() => _baseClient.Clone();
}
```

❌ Anti-Pattern: One Static Client for All Operations

A common anti-pattern is to create a single static instance of ServiceClient and reuse it across all operations:

```csharp
public static class CrmClientFactory
{
    private static readonly ServiceClient _client = new ServiceClient(connectionString);
    public static ServiceClient GetClient() => _client;
}
```

This struggles under load due to thread contention, throttling, and unpredictable behavior.

⚠️ Misleading Fix: New Client Per Request

To avoid thread contention, some developers create a new ServiceClient per request. However, the code below does not truly create a separate connection unless the RequireNewInstance=True connection string parameter or the useUniqueInstance: true constructor parameter is used. These details are often missed, causing the same connection to be shared across threads with high lock times that compound the overall slowness.

```csharp
public async Task Run(HttpRequest req)
{
    var client = new ServiceClient(connectionString);
    // Use client here
}
```

Even with those flags set, repeating OAuth authentication every time a ServiceClient is constructed risks auth failures and SNAT exhaustion in high-throughput service integration scenarios.

✅ Best Practice: Clone Once, Reuse Per Request

The best practice is to create a single authenticated ServiceClient and use its .Clone() method to generate lightweight, thread-safe copies for each request:

```csharp
public static class CrmClientFactory
{
    private static readonly ServiceClient _baseClient = new ServiceClient(connectionString);
    public static ServiceClient GetClient() => _baseClient.Clone();
}
```

Then, in your Azure Function or App Service operation:

❗ Avoid calling the factory again inside helper methods. Clone once and pass the client down the call stack.

```csharp
public async Task HandleRequest()
{
    var client = CrmClientFactory.GetClient(); // Clone once per request
    await DoSomething1(client);
    await DoSomething2(client);
}

public async Task DoSomething1(ServiceClient client)
{
    var account = new Entity("account") { ["name"] = "Contoso" };
    await client.CreateAsync(account); // Use the passed-down client as is; no extra cloning
}
```
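To round this out, here is a small hedged usage sketch of the factory pattern end to end; the class name and the WhoAmI call are illustrative, not from the original post.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.PowerPlatform.Dataverse.Client;

public static class FactoryUsageExample
{
    public static async Task RunAsync()
    {
        // Clone once per logical request and dispose the clone when done;
        // the authenticated base connection stays with the factory's client.
        using ServiceClient client = CrmClientFactory.GetClient();

        var response = (WhoAmIResponse)await client.ExecuteAsync(new WhoAmIRequest());
        Console.WriteLine($"Connected as user {response.UserId}");
    }
}
```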
🧵 Parallel Processing with Batching

When working with large datasets, combining parallelism with batching using ExecuteMultiple can significantly improve throughput—if done correctly.

🔄 Common Mistake: Dynamic Batching Inside Parallel Loops

Many implementations dynamically batch records inside Parallel.ForEach, assuming consistent batch sizes. But in practice, this leads to:

- Inconsistent batch sizes (1 to 100+)
- Unpredictable performance
- Difficult-to-analyze telemetry

✅ Fix: Chunk Before You Batch

```csharp
public static List<List<Entity>> ChunkRecords(List<Entity> records, int chunkSize)
{
    return records
        .Select((record, index) => new { record, index })
        .GroupBy(x => x.index / chunkSize)
        .Select(g => g.Select(x => x.record).ToList())
        .ToList();
}

public static void ProcessBatches(List<Entity> records, ServiceClient serviceClient, int batchSize = 100, int maxParallelism = 5)
{
    var batches = ChunkRecords(records, batchSize);

    Parallel.ForEach(batches, new ParallelOptions { MaxDegreeOfParallelism = maxParallelism }, batch =>
    {
        using var service = serviceClient.Clone(); // Clone once per thread

        var executeMultiple = new ExecuteMultipleRequest
        {
            Requests = new OrganizationRequestCollection(),
            Settings = new ExecuteMultipleSettings
            {
                ContinueOnError = true,
                ReturnResponses = false
            }
        };

        foreach (var record in batch)
        {
            executeMultiple.Requests.Add(new CreateRequest { Target = record });
        }

        service.Execute(executeMultiple);
    });
}
```

🚫 Avoiding Throttling: Plan, Don’t Just Retry

While it’s possible to implement retry logic for HTTP 429 responses using the Retry-After header (a sketch follows this list), the best approach is to avoid throttling altogether.

✅ Best Practices

- Control DOP and batch size: Keep them conservative and telemetry-driven.
- Use alternate app registrations: Distribute load across identities, but do not overload the Dataverse org.
- Avoid triggering sync plugins or real-time workflows: These amplify load.
- Address long-running queries: Optimize slow operations, with Microsoft support's help if needed, before scaling.
- Relax time constraints: There’s no need to finish a job in 1 hour if it can be done safely in 3.
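If you do add a retry safety net on the SDK path, a minimal sketch follows. It assumes the documented Dataverse service-protection behavior, where throttling surfaces as a FaultException&lt;OrganizationServiceFault&gt; carrying a Retry-After TimeSpan in ErrorDetails; verify the detail key against your SDK version before relying on it.

```csharp
using System;
using System.ServiceModel;
using System.Threading;
using Microsoft.Xrm.Sdk;

public static class RetryHelper
{
    // Minimal sketch: honor the server's Retry-After pacing hint rather than
    // retrying immediately. Assumes the "Retry-After" ErrorDetails key per
    // Dataverse service protection documentation.
    public static T ExecuteWithRetry<T>(Func<T> action, int maxRetries = 3)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (FaultException<OrganizationServiceFault> ex) when (
                attempt < maxRetries &&
                ex.Detail.ErrorDetails.TryGetValue("Retry-After", out var retryAfter))
            {
                Thread.Sleep((TimeSpan)retryAfter); // back off as instructed, then retry
            }
        }
    }
}
```

Usage: `var result = RetryHelper.ExecuteWithRetry(() => service.Execute(executeMultiple));`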
🌐 When to Consider Horizontal Scaling

Even with all the right optimizations, your integration may still hit limits in the HTTP stack—such as:

- WCF binding timeouts
- SNAT port exhaustion
- Slowness not explained by Dataverse telemetry

In these cases, horizontal scaling becomes essential.

- App Services: Easily scale out using autoscale rules.
- Function Apps (service model): Scale well with HTTP or Service Bus triggers.
- Scheduled Functions: Require deduplication logic to avoid duplicate processing.
- On-premises VMs: D365 SDK-based integrations hosted on VM infrastructure need horizontal scaling by adding servers.

🧠 Final Thoughts

Scaling CRM integrations in Azure is about resilience, observability, and control. Follow these patterns:

- Clone once per thread
- Pre-chunk batches
- Tune with telemetry evidence
- Avoid overload when you can
- Scale horizontally when needed—but wisely

Build integrations that are fast, reliable, and future-proof.

Token Cache Service Inside D365 CE Plugin Base

Introduction

This document explores a performance optimization technique for Dynamics 365 plugins: centralized token caching using static variables in a plugin base class. Since plugin instances are recreated for every request, holding state in instance variables is not feasible. However, static variables—such as a ConcurrentDictionary—defined in a base class can persist across executions, enabling efficient reuse of authentication tokens. This approach avoids tens or even hundreds of thousands of daily calls to identity management endpoints, which can overload those services. Additionally, plugin execution time can increase by 500 milliseconds to one second per request if authentication is performed repeatedly. If in-memory caching fails or is not viable, storing tokens in Dataverse and retrieving them as needed may be a better fallback than re-authenticating each time.

🔐 TokenService Implementation for Plugin Token Caching

To optimize authentication performance in Dynamics 365 plugins, a custom TokenService is implemented and injected into the plugin base architecture. This service enables centralized token caching using a static, read-only ConcurrentDictionary, which persists across plugin executions.

🧱 Design Overview

The TokenService exposes two methods:

- GetAccessToken(Guid key) – retrieves a cached token if it's still valid.
- SetAccessToken(Guid key, string token, DateTime expiry) – stores a new token with its expiry.

The core of the service is a static dictionary:

```csharp
private static readonly ConcurrentDictionary<Guid, CachedAccessToken> TokenCache = new();
```

This dictionary is shared across plugin executions because it's defined in the base class. This is crucial since plugin instances are recreated per request and cannot hold instance-level state.

🧩 Integration into LocalPluginContext

The TokenService is injected into the well-known LocalPluginContext alongside other services like IOrganizationService, ITracingService, and IPluginExecutionContext. This makes the token service available to all child plugins via the context object.

```csharp
public ITokenService TokenService { get; }

public LocalPluginContext(IServiceProvider serviceProvider)
{
    // Existing service setup...
    TokenService = new TokenService(TracingService);
}
```

🔁 Token Retrieval Logic

GetAccessToken checks whether a token exists and whether it’s about to expire:

```csharp
public string GetAccessToken(Guid key)
{
    if (TokenCache.TryGetValue(key, out var cachedToken))
    {
        var timeRemaining = (cachedToken.Expiry - DateTime.UtcNow).TotalMinutes;
        if (timeRemaining > 2)
        {
            _tracingService.Trace($"Using cached token. Expires in {timeRemaining} minutes.");
            return cachedToken.Token;
        }
    }
    return null;
}
```

If the token is expired or missing, it returns null. It does not fetch a new token itself.

🔄 Token Refresh Responsibility

The responsibility to fetch a new token lies with the child plugin, because:

- It has access to secure configuration values (e.g., client ID, secret, tenant).
- It knows the context of the external service being called.

Once the child plugin fetches a new token, it calls:

```csharp
TokenService.SetAccessToken(key, token, expiry);
```

This updates the shared cache for future executions.
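One gap worth noting in this design: under heavy concurrency, many executions can observe an expired token at the same instant and all call the identity provider at once. A hypothetical refinement (not part of the article's TokenService) is a per-key single-flight guard; the sketch below shows members that could be added inside TokenService, assuming System.Threading and System.Collections.Concurrent are available.

```csharp
// Hypothetical members inside TokenService: serialize refreshes per key so
// only one execution calls the identity provider when a token expires.
private static readonly ConcurrentDictionary<Guid, SemaphoreSlim> RefreshLocks = new();

public string GetOrRefreshToken(Guid key, Func<(string Token, DateTime Expiry)> fetchToken)
{
    var cached = GetAccessToken(key);
    if (cached != null) return cached;

    var gate = RefreshLocks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
    gate.Wait();
    try
    {
        // Re-check after acquiring the lock: another caller may have refreshed already.
        cached = GetAccessToken(key);
        if (cached != null) return cached;

        var (token, expiry) = fetchToken();
        SetAccessToken(key, token, expiry);
        return token;
    }
    finally
    {
        gate.Release();
    }
}
```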
🧱 Classic Plugin Base Pattern (Preserved)

```csharp
public abstract class PluginBase : IPlugin
{
    protected internal class LocalPluginContext
    {
        public IOrganizationService OrganizationService { get; }
        public ITracingService TracingService { get; }
        public IPluginExecutionContext PluginExecutionContext { get; }

        public LocalPluginContext(IServiceProvider serviceProvider)
        {
            PluginExecutionContext = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            TracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            OrganizationService = ((IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)))
                .CreateOrganizationService(PluginExecutionContext.UserId);
        }
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        try
        {
            var localContext = new LocalPluginContext(serviceProvider);
            localContext.TracingService.Trace("Plugin execution started.");
            ExecutePlugin(localContext);
            localContext.TracingService.Trace("Plugin execution completed.");
        }
        catch (Exception ex)
        {
            var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            tracingService.Trace($"Unhandled exception: {ex}");
            throw new InvalidPluginExecutionException("An error occurred in the plugin.", ex);
        }
    }

    protected abstract void ExecutePlugin(LocalPluginContext localContext);
}
```

🧩 TokenService Implementation

```csharp
public interface ITokenService
{
    string GetAccessToken(Guid key);
    void SetAccessToken(Guid key, string token, DateTime expiry);
}

public sealed class TokenService : ITokenService
{
    private readonly ITracingService _tracingService;
    private static readonly ConcurrentDictionary<Guid, CachedAccessToken> TokenCache = new();

    public TokenService(ITracingService tracingService)
    {
        _tracingService = tracingService;
    }

    public string GetAccessToken(Guid key)
    {
        if (TokenCache.TryGetValue(key, out var cachedToken))
        {
            var timeRemaining = (cachedToken.Expiry - DateTime.UtcNow).TotalMinutes;
            if (timeRemaining > 2)
            {
                _tracingService.Trace($"Using cached token. Expires in {timeRemaining} minutes.");
                return cachedToken.Token;
            }
        }
        return null;
    }

    public void SetAccessToken(Guid key, string token, DateTime expiry)
    {
        TokenCache[key] = new CachedAccessToken(token, expiry);
        _tracingService.Trace($"Token stored for key {key} with expiry at {expiry}.");
    }

    private class CachedAccessToken
    {
        public string Token { get; }
        public DateTime Expiry { get; }

        public CachedAccessToken(string token, DateTime expiry)
        {
            Token = token;
            Expiry = expiry;
        }
    }
}
```

🧩 Add TokenService to LocalPluginContext

```csharp
public ITokenService TokenService { get; }

public LocalPluginContext(IServiceProvider serviceProvider)
{
    // ... existing setup ...
    TokenService = new TokenService(TracingService);
}
```

🧪 Full Child Plugin Example with Secure Config and Token Usage

```csharp
public class ExternalApiPlugin : PluginBase
{
    private readonly SecureSettings _settings;

    public ExternalApiPlugin(string unsecureConfig, string secureConfig)
    {
        // Parse secure config into settings object
        _settings = JsonConvert.DeserializeObject<SecureSettings>(secureConfig);
    }

    protected override void ExecutePlugin(LocalPluginContext localContext)
    {
        localContext.TracingService.Trace("ExternalApiPlugin execution started.");

        // Get token
        string token = AccessTokenGenerator(_settings, localContext);

        // Use token to call external API
        CallExternalService(token, localContext.TracingService);

        localContext.TracingService.Trace("ExternalApiPlugin execution completed.");
    }

    private string AccessTokenGenerator(SecureSettings settings, LocalPluginContext localContext)
    {
        var token = localContext.TokenService.GetAccessToken(settings.TokenKeyGuid);
        if (!string.IsNullOrEmpty(token))
        {
            var payload = DecodeJwtPayload(token);
            var expiryUnix = long.Parse(payload["exp"]);
            var expiryDate = DateTimeOffset.FromUnixTimeSeconds(expiryUnix).UtcDateTime;
            if ((expiryDate - DateTime.UtcNow).TotalMinutes > 2)
            {
                return token;
            }
        }

        // Fetch new token
        var newToken = FetchTokenFromOAuth(settings);
        var newPayload = DecodeJwtPayload(newToken);
        var newExpiry = DateTimeOffset.FromUnixTimeSeconds(long.Parse(newPayload["exp"])).UtcDateTime;
        localContext.TokenService.SetAccessToken(settings.TokenKeyGuid, newToken, newExpiry);
        return newToken;
    }

    private Dictionary<string, string> DecodeJwtPayload(string jwt)
    {
        var parts = jwt.Split('.');
        // JWT payloads are base64url-encoded: map the URL-safe alphabet back
        // to standard base64 before padding and decoding
        var payload = parts[1].Replace('-', '+').Replace('_', '/');
        payload = payload.PadRight(payload.Length + (4 - payload.Length % 4) % 4, '=');
        var bytes = Convert.FromBase64String(payload);
        var json = Encoding.UTF8.GetString(bytes);
        return JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
    }

    private string FetchTokenFromOAuth(SecureSettings settings)
    {
        // Simulated token fetch logic
        return "eyJhbGciOi..."; // JWT token
    }

    private void CallExternalService(string token, ITracingService tracingService)
    {
        tracingService.Trace("Calling external service with token...");
        // Simulated API call
    }
}
```

🧩 Wrapping It All Together

By combining these patterns into the base class–child class structure, we get a plugin framework that is:

- ✅ Familiar and extensible
- ⚡️ Optimized for performance with token caching

🗣️ Final Thoughts

These patterns weren’t invented in a vacuum—they were shaped by real customer needs and constraints. Whether you're modernizing legacy plugins or building new ones, I hope these ideas help you deliver more robust, scalable, and supportable solutions.