.net core
Dotnet core Oracle 23c SSL connection not working on Linux environment and works on Windows
Our DataDirect ADO.NET Oracle driver is having an issue with SSL connections on the Linux platform, while the same connection works in a Windows environment against an Oracle 23c server. Are there any known limitations with .NET Core on Linux regarding SSL/TLS connections?

Trace:

InnerException: System.IO.IOException
Message: Unable to read data from the transport connection: Connection reset by peer.
Source: System.Net.Sockets
Stack Trace:
   at System.Net.Sockets.NetworkStream.Read(Span`1 buffer)
   at System.Net.Security.SslStream.EnsureFullTlsFrameAsync[TIOAdapter](TIOAdapter adapter)
   at System.Net.Security.SslStream.ReadAsyncInternal[TIOAdapter](TIOAdapter adapter, Memory`1 buffer)
   at System.Net.Security.SslStream.Read(Byte[] buffer, Int32 offset, Int32 count)

The same application works on the Windows environment.
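As a way to narrow this down, a minimal TLS handshake probe against the Oracle listener might look like this, independent of the driver (the host, port, and pinned protocol are placeholder assumptions to adjust for the environment; Linux behavior often differs from Windows because .NET uses OpenSSL and its system-wide policy there):

```csharp
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;

// Minimal TLS handshake probe. Host and port are placeholders;
// 2484 is a typical Oracle TCPS listener port, adjust as needed.
const string host = "oracle-db.example.com";
const int port = 2484;

using var tcp = new TcpClient();
tcp.Connect(host, port);

using var ssl = new SslStream(tcp.GetStream(), leaveInnerStreamOpen: false);
await ssl.AuthenticateAsClientAsync(new SslClientAuthenticationOptions
{
    TargetHost = host,
    // Pin the protocol to compare behavior between Linux and Windows:
    EnabledSslProtocols = SslProtocols.Tls12,
});

Console.WriteLine($"Handshake OK: {ssl.SslProtocol}, cipher: {ssl.NegotiatedCipherSuite}");
```

If this probe also fails with "connection reset by peer" on Linux only, the problem is at the TLS layer (protocol/cipher negotiation between OpenSSL and the Oracle server) rather than in the ADO.NET driver itself.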
Disciplined Guardrail Development in enterprise application with GitHub Copilot

What Is Disciplined Guardrail-Based Development?

In AI-assisted software development, approaches like Vibe Coding—which prioritize momentum and intuition—often fail to ensure code quality and maintainability. To address this, Disciplined Guardrail-Based Development introduces structured rules ("guardrails") that guide AI systems during coding and maintenance tasks, ensuring consistent quality and reliability.

To get AI (LLMs) to generate appropriate code, developers must provide clear and specific instructions. Two key elements are essential:

- What to build – Clarifying requirements and breaking down tasks
- How to build it – Defining the application architecture

How these two elements are handled depends on the development methodology or process being used.

How to Set Up Disciplined Guardrails in GitHub Copilot

To implement disciplined guardrail-based development with GitHub Copilot, two key configuration features are used:

1. Custom Instructions (.github/copilot-instructions.md): This file allows you to define persistent instructions that GitHub Copilot will always refer to when generating code.
- Purpose: Establish coding standards, architectural rules, naming conventions, and other quality guidelines.
- Best Practice: Instead of placing all instructions in a single file, split them into multiple modular files and reference them accordingly. This improves maintainability and clarity.
- Example Use: You might define rules like using camelCase for variables, enforcing error boundaries in React, or requiring TypeScript for all new code.
https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions

2. Chat Modes (.github/chatmodes/*.chatmode.md): These files define specialized chat modes tailored to specific tasks or workflows.
- Purpose: Customize Copilot's behavior for different development contexts (e.g., debugging, writing tests, refactoring).
- Structure: Each .chatmode.md file includes metadata and instructions that guide Copilot's responses in that mode.
- Example Use: A debug.chatmode.md might instruct Copilot to focus on identifying and resolving runtime errors, while a test.chatmode.md could prioritize generating unit tests with specific frameworks.
https://code.visualstudio.com/docs/copilot/customization/custom-chat-modes

The next sections describe how to create each of these files.

#1: Custom Instructions

With custom instructions, you define instructions that are always provided to GitHub Copilot. The prepared files are always referenced during chat sessions and passed to the LLM (this can also be confirmed from the chat history). One important note: split the content into several files and include links to those files within the .github/copilot-instructions.md file, because a single file becomes too long if everything is written in it.

There are mainly two types of content that should be described in custom instructions:

A: Development Process (≒ outcome + creation method)
- What documents or code will be created: requirements specification, design documents, task breakdown tables, implementation code, etc.
- In what order and by whom they will be created: for example, proceed in the order of requirements definition → design → task breakdown → coding.

B: Application Architecture
- How will the outcomes defined in A be created?
- What technology stack and component structure will be used?

A concrete example of copilot-instructions.md is shown below.
```markdown
# Development Rules

## Architecture
- When performing design and coding tasks, always refer to the following architecture documents and strictly follow them as rules.

### Product Overview
- Document the product overview in `.github/architecture/product.md`

### Technology Stack
- Document the technologies used in `.github/architecture/techstack.md`

### Coding Standards
- Document coding standards in `.github/architecture/codingrule.md`

### Project Structure
- Document the project directory structure in `.github/architecture/structure.md`

### Glossary (Japanese-English)
- Document the list of terms used in the project in `.github/architecture/dictionary.md`

## Development Flow
- Follow a disciplined development flow and execute the following four stages in order (proceed to the next stage only after completing the current one):
  1. Requirement Definition
  2. Design
  3. Task Breakdown
  4. Coding

### 1. Requirement Definition
- Document requirements in `docs/[subsystem_name]/[business_name]/requirement.md`
- Use `requirement.chatmode.md` to define requirements
- Focus on clarifying objectives, understanding the current situation, and setting success criteria
- Once requirements are defined, obtain user confirmation before proceeding to the next stage

### 2. Design
- Document design in `docs/[subsystem_name]/[business_name]/design.md`
- Use `design.chatmode.md` to define the design
- Define UI, module structure, and interface design
- Once the design is complete, obtain user confirmation before proceeding to the next stage

### 3. Task Breakdown
- Document tasks in `docs/[subsystem_name]/[business_name]/tasks.md`
- Use `tasks.chatmode.md` to define tasks
- Break down tasks into executable units and set priorities
- Once task breakdown is complete, obtain user confirmation before proceeding to the next stage

### 4. Coding
- Implement code under `src/[subsystem_name]/[business_name]/`
- Perform coding task by task
- Update progress in `docs/[subsystem_name]/[business_name]/tasks.md`
- Report to the user upon completion of each task
```

Note: The only file that is always sent to the LLM is `copilot-instructions.md`. Documents linked from there (such as `product.md` or `techstack.md`) are not guaranteed to be read by the LLM. That said, a reasonably capable LLM will usually review these files before proceeding with the work. If the LLM does not properly reference each file, you may explicitly add these architecture documents to the context. Another approach is to instruct the LLM to review these files in the **chat mode settings**, which will be described later.

There are various "schools of thought" regarding application architecture, and it is still an ongoing challenge to determine exactly what should be defined and what documents should be created. The choice of architecture depends on factors such as the business context, development scale, and team structure, so it is difficult to prescribe a one-size-fits-all approach. That said, as a general guideline, it is desirable to summarize the following:

- Product Overview: Overview of the product, service, or business, including its overall characteristics
- Technology Stack: What technologies will be used to develop the application?
- Project Structure: How will folders and directories be organized during development?
- Module Structure: How will the application be divided into modules?
- Coding Rules: Rules for handling exceptions, naming conventions, and other coding practices

Writing all of this from scratch can be challenging.
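For orientation, the repository layout implied by the example above might look like this (the file names follow the sample; the tree itself is an assumption rather than a fixed convention):

```text
.github/
  copilot-instructions.md
  architecture/
    product.md
    techstack.md
    codingrule.md
    structure.md
    dictionary.md
  chatmodes/
    requirement.chatmode.md
    design.chatmode.md
    tasks.chatmode.md
docs/
  [subsystem_name]/[business_name]/
    requirement.md
    design.md
    tasks.md
src/
  [subsystem_name]/[business_name]/
```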
A practical approach is to create template information with the help of Copilot and then refine it. Specifically, you can:

- Use tools like M365 Copilot Researcher to create content based on general principles
- Analyze a prototype application and have the architecture information summarized (using Ask mode or Edit mode, feed the solution files to a capable LLM for analysis)

However, in most cases, the output cannot be used as-is:

- The structure may not be analyzed correctly (hallucinations may occur)
- Project-specific practices and rules may not be captured

Use the generated content as a starting point, and then refine it to create architecture documentation tailored to your own project.

When creating architecture documents for enterprise-scale application development, a useful approach is to distinguish between the foundational parts and the individual application parts. Disciplined guardrail-based development is particularly effective when building multiple applications in a "cookie-cutter" style on top of a common foundation. A clear example of this is Data-Oriented Architecture (DOA). In DOA, individual business applications are built on top of a shared database that serves as the overall common foundation. In this case, the foundational parts (the database layer) should not be modified arbitrarily by individual developers. Instead, focus on how to standardize the development of the individual application parts while ensuring consistency. Architecture documentation should be organized with this distinction in mind, emphasizing the uniformity of application-level development built upon the stable foundation.

#2: Chat Modes

By default, GitHub Copilot provides three chat modes: Ask, Edit, and Agent. However, by creating files under .github/chatmodes/*.chatmode.md, you can customize the Agent mode to create chat modes tailored to specific tasks. Specifically, you can configure the following three aspects. Functionally, this allows you to perform a specific task without having to manually change the model or tools, or write detailed instructions each time:

- model: Specify the default LLM to use (Note: The user can still manually switch to another LLM if desired)
- tools: Restrict which tools can be used (Note: The user can still manually select other tools if desired)
- custom instructions: Provide custom instructions specific to this chat mode

A concrete example of .github/chatmodes/*.chatmode.md is shown below.

```markdown
---
description: This mode is used for requirement definition tasks.
model: Claude Sonnet 4
tools: ['changes', 'codebase', 'editFiles', 'fetch', 'findTestFiles', 'githubRepo', 'new', 'openSimpleBrowser', 'runCommands', 'search', 'searchResults', 'terminalLastCommand', 'terminalSelection', 'usages', 'vscodeAPI', 'mssql_connect', 'mssql_disconnect', 'mssql_list_servers', 'mssql_show_schema']
---

# Requirement Definition Mode

In this mode, requirement definition tasks are performed. Specifically, the project requirements are clarified, and necessary functions and specifications are defined. Based on instructions or interviews with the user, document the requirements according to the format below. If any specifications are ambiguous or unclear, Copilot should ask the user questions to clarify them.
## File Storage Location

Save the requirement definition file in the following location:

- Save as `requirement.md` under the directory `docs/[subsystem_name]/[business_name]/`

## Requirement Definition Format

While interviewing the user, document the following items in the Markdown file:

- **Subsystem Name**: The name of the subsystem to which this business belongs
- **Business Name**: The name of the business
- **Overview**: A summary of the business
- **Use Cases**: Clarify who uses this business, when/under what circumstances, and for what purpose, using the following structure:
  - **Who (Persona)**: User or system roles
  - **When/Under What Circumstances (Scenario)**: Timing when the business is executed
  - **Purpose (Goal)**: Objectives or expected outcomes of the business
- **Importance**: The importance of the business (e.g., High, Medium, Low)
- **Acceptance Criteria**: Conditions that must be satisfied for the requirement to be considered met
- **Status**: Current state of the requirement (e.g., In Progress, Completed)

## After Completion

- Once requirement definition is complete, obtain user confirmation and proceed to the next stage (Design).
```

Tips for Creating Chat Modes

Here are some tips for creating custom chat modes:

- Align with the development process: Create chat modes based on the workflow and the deliverables.
- Instruct the LLM to ask the user when unsure: Direct the LLM to request clarification from the user if any information is missing.
- Clarify what deliverables to create and where to save them: Make it explicit which outputs are expected and where they are stored.

The second point is particularly important. Many LLMs tend to respond to user prompts in an overly agreeable manner (known as sycophancy). As a result, they may fill in unspecified requirements or perform tasks that were not requested, often with the intention of being helpful. The key difference between Ask/Edit modes and Agent mode is that Agent mode allows the LLM to proactively ask questions and engage in dialogue with the user. However, unless the user explicitly includes a prompt such as "ask if you don't know," the AI rarely initiates questions on its own. By creating a custom chat mode and instructing the LLM to "ask the user when unsure," you can fully leverage the benefits of Agent mode.

About Tools

You can easily check tool names from the list of available tools in the command palette. Alternatively, it can be convenient to open the custom chat mode file and edit the tool configuration there. You can specify not only MCP server functionality but also built-in tools and Copilot Extensions.

Example of Actual Operation

An example interaction when using this chat mode works as follows:

- The LLM behaves according to the custom instructions defined in the chat mode.
- When you answer questions from GitHub Copilot, the LLM uses that information to reason and proceed with the task.
- However, the output is not guaranteed to be correct (hallucinations may occur), so a human should review the output and make any necessary corrections before committing.

The basic approach to disciplined guardrail-based development has been covered above.
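Before moving on, note that the same pattern extends to the other phases. A design-phase chat mode might be sketched like this (the model choice, tool list, and section layout are assumptions, not prescriptions):

```markdown
---
description: This mode is used for design tasks.
model: Claude Sonnet 4
tools: ['changes', 'codebase', 'editFiles', 'search', 'usages']
---

# Design Mode

In this mode, design tasks are performed based on an approved `requirement.md`.
If anything is ambiguous, ask the user instead of guessing.

## File Storage Location
- Save the design as `design.md` under `docs/[subsystem_name]/[business_name]/`

## Design Format
- **UI Design**: Screens, inputs, and transitions
- **Module Structure**: Components and their responsibilities
- **Interface Design**: Public methods, parameters, and return values

## After Completion
- Obtain user confirmation, then proceed to Task Breakdown.
```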
In actual business application development, it is also helpful to understand the following two points:

- Referencing the database schema
- Integrated management of design documents and implementation code (important)

Reading the Database Schema

In business application development, requirements definition and functional design are often based on the schema information of entities. There are two main ways to let the LLM read schema information:

- Dynamically read the schema from a development/test DB server using MCP or similar tools.
- Include a file containing schema information within the project and read from it.

With the first approach, a development/test database is prepared, and schema information is read via an MCP server or Copilot Extensions. For SQL Server or Azure SQL Database, an MCP server is available, but its setup can be cumbersome, so using Copilot Extensions is often the easier of the two. This approach is often seen online, but it is not recommended, for the following reasons:

- Setting up an MCP server or Copilot Extensions can be cumbersome (installation, connection string management, etc.)
- It is time-consuming (the LLM needs schema information → reads the schema → writes code based on it)

Connecting to a DB server via MCP or similar tools is useful for scenarios such as "querying a database in natural language" for non-engineers performing data analysis. However, if the goal is simply to obtain the schema information of entities needed for business application development, the method described below is much simpler.

Storing Schema Information Within the Project

Place a file containing the schema information inside the project, in any of the following formats, and write custom instructions so that development refers to this file:

- DDL (full CREATE DATABASE scripts)
- O/R mapper files (e.g., Entity Framework context files)
- Text files documenting schema information, etc.

DDL files are difficult for humans to read, but AI (LLMs) can read and accurately understand them. In .NET + SQL development, it is recommended to include both the DDL and the EF O/R mapper files. Additionally, if you include links to these files in your architecture documents and chat mode instructions, the LLM can generate code while understanding the schema with high accuracy.

Integrated Management of Design Documents and Implementation Code

Disciplined guardrail-based development with LLMs has made it practical to synchronize and manage design documents and implementation code together—something that was traditionally very difficult. In long-standing systems, it is common for old design documents to become largely useless. During maintenance, code changes are often prioritized; as a result, updating and maintaining design documents tends to be neglected, leading to a significant divergence between design documents and the actual code.

For these reasons, the following have been considered best practices (though often not followed in reality):

- Limit requirements and external design documents to the minimum necessary.
- Do not create internal design documents; instead, document within the code itself.
- Always update design documents before making changes to the implementation code.

When using LLMs, guardrail-based development makes it easier to enforce a "write the documentation first" workflow. Following the flow of defining specifications, updating the documents, and then writing code also helps the LLM generate appropriate code more reliably.
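As an illustration of this doc-first structure, a design document kept alongside the code might be sketched as follows (the module names and the Mermaid snippet are invented for illustration; the path follows the docs/[subsystem_name]/[business_name] convention from the custom instructions):

````markdown
# design.md - [subsystem_name]/[business_name]

## Module Structure
- OrderService: application logic (src/[subsystem_name]/[business_name]/Services)
- OrderRepository: data access via the EF context

## Processing Flow
```mermaid
sequenceDiagram
    participant UI
    participant OrderService
    participant OrderRepository
    UI->>OrderService: PlaceOrder(request)
    OrderService->>OrderRepository: Save(order)
    OrderRepository-->>OrderService: orderId
    OrderService-->>UI: confirmation
```
````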
Even if code is written first, LLM-assisted code analysis can significantly reduce the effort required to update the documentation afterward. However, the following points should be noted:

- Create and manage design documents as text files, not Word, Excel, or PowerPoint.
- Use text-based technologies like Mermaid for diagrams.
- Clearly define how design documents correspond to the code.

The last point is especially important: align the structure of requirements and design documents with the structure of the implementation code. For example:

- Place design documents directly alongside the implementation code.
- Align folder structures, e.g., /docs and /src.

Information about grouping methods and folder mapping should be explicitly included in the custom instructions.

Conclusion of Disciplined Guardrail-Based Development with GHC

Formalizing and Applying Guardrails
- Define the development flow and architecture documents in .github/copilot-instructions.md using split references.
- Prepare .github/chatmodes/* for each development phase, enforcing "ask the AI if anything is unclear."

Synchronization of Documents and Implementation Code
- Update docs first → use the diff as the basis for implementation (doc-first).
- Keep docs in text format (Markdown/Mermaid).
- Fix the folder correspondence between /docs and /src.

Handling Schemas
- Store DDL/O-R mapper files (e.g., EF) in the repository and have the LLM reference them.
- Minimize dynamic DB connections, prioritizing speed, reproducibility, and security.

This disciplined guardrail-based development technique is an AI-assisted approach that significantly improves the quality, maintainability, and team efficiency of enterprise business application development. Adapt it appropriately to each project to maximize productivity in application development.
Trying to add new Type to Dotnet Runtime from github

Hi, I'm trying to modify the .NET runtime from github.com/dotnet/runtime. I created a copy of the Dictionary class called Dictionary2 in these source code files:

- C:\rt5\src\libraries\System.Collections\ref\System.Collections.cs
- C:\rt5\src\libraries\System.Private.CoreLib\src\System\Collections\Generic\Dictionary.cs

When I use dotPeek from JetBrains, Dictionary2 exists in my System.Collections.dll, but when I import System.Collections.dll in my main console C# application, Dictionary2 gives me compiler errors. Am I missing some code somewhere? Can you guys help me out here? Thanks!
Generating Classes with Custom Naming Conventions Using GitHub Copilot and a Custom MCP Server

GitHub Spark and GitHub Copilot are powerful development tools that can significantly boost productivity even when used out of the box. However, in enterprise settings, a common request is for development support that aligns with specific compliance requirements or regulations. While GitHub Copilot allows you to choose models like GPT-4o or others, it does not currently support the use of custom fine-tuned models. Additionally, many users might find it unclear how to integrate Copilot with external services, which can be a source of frustration.

To address such needs, one possible approach is to build a custom MCP server and connect it to GitHub Copilot. For a basic "Hello World" style guide on how to set this up, please refer to the articles below:

https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/
https://learn.microsoft.com/en-us/dotnet/ai/quickstarts/build-mcp-server

By building an MCP server as an ASP.NET Core application and integrating it with GitHub Copilot, you can introduce custom rules and functionality tailored to your organization's needs. While the MCP server can technically be hosted anywhere as long as HTTP communication is possible, for enterprise use cases it is often recommended to deploy it within a private endpoint inside a closed virtual network. In production environments, this setup can be securely accessed from client machines via ExpressRoute, ensuring both compliance and network isolation.

Building an MCP Server Using ASP.NET Core

Start by creating a new ASP.NET Core Web API project in Visual Studio. Then, install the required libraries via NuGet. Note: Make sure to enable the option to include preview versions—otherwise, some of the necessary packages may not appear in the list.

- ModelContextProtocol
- ModelContextProtocol.AspNetCore

Next, update the Program.cs file as shown below to enable the MCP server functionality. We'll create the NamingConventionManagerTool class later; as you can see, it is registered via dependency injection during application startup, which integrates it as part of the MCP server's capabilities.

```csharp
using MCPServerLab01.Tools;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(serverOptions =>
{
    serverOptions.ListenAnyIP(8888); // change the port to match your environment
});

builder.Logging.AddConsole(consoleLogOptions =>
{
    consoleLogOptions.LogToStandardErrorThreshold = LogLevel.Trace;
});

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTools<NamingConventionManagerTool>();

var app = builder.Build();

app.MapMcp();

app.Run();
```

Next, create the NamingConventionManagerTool.cs file. By decorating the class with the [McpServerToolType] and [McpServerTool] attributes, you can expose it as a feature accessible via the MCP server. In this example, we'll add a tool that assigns class names based on business categories—a common pattern in system integration projects. The class includes the following methods:

- GetNamingRules: Provides an overview of the naming conventions to follow.
- GenerateClassNamingConvention: Generates a set of class names for a given business category.
- DetermineBusinessCategory: Derives the business category from a given class name.

As noted in the prompt, we'll assume this is part of a fictional project called Normalian Project.
```csharp
using ModelContextProtocol.Server;
using System.ComponentModel;
using System.Text.Json.Serialization;

namespace MCPServerLab01.Tools;

[McpServerToolType]
[Description()]
public class NamingConventionManagerTool
{
    // Counter for sequential management.
    // This is just a trial, so here is a static variable. For production, consider saving to a DB or other persistent storage.
    static int _counter = 0;

    [McpServerTool, Description("""
        Provides Normalian Project rules that must be followed when adding or modifying programs.
        Be sure to refer to these rules as they are mandatory.
        """)]
    public string GetNamingRules()
    {
        return """
            In this Normalian project, to facilitate management, class names must follow naming conventions based on business categories.
            Class names according to business categories are provided by the `GenerateClassNamingConvention` tool.
            Please define classes using the names provided here. Do not define classes with any other names.
            If you are unsure about the business category from the class name, use the `DetermineBusinessCategory` tool to obtain the business category.
            """;
    }

    [McpServerTool, Description("""
        Retrieves a set of classes and namespaces that should be created for the specified business category in Normalian project.
        You must create classes using the names suggested here.
        """)]
    public ClassNamingConvention GenerateClassNamingConvention(
        [Description("Business category for the class to be created")] BusinessCategory businessCategory)
    {
        var number = _counter++;
        var prefix = businessCategory switch
        {
            BusinessCategory.NormalianOrder => "A",
            BusinessCategory.NormalianProduct => "B",
            BusinessCategory.NormalianCustomer => "C",
            BusinessCategory.NormalianSupplier => "D",
            BusinessCategory.NormalianEmployee => "E",
            _ => throw new ArgumentException("Unknown category."),
        };
        var name = $"{prefix}{number:D4}";

        return new ClassNamingConvention(
            ServiceNamespace: "{YourRootNamespace}.Services",
            ServiceClassName: $"{name}Service",
            UsecaseNamespace: "{YourRootNamespace}.Usecases",
            UsecaseClassName: $"{name}Usecase",
            DtoNamespace: "{YourRootNamespace}.Dtos",
            DtoClassName: $"{name}Dto");
    }

    [McpServerTool, Description("If you do not know the business category in Normalian project from the class name, check the naming convention to obtain the business category to which the class belongs.")]
    public BusinessCategory DetermineBusinessCategory(
        [Description("Class name")] string className)
    {
        ArgumentException.ThrowIfNullOrEmpty(className);

        var prefix = className[0];
        return prefix switch
        {
            'A' => BusinessCategory.NormalianOrder,
            'B' => BusinessCategory.NormalianProduct,
            'C' => BusinessCategory.NormalianCustomer,
            'D' => BusinessCategory.NormalianSupplier,
            'E' => BusinessCategory.NormalianEmployee,
            _ => throw new ArgumentException("Unknown class name."),
        };
    }
}

[Description("Class name to use in Normalian project")]
public record ClassNamingConvention(
    [Description("Service namespace")] string ServiceNamespace,
    [Description("Class name to use for the service layer")] string ServiceClassName,
    [Description("Usecase namespace")] string UsecaseNamespace,
    [Description("Class name to use for the usecase layer")] string UsecaseClassName,
    [Description("DTO namespace")] string DtoNamespace,
    [Description("Class name to use for DTOs")] string DtoClassName);

[JsonConverter(typeof(JsonStringEnumConverter))]
public enum BusinessCategory
{
    NormalianOrder,
    NormalianProduct,
    NormalianCustomer,
    NormalianSupplier,
    NormalianEmployee,
}
```
Next, run the project in Visual Studio to launch the MCP server as an ASP.NET Core application. Once the application starts, take note of the HTTP endpoint displayed in the console or output window—this will be used to interact with the MCP server.

Connecting the GitHub Copilot Agent to the MCP Server

Next, connect the GitHub Copilot Agent to the MCP server; this only requires specifying the MCP server's endpoint. To add the server, select Agent mode and click the wrench icon. From Add MCP Server, select HTTP and specify the http://localhost:<<PORT>>/sse endpoint. Give it an appropriate name, then choose where to save the MCP server settings—either User Settings or Workspace Settings. If you will use it just for yourself, User Settings is fine; Workspace Settings is useful when you want to share the configuration with your team. Since this is just a trial and we only want to use it within this workspace, we chose Workspace Settings. This creates a .vscode/mcp.json file with the following content:

```json
{
  "servers": {
    "mine-mcp-server": {
      "type": "http",
      "url": "http://localhost:8888/"
    }
  },
  "inputs": []
}
```

You'll see a Start icon on top of the JSON file—click it to launch the MCP server connection. You can also start it from the GitHub Copilot Chat window. After launching, if you click the wrench icon in the Chat window, you'll see a list of tools available on the connected MCP server.

Using the Created Features via the GitHub Copilot Agent

Now, let's try it out. Open the folder of your .NET console app project in VS Code and make a request to the Agent. It does a good job of checking the rules first. Next, it works out the class name and other details needed for implementation. Then, following the naming conventions, it looks up the appropriate namespace and even creates folders. Once the folder is created, the class is generated with the specified class name. Even when you ask about the business category, it uses the tool correctly to look it up. Impressive!

Conclusion

In this article, we introduced how to build your own MCP server and use it via the GitHub Copilot Agent. By implementing an MCP server and adding tools tailored to your business needs, you can gain a certain level of control over the Agent's behavior and build your own ecosystem. For this trial, we used a local machine to test the behavior, but for production use, you'll need to consider deploying the MCP server to Azure, adding authentication features, and other enhancements.
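As a closing illustration of that last point, one lightweight way to gate the endpoint would be a shared-key check in ASP.NET Core middleware ahead of the MCP endpoint (the header name and configuration key are illustrative assumptions; a production setup would more likely use Entra ID or another token-based scheme):

```csharp
// In Program.cs, registered before app.MapMcp():
const string apiKeyHeader = "X-API-Key"; // illustrative header name

app.Use(async (context, next) =>
{
    // Compare the presented key against configuration
    // (e.g., an "Mcp:ApiKey" value sourced from appsettings or Key Vault).
    var expectedKey = app.Configuration["Mcp:ApiKey"];
    var presentedKey = context.Request.Headers[apiKeyHeader].ToString();

    if (string.IsNullOrEmpty(expectedKey) || presentedKey != expectedKey)
    {
        context.Response.StatusCode = StatusCodes.Status401Unauthorized;
        await context.Response.WriteAsync("Missing or invalid API key.");
        return; // short-circuit: the request never reaches the MCP endpoint
    }

    await next(context);
});
```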
Develop Custom Engine Agent to Microsoft 365 Copilot Chat with pro-code

There are some great articles that explain how to integrate an MCP server built on Azure with a declarative agent created using Microsoft Copilot Studio. These approaches aim to extend the agent's capabilities by supplying it with tools, rather than defining a fixed role. Here are some of the challenges we encountered:

- The agent's behavior can only be tested through the Copilot Studio web interface, which isn't ideal for iterative development.
- You don't have control over which LLM is used as the orchestrator—for example, there's no way to specify GPT-4o.
- The agent's responses don't always behave the same as they would if you were prompting the LLM directly.

These limitations got me thinking: why not build the entire agent myself? At the same time, I still wanted to take advantage of the familiar Microsoft 365 Copilot interface on the frontend. As I explored further, I discovered that the Microsoft 365 Copilot SDK makes it possible to bring in your own custom-built agent.
How to Secure your pro-code Custom Engine Agent of Microsoft 365 Copilot?

Prerequisite

This article assumes that you've already gone through the following post. Please make sure to read it before proceeding: Developing a Custom Engine Agent for Microsoft 365 Copilot Chat Using Pro-Code

That article showed how to publish a Custom Engine Agent using pro-code approaches such as C#. In this post, I'd like to shift the focus to security, specifically how to protect the endpoints of our custom Microsoft 365 Copilot agent. Through several architectural explorations, we found an approach that seems to work well. However, I strongly encourage you to review and evaluate it carefully for your production environment.

Which Endpoints Can Be Controlled?

In the current architecture, there are three key endpoints to consider from a security perspective:

- Teams Endpoint: The entry point where users interact with the Custom Engine Agent through Microsoft Teams.
- Azure Bot Service Endpoint: The publicly accessible endpoint provided by Azure Bot Service that relays messages between Teams and your bot backend.
- ASP.NET Core Endpoint: In the previous article, we used a local devtunnel for development purposes. In a production environment, however, this would likely be hosted on Azure App Service or a similar service.

Each of these endpoints may require a different protection strategy, which we'll explore in the following sections.

1. Controlling the Teams Endpoint

When it comes to the Teams endpoint, control ultimately comes down to Teams app management within your Microsoft 365 tenant. The manifest file for your custom Teams app (i.e., the Custom Engine Agent) is uploaded to your tenant, and access is governed via the Teams Admin Center. This isn't about controlling the endpoint itself, but rather about limiting who can access the app. You can restrict access on a per-user or per-group basis, effectively preventing malicious users inside your organization from using the app. However, you cannot restrict access at the endpoint level, nor can you prevent a malicious external organization from copying the app package. This limitation may pose a concern, especially when thinking about endpoint-level security outside your tenant's control.

2. Controlling the Azure Bot Service Endpoint

The Azure Bot Service endpoint acts as a bridge between the Teams channel and your pro-code backend. Here, the only available security configuration is to specify the service principal that the agent uses. There isn't much room for granular control: it's essentially a relay point managed by Azure Bot Service, and protection depends largely on how you secure the endpoints it connects to.

3. Controlling the ASP.NET Core Endpoint

This is where endpoint protection becomes critical. When you configure your bot in Azure Bot Service, you must expose your pro-code endpoint to the public internet. In the earlier article we used a local devtunnel for development, but in production you'll likely use Azure App Service or a similar service, which results in a publicly accessible endpoint. While Microsoft provides documentation on network isolation options for Azure Bot Service, these are currently only supported when using the Direct Line channel - not the Teams channel. This means that when using Teams as the entry point, you cannot isolate the backend endpoint via a private network, making it critical to implement other security measures at the application level (e.g., token validation, IP restrictions, mutual TLS).
https://learn.microsoft.com/en-us/azure/bot-service/dl-network-isolation-concept?view=azure-bot-service-4.0

Let's Review Other Articles on this Topic

There are several valuable resources on this topic. Since Microsoft Teams is a SaaS application, the bot endpoint (e.g., https://my-webapp-endpoint.net/api/messages) must be publicly accessible when integrated through the Teams channel:

- Is it possible to integrate Azure Bot with Teams without public access?
- How to create Azure Bot Service in a private network?

In particular, this article provides an excellent deep dive into the traffic flow between Teams and Azure Bot Service: Azure Bot Service, Microsoft Teams architecture, and message flow

In the section titled "Challenge 2: Network isolation vs. Teams connectivity," the article clearly explains why network-level isolation is fundamentally incompatible with the Teams channel. The article also outlines a practical security approach using Azure Firewall, NSGs (Network Security Groups), and JWT token validation at the application level. If you're using the Teams channel, complete network isolation is not feasible—which makes sense, given that Teams itself is a SaaS platform and cannot be brought into your private network. As a result, protecting the backend bot (e.g., the ASP.NET Core endpoint) requires application-level controls, particularly JWT token validation to ensure that only trusted sources can invoke the bot. Let's now take a closer look at how to implement that in C#.

Controlling Endpoints in the ASP.NET Core Application

So, what does endpoint control look like at the application level? Let's return to the ASP.NET Core side of things and take a closer look at the default project structure. If you recall, the Program.cs in the template project contains a specific line worth revisiting. This configuration plays an important role in how the application handles and secures incoming requests.

```csharp
// Register the WeatherForecastAgent
builder.Services.AddTransient<WeatherForecastAgent>();

// Add AspNet token validation - ** HERE **
builder.Services.AddBotAspNetAuthentication(builder.Configuration);

// Register IStorage. For development, MemoryStorage is suitable.
// For production Agents, persisted storage should be used so
// that state survives Agent restarts, and operate correctly
// in a cluster of Agent instances.
builder.Services.AddSingleton<IStorage, MemoryStorage>();
```

As it turns out, the AddBotAspNetAuthentication method referenced in Program.cs is defined in the same project, within a file named AspNetExtensions.cs. This method is where access token validation is implemented and enforced. Let's take a closer look at a key portion of the AddBotAspNetAuthentication method from AspNetExtensions.cs:

```csharp
public static void AddBotAspNetAuthentication(this IServiceCollection services, IConfiguration configuration, string tokenValidationSectionName = "TokenValidation", ILogger logger = null)
{
    IConfigurationSection tokenValidationSection = configuration.GetSection(tokenValidationSectionName);
    List<string> validTokenIssuers = tokenValidationSection.GetSection("ValidIssuers").Get<List<string>>();
    List<string> audiences = tokenValidationSection.GetSection("Audiences").Get<List<string>>();

    if (!tokenValidationSection.Exists())
    {
        logger?.LogError("Missing configuration section '{tokenValidationSectionName}'. This section is required to be present in appsettings.json", tokenValidationSectionName);
        throw new InvalidOperationException($"Missing configuration section '{tokenValidationSectionName}'. This section is required to be present in appsettings.json");
    }

    // If ValidIssuers is empty, default for ABS Public Cloud
    if (validTokenIssuers == null || validTokenIssuers.Count == 0)
    {
        validTokenIssuers =
        [
            "https://api.botframework.com",
            "https://sts.windows.net/d6d49420-f39b-4df7-a1dc-d59a935871db/",
            "https://login.microsoftonline.com/d6d49420-f39b-4df7-a1dc-d59a935871db/v2.0",
            "https://sts.windows.net/f8cdef31-a31e-4b4a-93e4-5f571e91255a/",
            "https://login.microsoftonline.com/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0",
            "https://sts.windows.net/69e9b82d-4842-4902-8d1e-abc5b98a55e8/",
            "https://login.microsoftonline.com/69e9b82d-4842-4902-8d1e-abc5b98a55e8/v2.0",
        ];

        string tenantId = tokenValidationSection["TenantId"];
        if (!string.IsNullOrEmpty(tenantId))
        {
            validTokenIssuers.Add(string.Format(CultureInfo.InvariantCulture, AuthenticationConstants.ValidTokenIssuerUrlTemplateV1, tenantId));
            validTokenIssuers.Add(string.Format(CultureInfo.InvariantCulture, AuthenticationConstants.ValidTokenIssuerUrlTemplateV2, tenantId));
        }
    }

    if (audiences == null || audiences.Count == 0)
    {
        throw new ArgumentException($"{tokenValidationSectionName}:Audiences requires at least one value");
    }

    bool isGov = tokenValidationSection.GetValue("IsGov", false);
    bool azureBotServiceTokenHandling = tokenValidationSection.GetValue("AzureBotServiceTokenHandling", true);

    // If the `AzureBotServiceOpenIdMetadataUrl` setting is not specified, use the default based on `IsGov`. This is what is used to authenticate ABS tokens.
    string azureBotServiceOpenIdMetadataUrl = tokenValidationSection["AzureBotServiceOpenIdMetadataUrl"];
    if (string.IsNullOrEmpty(azureBotServiceOpenIdMetadataUrl))
    {
        azureBotServiceOpenIdMetadataUrl = isGov ? AuthenticationConstants.GovAzureBotServiceOpenIdMetadataUrl : AuthenticationConstants.PublicAzureBotServiceOpenIdMetadataUrl;
    }

    // If the `OpenIdMetadataUrl` setting is not specified, use the default based on `IsGov`. This is what is used to authenticate Entra ID tokens.
    string openIdMetadataUrl = tokenValidationSection["OpenIdMetadataUrl"];
    if (string.IsNullOrEmpty(openIdMetadataUrl))
    {
        openIdMetadataUrl = isGov ? AuthenticationConstants.GovOpenIdMetadataUrl : AuthenticationConstants.PublicOpenIdMetadataUrl;
    }

    TimeSpan openIdRefreshInterval = tokenValidationSection.GetValue("OpenIdMetadataRefresh", BaseConfigurationManager.DefaultAutomaticRefreshInterval);

    _ = services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(options =>
    {
        options.SaveToken = true;
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ClockSkew = TimeSpan.FromMinutes(5),
            ValidIssuers = validTokenIssuers,
            ValidAudiences = audiences,
            ValidateIssuerSigningKey = true,
            RequireSignedTokens = true,
        };

        // Using Microsoft.IdentityModel.Validators
        options.TokenValidationParameters.EnableAadSigningKeyIssuerValidation();

        options.Events = new JwtBearerEvents
        {
            // Create a ConfigurationManager based on the requestor. This is to handle ABS non-Entra tokens.
            OnMessageReceived = async context =>
            {
                string authorizationHeader = context.Request.Headers.Authorization.ToString();

                if (string.IsNullOrEmpty(authorizationHeader))
                {
                    // Default to AadTokenValidation handling
                    context.Options.TokenValidationParameters.ConfigurationManager ??= options.ConfigurationManager as BaseConfigurationManager;
                    await Task.CompletedTask.ConfigureAwait(false);
                    return;
                }

                string[] parts = authorizationHeader?.Split(' ');
                if (parts.Length != 2 || parts[0] != "Bearer")
                {
                    // Default to AadTokenValidation handling
                    context.Options.TokenValidationParameters.ConfigurationManager ??= options.ConfigurationManager as BaseConfigurationManager;
                    await Task.CompletedTask.ConfigureAwait(false);
                    return;
                }

                JwtSecurityToken token = new(parts[1]);
                string issuer = token.Claims.FirstOrDefault(claim => claim.Type == AuthenticationConstants.IssuerClaim)?.Value;

                if (azureBotServiceTokenHandling && AuthenticationConstants.BotFrameworkTokenIssuer.Equals(issuer))
                {
                    // Use the Bot Framework authority for this configuration manager
                    context.Options.TokenValidationParameters.ConfigurationManager = _openIdMetadataCache.GetOrAdd(azureBotServiceOpenIdMetadataUrl, key =>
                    {
                        return new ConfigurationManager<OpenIdConnectConfiguration>(azureBotServiceOpenIdMetadataUrl, new OpenIdConnectConfigurationRetriever(), new HttpClient())
                        {
                            AutomaticRefreshInterval = openIdRefreshInterval
                        };
                    });
                }
                else
                {
                    context.Options.TokenValidationParameters.ConfigurationManager = _openIdMetadataCache.GetOrAdd(openIdMetadataUrl, key =>
                    {
                        return new ConfigurationManager<OpenIdConnectConfiguration>(openIdMetadataUrl, new OpenIdConnectConfigurationRetriever(), new HttpClient())
                        {
                            AutomaticRefreshInterval = openIdRefreshInterval
                        };
                    });
                }

                await Task.CompletedTask.ConfigureAwait(false);
            },
            OnTokenValidated = context =>
            {
                logger?.LogDebug("TOKEN Validated");
                return Task.CompletedTask;
            },
            OnForbidden = context =>
            {
                logger?.LogWarning("Forbidden: {m}", context.Result.ToString());
                return Task.CompletedTask;
            },
            OnAuthenticationFailed = context =>
            {
                logger?.LogWarning("Auth Failed {m}", context.Exception.ToString());
                return Task.CompletedTask;
            }
        };
    });
}
```

From examining the code, we can see that it reads configuration settings from the appsettings.{your-env}.json file and uses them during token validation. In particular, the following line stands out:

```csharp
ValidAudiences = audiences,
```

This ensures that only tokens issued for the configured audience (i.e., your Azure Bot Service's service principal) will be accepted. Any requests carrying tokens with mismatched audiences will be rejected during validation.

One critical observation is that if no access token is provided at all, the code effectively lets the request through without enforcing validation. This means that if the service principal is misconfigured or lacks proper permissions, and therefore no token is issued with the request, the bot may still continue processing the request without rejecting it. This could potentially create a security loophole, especially if the backend API is publicly accessible. The passage in question is repeated below:
```csharp
OnMessageReceived = async context =>
{
    string authorizationHeader = context.Request.Headers.Authorization.ToString();

    if (string.IsNullOrEmpty(authorizationHeader))
    {
        // Default to AadTokenValidation handling
        context.Options.TokenValidationParameters.ConfigurationManager ??= options.ConfigurationManager as BaseConfigurationManager;
        await Task.CompletedTask.ConfigureAwait(false);
        return;
    }
```

Additional Security Concerns and Improvements

Another point worth noting about the current code is that the Custom Engine Agent app can be copied and uploaded to a different Entra ID tenant, and it will still work. (Admittedly, this might be intentional, since the architecture assumes providing Custom Engine Agent services to multiple organizations.) The project template and Teams settings therefore raise two key security concerns that we should address:

1. Reject requests when the token is missing—the token should not be empty.
2. Block access from unknown or unauthorized Entra ID tenants.

To enforce the above, you will need to update the service principal configuration accordingly. Specifically, open the service principal's API permissions tab and add the following permission: User.Read.All. Without this permission, access tokens will not be issued, making token validation impossible.

After updating the service principal permissions, run your ASP.NET Core app and set a breakpoint around the following code to inspect the contents of the token included in the Authorization header. This will help you verify whether the token is correctly issued and contains the expected claims.

```csharp
string authorizationHeader = context.Request.Headers.Authorization.ToString();
```

The token is Base64 encoded, so let's decode it to inspect its contents (I asked Copilot to help decode the token so we can better understand the claims inside). After decoding the token (some parts are redacted for privacy), we can see that:

- The aud (audience) claim contains the service principal's client ID.
- The serviceurl claim includes the Entra ID tenant ID.

I attempted to configure the authorization settings to include the tenant ID directly in the access token claims, but was not successful this time. Below is a sample code snippet that implements the following requirements:

1. Reject requests with an empty or missing token.
2. Deny access from unknown Entra ID tenants.

This is the sample code for "1. Reject requests with an empty or missing token." Comments in the code indicate what was changed.

```csharp
public static void AddBotAspNetAuthentication(this IServiceCollection services, IConfiguration configuration, string tokenValidationSectionName = "TokenValidation", ILogger logger = null)
{
    IConfigurationSection tokenValidationSection = configuration.GetSection(tokenValidationSectionName);
    List<string> validTokenIssuers = tokenValidationSection.GetSection("ValidIssuers").Get<List<string>>();
    List<string> audiences = tokenValidationSection.GetSection("Audiences").Get<List<string>>();

    if (!tokenValidationSection.Exists())
    {
        logger?.LogError("Missing configuration section '{tokenValidationSectionName}'. This section is required to be present in appsettings.json", tokenValidationSectionName);
        throw new InvalidOperationException($"Missing configuration section '{tokenValidationSectionName}'. This section is required to be present in appsettings.json");
    }

    // If ValidIssuers is empty, default for ABS Public Cloud
    if (validTokenIssuers == null || validTokenIssuers.Count == 0)
    {
        validTokenIssuers =
        [
            "https://api.botframework.com",
            "https://sts.windows.net/d6d49420-f39b-4df7-a1dc-d59a935871db/",
            "https://login.microsoftonline.com/d6d49420-f39b-4df7-a1dc-d59a935871db/v2.0",
            "https://sts.windows.net/f8cdef31-a31e-4b4a-93e4-5f571e91255a/",
            "https://login.microsoftonline.com/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0",
            "https://sts.windows.net/69e9b82d-4842-4902-8d1e-abc5b98a55e8/",
            "https://login.microsoftonline.com/69e9b82d-4842-4902-8d1e-abc5b98a55e8/v2.0",
        ];

        string tenantId = tokenValidationSection["TenantId"];
        if (!string.IsNullOrEmpty(tenantId))
        {
            validTokenIssuers.Add(string.Format(CultureInfo.InvariantCulture, AuthenticationConstants.ValidTokenIssuerUrlTemplateV1, tenantId));
            validTokenIssuers.Add(string.Format(CultureInfo.InvariantCulture, AuthenticationConstants.ValidTokenIssuerUrlTemplateV2, tenantId));
        }
    }

    if (audiences == null || audiences.Count == 0)
    {
        throw new ArgumentException($"{tokenValidationSectionName}:Audiences requires at least one value");
    }

    bool isGov = tokenValidationSection.GetValue("IsGov", false);
    bool azureBotServiceTokenHandling = tokenValidationSection.GetValue("AzureBotServiceTokenHandling", true);

    // If the `AzureBotServiceOpenIdMetadataUrl` setting is not specified, use the default based on `IsGov`. This is what is used to authenticate ABS tokens.
    string azureBotServiceOpenIdMetadataUrl = tokenValidationSection["AzureBotServiceOpenIdMetadataUrl"];
    if (string.IsNullOrEmpty(azureBotServiceOpenIdMetadataUrl))
    {
        azureBotServiceOpenIdMetadataUrl = isGov ? AuthenticationConstants.GovAzureBotServiceOpenIdMetadataUrl : AuthenticationConstants.PublicAzureBotServiceOpenIdMetadataUrl;
    }

    // If the `OpenIdMetadataUrl` setting is not specified, use the default based on `IsGov`. This is what is used to authenticate Entra ID tokens.
    string openIdMetadataUrl = tokenValidationSection["OpenIdMetadataUrl"];
    if (string.IsNullOrEmpty(openIdMetadataUrl))
    {
        openIdMetadataUrl = isGov ? AuthenticationConstants.GovOpenIdMetadataUrl : AuthenticationConstants.PublicOpenIdMetadataUrl;
    }

    TimeSpan openIdRefreshInterval = tokenValidationSection.GetValue("OpenIdMetadataRefresh", BaseConfigurationManager.DefaultAutomaticRefreshInterval);

    _ = services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(options =>
    {
        options.SaveToken = true;
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true, // this option enables validating the audience claim against the configured audiences
            ValidateLifetime = true,
            ClockSkew = TimeSpan.FromMinutes(5),
            ValidIssuers = validTokenIssuers,
            ValidAudiences = audiences,
            ValidateIssuerSigningKey = true,
            RequireSignedTokens = true,
        };

        // Using Microsoft.IdentityModel.Validators
        options.TokenValidationParameters.EnableAadSigningKeyIssuerValidation();

        options.Events = new JwtBearerEvents
        {
            // Create a ConfigurationManager based on the requestor. This is to handle ABS non-Entra tokens.
            OnMessageReceived = async context =>
            {
                string authorizationHeader = context.Request.Headers.Authorization.ToString();

                if (string.IsNullOrEmpty(authorizationHeader))
                {
                    // Default to AadTokenValidation handling
                    // context.Options.TokenValidationParameters.ConfigurationManager ??= options.ConfigurationManager as BaseConfigurationManager;
                    // await Task.CompletedTask.ConfigureAwait(false);
                    // return;

                    // Fail the request when the token is empty
                    context.Fail("Authorization header is missing.");
                    logger?.LogWarning("Authorization header is missing.");
                    return;
                }

                string[] parts = authorizationHeader?.Split(' ');
                if (parts.Length != 2 || parts[0] != "Bearer")
                {
                    // Default to AadTokenValidation handling
                    context.Options.TokenValidationParameters.ConfigurationManager ??= options.ConfigurationManager as BaseConfigurationManager;
                    await Task.CompletedTask.ConfigureAwait(false);
                    return;
                }

                // ... the remainder of the handler is unchanged from the original implementation shown above.
```

Next, let's implement "2. Deny access from unknown Entra ID tenants."

We can retrieve the tenant ID inside the MessageActivityAsync method of Bot/WeatherAgentBot.cs. Let's extend the logic by referring to the following sample code:

https://github.com/OfficeDev/microsoft-teams-apps-company-communicator/blob/dcf3b169084d3fff7c1e4c5b68718fb33c3391dd/Source/CompanyCommunicator/Bot/CompanyCommunicatorBotFilterMiddleware.cs#L44

Here is how you can extend the logic to retrieve and use the tenant ID within the MessageActivityAsync method:

```csharp
using MyM365Agent1.Bot.Agents;
using Microsoft.Agents.Builder;
using Microsoft.Agents.Builder.App;
using Microsoft.Agents.Builder.State;
using Microsoft.Agents.Core.Models;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.Extensions.DependencyInjection.Extensions;

namespace MyM365Agent1.Bot;

public class WeatherAgentBot : AgentApplication
{
    private WeatherForecastAgent _weatherAgent;
    private Kernel _kernel;
    private readonly string _tenantId;
    private readonly ILogger<WeatherAgentBot> _logger;

    public WeatherAgentBot(AgentApplicationOptions options, Kernel kernel, IConfiguration configuration, ILogger<WeatherAgentBot> logger) : base(options)
    {
        _kernel = kernel ?? throw new ArgumentNullException(nameof(kernel));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
        OnConversationUpdate(ConversationUpdateEvents.MembersAdded, WelcomeMessageAsync);
        OnActivity(ActivityTypes.Message, MessageActivityAsync, rank: RouteRank.Last);

        // Get TenantId from TokenValidation section
        var tokenValidationSection = configuration.GetSection("TokenValidation");
        _tenantId = tokenValidationSection["TenantId"];
    }

    protected async Task MessageActivityAsync(ITurnContext turnContext, ITurnState turnState, CancellationToken cancellationToken)
    {
        // add validation of tenant ID
        var activity = turnContext.Activity;

        // Log: Received activity
        _logger.LogInformation("Received message activity from: {FromId}, AadObjectId: {AadObjectId}, TenantId: {TenantId}, ChannelId: {ChannelId}, ConversationType: {ConversationType}",
            activity?.From?.Id, activity?.From?.AadObjectId, activity?.Conversation?.TenantId, activity?.ChannelId, activity?.Conversation?.ConversationType);

        if (activity.ChannelId != "msteams" // Ignore messages not from Teams
            || activity.Conversation?.ConversationType?.ToLowerInvariant() != "personal" // Ignore messages from team channels or group chats
            || string.IsNullOrEmpty(activity.From?.AadObjectId) // Ignore if not an AAD user (e.g., bots, guest users)
            || (!string.IsNullOrEmpty(_tenantId) && !string.Equals(activity.Conversation?.TenantId, _tenantId, StringComparison.OrdinalIgnoreCase))) // Ignore if tenant ID does not match
        {
            _logger.LogWarning("Unauthorized serviceUrl detected: {ServiceUrl}. Expected to contain TenantId: {TenantId}", activity?.ServiceUrl, _tenantId);
            await turnContext.SendActivityAsync("Unauthorized service URL.", cancellationToken: cancellationToken);
            return;
        }

        // Setup local service connection
        ServiceCollection serviceCollection =
        [
            new ServiceDescriptor(typeof(ITurnState), turnState),
            new ServiceDescriptor(typeof(ITurnContext), turnContext),
            new ServiceDescriptor(typeof(Kernel), _kernel),
        ];

        // Start a Streaming Process
        await turnContext.StreamingResponse.QueueInformativeUpdateAsync("Working on a response for you");

        ChatHistory chatHistory = turnState.GetValue("conversation.chatHistory", () => new ChatHistory());
        _weatherAgent = new WeatherForecastAgent(_kernel, serviceCollection.BuildServiceProvider());

        // Invoke the WeatherForecastAgent to process the message
        WeatherForecastAgentResponse forecastResponse = await _weatherAgent.InvokeAgentAsync(turnContext.Activity.Text, chatHistory);
        if (forecastResponse == null)
        {
            turnContext.StreamingResponse.QueueTextChunk("Sorry, I couldn't get the weather forecast at the moment.");
            await turnContext.StreamingResponse.EndStreamAsync(cancellationToken);
            return;
        }

        // Create a response message based on the response content type from the WeatherForecastAgent
        // and send the response message back to the user.
        switch (forecastResponse.ContentType)
        {
            case WeatherForecastAgentResponseContentType.Text:
                turnContext.StreamingResponse.QueueTextChunk(forecastResponse.Content);
                break;
            case WeatherForecastAgentResponseContentType.AdaptiveCard:
                turnContext.StreamingResponse.FinalMessage = MessageFactory.Attachment(new Attachment()
                {
                    ContentType = "application/vnd.microsoft.card.adaptive",
                    Content = forecastResponse.Content,
                });
                break;
            default:
                break;
        }

        // End the streaming response
        await turnContext.StreamingResponse.EndStreamAsync(cancellationToken);
    }

    protected async Task WelcomeMessageAsync(ITurnContext turnContext, ITurnState turnState, CancellationToken cancellationToken)
    {
        foreach (ChannelAccount member in turnContext.Activity.MembersAdded)
        {
            if (member.Id != turnContext.Activity.Recipient.Id)
            {
                await turnContext.SendActivityAsync(MessageFactory.Text("Hello and Welcome! I'm here to help with all your weather forecast needs!"), cancellationToken);
            }
        }
    }
}
```

Since we're at it, I've added various validations as well. I hope this will be helpful as a reference for everyone.
Do I need to upgrade Microsoft.AspNetCore.* NuGet packages after upgrading the .NET Runtime?

Hi, I'm encountering an issue with our SCA (Software Composition Analysis) scan, which reports several known vulnerabilities in .NET Core components. Specifically, the scan detects that the following packages are still on version 8.0.0, which is flagged as vulnerable:

- Microsoft.AspNetCore.Authorization
- Microsoft.AspNetCore.Components
- Microsoft.AspNetCore.Http.Connections.Client
- Microsoft.AspNetCore.SignalR.Client

The scanner recommends upgrading these packages to version 8.0.15 to resolve the issues. To address this, I upgraded the .NET Runtime in our environment to version 8.0.15. However, the SCA scan still reports the same vulnerabilities, indicating that the vulnerable component versions have not changed. My question is: do I also need to manually upgrade the corresponding NuGet package versions in the project to 8.0.15, or is upgrading the .NET Runtime alone sufficient to ensure these components are updated as well? Any clarification would be appreciated. Thank you!
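For reference, the package-level change in question would be the explicit PackageReference versions in the project file, along these lines (the versions mirror the scanner's recommendation; whether this step is required is exactly the open question):

```xml
<!-- .csproj excerpt: bumping the explicitly referenced package versions -->
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Authorization" Version="8.0.15" />
  <PackageReference Include="Microsoft.AspNetCore.Components" Version="8.0.15" />
  <PackageReference Include="Microsoft.AspNetCore.Http.Connections.Client" Version="8.0.15" />
  <PackageReference Include="Microsoft.AspNetCore.SignalR.Client" Version="8.0.15" />
</ItemGroup>
```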
[Suggestion 1][Allow Language Selection During .NET Installation]

Hi,

1. Context: Currently, when installing the .NET SDK or runtime, multiple language resource folders are automatically included in the installation directory.

2. Problem: Many developers only require a single language, typically English, and the additional localization files take up unnecessary disk space and clutter the installation directory. Currently, the installer does not provide an option to select which languages should be installed. Removing these manually is time-consuming and could potentially break application dependencies if done incorrectly.

3. Proposed Solution: Introduce an option in the .NET installer that allows users to select or deselect language packs during installation. This feature could be similar to the "Individual components" selection available in the Visual Studio Installer, where users can choose exactly what they need, ensuring a more streamlined installation. This would help:
- Reduce disk space usage.
- Improve installation customization.
- Provide a cleaner development environment.

4. What do you think about this suggestion? Thank you for considering this request, and I'm happy to provide additional details or insights if needed.
Connection between .NET and AUTOCAD ELECTRICAL

I want to connect .NET with AutoCAD Electrical. I am using:

- AutoCAD Electrical 2025.0.2, Product Version: 22.0.81.0, Built on: V.154.0.0
- AutoCAD 2025.1.1
- Visual Studio Community 2022 - 17.12.3

My system is x64. I am creating a console application targeting the .NET 8.0 runtime in C#, with references to the DLLs accoremgd, acdbmgd, acmgd, and Autodesk.AutoCAD.Interop. In Solution Explorer, under project references, I have set the "Copy Local" property to False.

```csharp
using System;
using Autodesk.AutoCAD.Interop;
using Autodesk.AutoCAD.Runtime;
using System.Runtime.InteropServices;
using Autodesk.AutoCAD.ApplicationServices;

namespace AutoCADElecDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            AcadApplication acadApp = null;
            const string progId = "AutoCAD.Application.25"; // Adjust for your AutoCAD version

            try
            {
                // Get a running instance of AutoCAD
                acadApp = GetActiveAutoCAD(progId);
            }
            catch (System.Exception ex)
            {
                Console.WriteLine($"Error initializing AutoCAD: {ex.Message}");
                return;
            }

            try
            {
                // Ensure AutoCAD is visible
                acadApp.Visible = true;
                Console.WriteLine("AutoCAD is now running.");

                // Register for the BeginQuit event
                Application.BeginQuit += OnBeginQuit;

                // Keep the application running to monitor AutoCAD's state
                Console.WriteLine("Press Enter to exit...");
                Console.ReadLine();
            }
            catch (System.Exception ex)
            {
                Console.WriteLine($"An error occurred: {ex.Message}");
            }
            finally
            {
                // Unregister the event handler before exiting
                Application.BeginQuit -= OnBeginQuit;
            }
        }

        static AcadApplication GetActiveAutoCAD(string progId)
        {
            try
            {
                var comObject = Marshal.GetActiveObject(progId);
                Console.WriteLine($"Connected to AutoCAD: {progId}");
                return (AcadApplication)comObject;
            }
            catch (System.Exception ex)
            {
                Console.WriteLine($"Error accessing AutoCAD: {ex.Message}");
                throw;
            }
        }

        static void OnBeginQuit(object sender, EventArgs e)
        {
            Console.WriteLine("AutoCAD is being closed.");
        }
    }
}
```

With the above code, my application enters break mode ("Your app has entered a break state, but no code is currently executing that is supported by the selected debug engine (e.g. only native runtime code is executing).") and I get the following error:

System.IO.FileNotFoundException: 'Could not load file or assembly 'accoremgd, Version=25.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.'

The DLL path is correct. I also tried the same code with the .NET Framework 4.7.2 and 4.8 targeting packs, but I get the same error. How can I load accoremgd.dll properly so that this application can use AutoCAD accoremgd functions?