.NET 10 and Memory: Less Heap, Smarter GC, Faster Apps
As Microsoft steps into the Ignite 2025 era of “AI-first everything” across Azure, Foundry, and cloud-native workloads, .NET quietly got its own big upgrade: .NET 10, a new long-term support (LTS) release and the official successor to .NET 9, supported for three years. The release was celebrated at .NET Conf 2025 in November, where Microsoft shipped .NET 10 alongside Visual Studio 2026 and highlighted performance, memory efficiency and cloud-readiness as core pillars of the platform. A few days later at Microsoft Ignite 2025 in San Francisco, the story zoomed out: AI agents, Azure-hosted workloads, and App Service / Functions all moved forward with first-class .NET 10 support, positioning this runtime as the default foundation for modern cloud and AI solutions.

I’m Hazem Ali, a Microsoft MVP, Principal AI & ML Engineer / Architect, and Founder & CEO of Skytells. In this article, I’ll walk through what .NET 10 actually changes in memory, heap, and stack behavior—and why that matters if you’re building high-throughput APIs, AI agents, or cloud-native services in the post-Ignite world.

At a high level, .NET 10 does three big things for memory:

- Allocates more objects on the stack instead of the heap.
- Teaches the JIT to understand more “hidden” allocations (delegates, spans, small arrays).
- Leans on a smarter GC mode (DATAS) that adapts heap size to your app instead of your machine.

Let’s unpack that in plain language.

1. Heap vs Stack in 60 Seconds

Quick mental model:

Stack
- Used for short-lived data.
- Allocation is extremely fast — often just advancing a pointer.
- Memory is released automatically when the function returns.
- Very cache-friendly due to tight, contiguous layout.

Heap
- Used for long-lived or shared objects.
- Allocation is slower and requires runtime management.
- Objects are tracked by the garbage collector (GC) in managed runtimes.

Creating too many short-lived objects on the heap increases GC pressure, which can lead to pauses and more cache misses. So any time the runtime can say:

“This object definitely dies when this method returns”

…it can put it on the stack instead, and the GC never has to care. .NET 10’s main memory trick is expanding how often it can safely do that.

2. From Heap to Stack: .NET 10 Gets Serious About Escape Analysis

The JIT in .NET 10 spends more effort answering one question: “Does this object escape the current method?” If the answer is “no”, it can be stack-allocated. This is called escape analysis, and .NET 10 pushes it much further than .NET 9.

2.1 Delegates and Lambdas That Don’t Leak

Consider this simple closure:

int SumTwice(int y)
{
    Func<int, int> addY = x => x + y;
    return DoubleResult(addY, y);

    static int DoubleResult(Func<int, int> f, int v) => f(v) * 2;
}

The C# compiler turns that into:

- A hidden closure class with a field y.
- A delegate object pointing at a method on that closure.

In .NET 9, both of these lived on the heap. In .NET 10, if the JIT can inline DoubleResult and prove the delegate never escapes the method, the delegate object is stack-allocated instead. Benchmarks in the official performance blog show:

- ~3× faster for this pattern
- ~70% fewer bytes allocated (only the closure remains on the heap)

You don’t change the code; the JIT just stops paying the “lambda tax” as often.
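If you’d rather verify the “lambda tax” numbers on your own hardware than take the performance blog’s word for it, BenchmarkDotNet’s memory diagnoser is the quickest tool. The harness below is my own illustrative sketch (it is not the official benchmark): the SumTwice shape mirrors the example above, and on .NET 10 the Allocated column should drop sharply compared with .NET 9 when the JIT manages to inline DoubleResult.

using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // reports allocated bytes per operation next to the timings
public class ClosureBenchmarks
{
    [Benchmark]
    public int SumTwice() => SumTwice(21);

    private static int SumTwice(int y)
    {
        // Same shape as above: a closure capturing y plus a delegate over it.
        Func<int, int> addY = x => x + y;
        return DoubleResult(addY, y);

        static int DoubleResult(Func<int, int> f, int v) => f(v) * 2;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ClosureBenchmarks>();
}

Whether the delegate really ends up on the stack depends on the JIT inlining DoubleResult, so results can vary with method size and tiered compilation; treat the diagnoser output, not the theory, as the source of truth.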
2.2 Small Arrays of Value Types on the Stack

.NET 10 adds the ability to stack-allocate small, fixed-size arrays of value types (that don’t hold GC references) when they clearly don’t outlive the method. Example from the official docs:

static void Sum()
{
    int[] numbers = { 1, 2, 3 };
    int sum = 0;
    for (int i = 0; i < numbers.Length; i++)
        sum += numbers[i];
    Console.WriteLine(sum);
}

The runtime article explicitly states that numbers is now stack-allocated in this scenario, because:

- The size is known at compile time (int[3]).
- The array never escapes the method.

Result: no heap allocation for that small buffer, and one less thing for the GC to track.

2.3 Small Arrays of Reference Types

Historically, arrays of reference types (string[], object[], etc.) have always been heap allocations in .NET, and this remains true in .NET 10. The GC must track the references stored in these arrays, which makes stack allocation impossible. However, .NET 10 significantly reduces the cost of using small ref-type arrays by improving escape analysis around the patterns that create and consume them. While the array itself still lives on the heap, many of the associated allocations that previously accompanied these patterns can now be eliminated entirely. Example:

static void Print()
{
    string[] words = { "Hello", "World!" };
    foreach (var s in words)
        Console.WriteLine(s);
}

In .NET 9, using a small string[] like this typically incurred extra hidden allocations (iterator objects, closure artifacts, helper frames). In .NET 10, if the JIT can prove the code is fully local and non-escaping:

- Iterator-related allocations can be removed,
- Delegate and closure helpers may be stack-allocated or optimized away,
- The only remaining heap object is the array itself — with no additional GC noise.

A similar pattern appears in the performance blog’s benchmark:

[Benchmark]
public void Test()
{
    Process(new string[] { "a", "b", "c" });

    static void Process(string[] inputs)
    {
        foreach (string input in inputs)
            Use(input);

        static void Use(string s) { }
    }
}

On .NET 10, this benchmark shows zero additional heap allocations beyond the array itself, because the runtime eliminates the iterator and closure allocations that .NET 9 would create. The array still resides on the heap, but the overall memory footprint effectively drops to zero for the surrounding pattern.

2.4 Structs, Spans, and “Hidden” References

.NET 10’s improved escape analysis can recognize when neither the struct nor its referenced array escapes, enabling the runtime to eliminate unnecessary heap allocations around the pattern. From the runtime docs:

struct GCStruct
{
    public int[] arr;
}

public static int Main()
{
    int[] x = new int[10];
    GCStruct y = new GCStruct() { arr = x };
    return y.arr[0];
}

In .NET 9, x is treated as escaping (through y) and lives on the heap. In .NET 10, the JIT understands that neither y nor x escapes, so it can stack-allocate the array and associated data. This also benefits types like Span (which is just a struct with a reference and a length) and unlocks more cases where spans and small arrays become stack-only, not heap noise.
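To make that last point about Span a bit more concrete, here is a small illustrative method of my own (it is not taken from the runtime docs or the performance blog): a tiny fixed-size buffer that is created, filled, read, and discarded inside a single method. That is precisely the non-escaping shape the new analysis targets, but whether a given method actually gets the stack allocation is a JIT decision, so verify with a profiler or an allocation benchmark rather than assuming it.

using System;

public static class MovingAverage
{
    // Averages the last three samples using a small temporary buffer
    // that never leaves this method.
    public static double LastThreeAverage(ReadOnlySpan<int> samples)
    {
        if (samples.Length < 3)
            throw new ArgumentException("Need at least 3 samples.", nameof(samples));

        // A small, fixed-size array of value types that never escapes:
        // on .NET 10 the JIT may place it on the stack instead of the heap.
        // (Span<int> window = stackalloc int[3]; would guarantee that explicitly.)
        int[] window = new int[3];
        samples.Slice(samples.Length - 3).CopyTo(window);

        int sum = 0;
        foreach (int value in window)
            sum += value;

        return sum / 3.0;
    }
}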
3. DATAS: The GC That Adapts to Your App Size

On the GC side, the key concept is DATAS – Dynamic Adaptation To Application Sizes.

- Introduced as an opt-in mode in .NET 8.
- Enabled by default in .NET 9.
- By the time you land on .NET 10 LTS, DATAS is the default GC behavior for most apps, with more tuning and guidance from the GC team.

3.1 What DATAS Actually Does

Official docs describe DATAS as a GC mode that:

- Adapts the heap size to the app’s memory requirements,
- Keeps the heap size roughly proportional to the live (long-lived) data size,
- Shrinks when the workload gets lighter and grows when it gets heavier.

That’s different from classic Server GC, which:

- Assumes your process “owns” the machine,
- Grows the heap aggressively if there’s memory available,
- May end up with very different heap sizes depending on hardware.

DATAS is especially targeted at bursty workloads and containerized apps where memory actually costs money and you might have many processes on the same node.

3.2 How You Control It

From the GC configuration docs, DATAS can be toggled via:

- Environment variable: DOTNET_GCDynamicAdaptationMode=1 → enable, DOTNET_GCDynamicAdaptationMode=0 → disable.
- runtimeconfig.json: "System.GC.DynamicAdaptationMode": 1 or 0.
- MSBuild property: GarbageCollectionAdaptationMode set to 1 or 0 in the project file.

But again: starting with .NET 9, it’s on by default, so in .NET 10 you typically only touch this if you have a very specific perf profile where DATAS isn’t a good fit.

4. What This Means for Your Apps

Putting it together:

You get fewer “silly” allocations for free. .NET 10’s runtime now:

- Stack-allocates more delegates, closures, spans, and small arrays when they don’t escape.
- Reduces the abstraction penalty of idiomatic C# (LINQ, foreach, lambdas, etc.), so you don’t have to micro-optimize everything yourself.

The GC behaves more like “pay for what you really use”. With DATAS:

- Your heap won’t balloon just because you moved your app to a bigger SKU.
- Memory usage tracks live data instead of “whatever the machine has spare”.

You still keep control when needed. If you have:

- A latency-critical, always-hot service on a big dedicated machine,
- Or you’ve benchmarked and found DATAS not ideal for a specific scenario,

…you can still flip DOTNET_GCDynamicAdaptationMode off and go back to classic Server GC semantics.

5. TL;DR for Busy Teams

If you’re scanning this on a Friday:

- Upgrading from .NET 8 → 10 LTS gives you: tuned DATAS GC as the default, better JIT escape analysis, and stack-allocation of more small arrays and delegates.
- You don’t need to rewrite your code to benefit; just recompile and deploy.
- For critical services, benchmark with and without DATAS (toggle via DOTNET_GCDynamicAdaptationMode) and pick what fits your SLOs (see the sketch below).
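One lightweight way to run that comparison is to watch the GC’s own view of the heap while a representative load is running, once with DOTNET_GCDynamicAdaptationMode=1 and once with 0. The helper below is an illustrative sketch (the load generation and the thresholds that matter are specific to your service); GC.GetGCMemoryInfo and GC.CollectionCount are standard APIs.

using System;

public static class HeapSnapshot
{
    // Call this periodically while your load test runs and compare the
    // numbers between the DATAS-on and DATAS-off configurations.
    public static void Print(string label)
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();

        Console.WriteLine(
            $"{label}: heap={info.HeapSizeBytes / (1024 * 1024)} MB, " +
            $"committed={info.TotalCommittedBytes / (1024 * 1024)} MB, " +
            $"gen0={GC.CollectionCount(0)}, gen1={GC.CollectionCount(1)}, gen2={GC.CollectionCount(2)}");
    }
}

If DATAS is doing its job for a bursty service, committed memory should come back down after a burst instead of staying pinned near its peak.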
That’s the memory game-changer in .NET 10: the runtime quietly moves more stuff off the heap, while the GC learns to grow and shrink based on your real live data, not just the machine it’s sitting on.

Fire-and-Forget Methods in C# — Best Practices & Pitfalls

When building modern applications, there are often situations where you want to perform tasks in the background without holding up the main flow of your application. This is where “fire-and-forget” methods come into play. In C#, a fire-and-forget method allows you to run a task without awaiting its completion. A common use case is sending a confirmation email after creating a user, but this also brings some limitations, particularly when dealing with scoped services. In this blog, we’ll walk through an example and explain why scoped services, like database contexts or HTTP clients, cannot be accessed in fire-and-forget methods.

Why Fire-and-Forget?

Fire-and-forget is useful in situations where you don’t need the result of an operation immediately, and it can happen in the background, for example:

- Sending emails
- Logging
- Notification sending

Here’s a common scenario where fire-and-forget comes in handy: sending a welcome email after a user is created in the system.

public async Task<ServiceResponse<User?>> AddUser(User user)
{
    var response = new ServiceResponse<User?>();
    try
    {
        // User creation logic (password hashing, saving to DB, etc.)
        _context.Users.Add(user);
        await _context.SaveChangesAsync();

        // Fire-and-forget task to send an email.
        // Pass the new user's id so the mail method can look the user up again.
        _ = Task.Run(() => SendUserCreationMail(user.Id));

        response.Data = user;
        response.Message = user.FirstName + " added successfully";
    }
    catch (Exception ex)
    {
        // Error handling
    }
    return response;
}

The method SendUserCreationMail sends an email after a user is created, ensuring that the main user creation logic isn’t blocked by the email-sending process.

private async Task SendUserCreationMail(int id)
{
    // This will throw, because _context is a scoped service that may already
    // be disposed by the time this code runs.
    var user = await _context.Users.FindAsync(id);

    var applicationUrl = "https://blogs.shahriyarali.com";

    string body = $@"
        <body>
            <p>Dear {user.FirstName},</p>
            <p>A new user has been created in the system:</p>
            <p>Username: {user.Username}</p>
            <p>Email: {user.Email}</p>
            <p>Welcome to the system! Please use the provided username and email to log in. You can access the system by clicking on the following link:</p>
            <p><a href='{applicationUrl}'>{applicationUrl}</a></p>
            <p>Best regards,</p>
            <p>Code With Shahri</p>
        </body>";

    var mailParameters = new MailParameters
    {
        Subject = $"New User Created - {user.Username}",
        Body = body,
        UserEmails = new List<UserEmail>
        {
            new() { Name = user.FirstName, Email = user.Email }
        }
    };

    await _mailSender.SendEmail(mailParameters);
}

In the code above, the SendUserCreationMail method is executed using Task.Run(). Since it’s a fire-and-forget task, we don’t await it, allowing the user creation process to complete without waiting for the email to be sent.

The Problem with Scoped Services

A major pitfall with fire-and-forget tasks is that you cannot reliably access scoped services (such as DbContext or ILogger) within the task. This is because fire-and-forget tasks continue to run after the HTTP request has been completed, and by that point, scoped services will be disposed of. For example, if _context or _mailSender were scoped services, they could be disposed of before the SendUserCreationMail task completes, leading to exceptions.

Why Can’t We Access Scoped Services?

Scoped services have a lifecycle tied to the HTTP request in web applications. Once the request ends, these services are disposed of, meaning they are no longer available in any background task that wasn’t awaited within the request lifecycle.
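To see where that request-bound lifetime comes from, it helps to look at how these services are usually registered. The Program.cs below is hypothetical (DataContext, IMailSender, MailSender, and UserService are names inferred from the fields used in the examples, and it assumes an ASP.NET Core project with implicit usings plus the EF Core SQL Server provider): AddDbContext registers the context with a scoped lifetime, so the container creates one instance per HTTP request and disposes it when the response completes, regardless of any Task.Run work still in flight.

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Scoped by default: one DataContext per HTTP request, disposed when the
// request ends. A fire-and-forget task that outlives the request therefore
// sees a disposed context.
builder.Services.AddDbContext<DataContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

// Hypothetical registrations for the other pieces used in the examples.
builder.Services.AddSingleton<IMailSender, MailSender>(); // singleton: safe to touch from Task.Run
builder.Services.AddScoped<UserService>();                // holds AddUser / SendUserCreationMail

var app = builder.Build();
app.MapControllers();
app.Run();

With registrations like these, the disposed DataContext is exactly what produces the ObjectDisposedException discussed next.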
In the example above, since the fire-and-forget email sending isn’t awaited, attempting to use scoped services will throw an ObjectDisposedException. To safely access scoped services in a fire-and-forget method, you can leverage IServiceScopeFactory to manually create a service scope, ensuring that the services are available for the task.

private async Task SendUserCreationMail(int id)
{
    // Create a service scope of our own, so the DataContext we use is not
    // tied to the (already completed) HTTP request.
    using var scope = _serviceScopeFactory.CreateScope();
    var _context = scope.ServiceProvider.GetRequiredService<DataContext>();

    var user = await _context.Users.FindAsync(id);

    var applicationUrl = "https://blogs.shahriyarali.com";

    string body = $@"
        <body>
            <p>Dear {user.FirstName},</p>
            <p>A new user has been created in the system:</p>
            <p>Username: {user.Username}</p>
            <p>Email: {user.Email}</p>
            <p>Welcome to the system! Please use the provided username and email to log in. You can access the system by clicking on the following link:</p>
            <p><a href='{applicationUrl}'>{applicationUrl}</a></p>
            <p>Best regards,</p>
            <p>Code With Shahri</p>
        </body>";

    var mailParameters = new MailParameters
    {
        Subject = $"New User Created - {user.Username}",
        Body = body,
        UserEmails = new List<UserEmail>
        {
            new() { Name = user.FirstName, Email = user.Email }
        }
    };

    await _mailSender.SendEmail(mailParameters);
}

Conclusion

Fire-and-forget methods in C# are useful for executing background tasks without blocking the main application flow, but they come with their own set of challenges, particularly when working with scoped services. By leveraging techniques like IServiceScopeFactory, you can safely access scoped services in fire-and-forget tasks without risking lifecycle management issues. Whether you're sending emails, logging, or processing notifications, ensuring proper resource management is crucial to prevent errors like ObjectDisposedException. Always weigh the pros and cons of fire-and-forget and consider alternative approaches like background services or message queuing for more robust solutions in larger systems (see the sketch at the end of this post).

To explore more on this topic, you can check out the following resources on Microsoft Learn:

- Task.Run Method
- Task asynchronous programming model
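As a closing illustration of the “background services or message queuing” alternative mentioned in the conclusion, here is a sketch of my own (not from the original post) of a minimal in-process queue: the request handler only enqueues a user id, and a long-lived BackgroundService drains the queue and creates its own scope per item, so no fire-and-forget task ever races the request’s lifetime. MailQueue and MailWorker are hypothetical names, and the worker reuses the same DataContext from the earlier examples.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Unbounded in-process queue of user ids that still need a welcome email.
public sealed class MailQueue
{
    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();

    public void Enqueue(int userId) => _channel.Writer.TryWrite(userId);

    public IAsyncEnumerable<int> DequeueAllAsync(CancellationToken ct) =>
        _channel.Reader.ReadAllAsync(ct);
}

// Long-lived worker: it owns its service scopes, so scoped services are safe here.
public sealed class MailWorker : BackgroundService
{
    private readonly MailQueue _queue;
    private readonly IServiceScopeFactory _scopeFactory;

    public MailWorker(MailQueue queue, IServiceScopeFactory scopeFactory)
    {
        _queue = queue;
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (int userId in _queue.DequeueAllAsync(stoppingToken))
        {
            // One scope per work item, disposed when the item is done.
            using var scope = _scopeFactory.CreateScope();
            var context = scope.ServiceProvider.GetRequiredService<DataContext>();

            var user = await context.Users.FindAsync(userId);
            if (user is null) continue;

            // ... build the MailParameters and send, as in SendUserCreationMail ...
        }
    }
}

// Registration (hypothetical):
//   builder.Services.AddSingleton<MailQueue>();
//   builder.Services.AddHostedService<MailWorker>();
// And in AddUser, replace Task.Run with: _mailQueue.Enqueue(user.Id);

Compared with Task.Run, a queue like this handles graceful shutdown and retries more naturally, which is why it tends to be the better fit as systems grow.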