performance
Azure Skilling at Microsoft Ignite 2025
The energy at Microsoft Ignite was unmistakable. Developers, architects, and technical decision-makers converged in San Francisco to explore the latest innovations in cloud technology, AI applications, and data platforms. Beyond the keynotes and product announcements was something even more valuable: an integrated skilling ecosystem designed to transform how you build with Azure. This year, Azure Skilling at Microsoft Ignite 2025 brought together distinct learning experiences, more than 150 hands-on labs, and multiple pathways to industry-recognized credentials—all designed to help you master the skills that matter most in today's AI-driven cloud landscape.

Just Launched at Ignite

Microsoft Ignite 2025 offered an exceptional array of learning opportunities, each designed to meet developers wherever they are on their skilling journey. Whether you joined us in person or on demand in the virtual experience, multiple touchpoints remain available to deepen your Azure expertise. Ignite 2025 is in the books, but you can still engage with the latest Microsoft skilling opportunities, including:

The Azure Skills Challenge provides a gamified learning experience that lets you compete while completing task-based achievements across Azure's most critical technologies. These challenges aren't just about badges and bragging rights—they're carefully designed to help you advance technical skills and prepare for Microsoft role-based certifications. The competitive element adds urgency and motivation, turning learning into an engaging race against the clock and your peers.

For those seeking structured guidance, Plans on Learn offer curated sets of content designed to help you achieve specific learning outcomes. These carefully assembled learning journeys include built-in milestones, progress tracking, and optional email reminders to keep you on track. Each plan represents 12-15 hours of focused learning, taking you from concept to capability in areas like AI application development, data platform modernization, or infrastructure optimization.

The Microsoft Reactor Azure Skilling Series, running December 3-11, brings skilling to life through engaging video content, mixing regular programming with special Ignite-specific episodes. This series will deliver technical readiness and programming guidance in a livestream format that's more digestible than traditional documentation. Whether you're catching episodes live with interactive Q&A or watching on demand later, you'll get world-class instruction that makes complex topics approachable.

Beyond Ignite: Your Continuous Learning Journey

Here's the critical insight that separates Ignite attendees who transform their careers from those who simply collect swag: the real learning begins after the event ends. Microsoft Ignite is your launchpad, not your destination. Every module you start, every lab you complete, and every challenge you tackle connects to a comprehensive learning ecosystem on Microsoft Learn that's available 24/7, 365 days a year. Think of Ignite as your intensive immersion experience—the moment when you gain context, build momentum, and identify the skills that will have the biggest impact on your work. What you do in the weeks and months following determines whether that momentum compounds into career-defining expertise or dissipates into business as usual.

For those targeting career advancement through formal credentials, Microsoft Certifications, Applied Skills, and the AI Skills Navigator provide globally recognized validation of your expertise.
Applied Skills focus on scenario-based competencies, demonstrating that you can build and deploy solutions, not simply answer theoretical questions. Certifications cover role-based scenarios for developers, data engineers, AI engineers, and solution architects. The assessment experiences include performance-based testing in dedicated Azure tenants where you complete real configuration and development tasks. And finally, the new AI Skills Navigator is an agentic learning space that brings AI-powered skilling experiences and credentials from Microsoft, LinkedIn Learning, and GitHub together in a single, unified experience.

Why This Matters: The Competitive Context

The cloud skills race is intensifying. While our competitors offer robust training and content, Microsoft's differentiation comes not from having more content—though our 1.4 million module completions last fiscal year and 35,000+ certifications awarded speak to scale—but from the integration of services to orchestrate workflows. Only Microsoft offers a truly unified ecosystem where GitHub Copilot accelerates your development, Azure AI services power your applications, and Azure platform services deploy and scale your solutions—all backed by integrated skilling content that teaches you to maximize this connected experience. When you continue your learning journey after Ignite, you're not just accumulating technical knowledge. You're developing fluency in an integrated development environment that no competitor can replicate. You're learning to leverage AI-powered development tools, cloud-native architectures, and enterprise-grade security in ways that compound each other's value. This unified expertise is what transforms individual developers into force multipliers for their organizations.

Start Now, Build Momentum, Never Stop

Microsoft Ignite 2025 offered the chance to compress months of learning into days of intensive, hands-on experience, but you can still take part through the on-demand videos, the Global Ignite Skills Challenge, the GitHub repos for the /Ignite25 labs, the Reactor Azure Skilling Series, and the curated Plans on Learn. These provide multiple entry points regardless of your current skill level or preferred learning style. But remember: the developers who extract the most value from Ignite are those who treat the event as the beginning, not the culmination, of their learning journey. They join hackathons, contribute to GitHub repositories, and engage with the Azure community on Discord and technical forums. The question isn't whether you'll learn something valuable from Microsoft Ignite 2025; that's guaranteed. The question is whether you'll convert that learning into sustained momentum that compounds over months and years into career-defining expertise. The ecosystem is here. The content is ready. Your skilling journey doesn't end when Ignite does—it accelerates.

Postgres as a Distributed Cache Unlocks Speed and Simplicity for Modern .NET Workloads
In the world of high-performance, modern software engineering, developers often face a tough tradeoff: how to achieve lightning-fast data retrieval without adding complexity, sacrificing reliability, or getting locked into specialized, external caching products or platforms. What if you could harness the power and flexibility of your existing Postgres database to solve this challenge? Enter the Microsoft.Extensions.Caching.Postgres library, a new nuget.org package that brings distributed caching to Postgres, unlocking speed, simplicity, and seamless integration for modern .NET workloads. In this article, we take a closer look at the Postgres caching store, which introduces a new option for .NET developers implementing a distributed cache such as HybridCache, paired with a Postgres database acting as the distributed backplane.

One data platform for multiple workloads

Postgres' reputation for reliability, extensibility, and standards compliance has long been respected, with Postgres databases driving some of today's largest and most popular platforms. Increasingly, developers, data engineers, and entrepreneurs alike are rallying to apply these benefits. One of the most compelling aspects of Postgres is its adaptability: it's a data platform that can simultaneously handle everything from transactional workloads to analytical queries, JSON documents to geospatial data, and even time-series and vectorized AI search. In an era of specialized services, Postgres is proving that one platform can do it all, and do it well. Intrepid engineers have also discovered that Postgres is often just as proficient at handling workloads traditionally supported by very different technology solutions, such as lake-house storage, pub-sub, message queues, job schedulers, and session store caches. These roles are all now being powered by Postgres databases, while Postgres simultaneously continues to deliver the same scalable, battle-tested, mission-critical, ACID-compliant relational database operations we've all come to expect.

When speed matters most

Database-backed cache stores are by no means a new concept; the first version of a database cache library for .NET, Microsoft.Extensions.Caching.SqlServer, was made available on nuget.org in June 2016. This library included several impressive features, such as expiration policies, serialization, and dependency injection, making it ideal for multi-instance applications requiring shared cache functionality. It was especially useful in environments where Redis or other cache providers were not available. The convenience of leveraging a transactional database as a distributed cache comes with a tradeoff, especially when compared against services such as Redis or Memcached; in a word: speed. All the features that make your data durable, reliable, and consistent require additional clock cycles and I/O operations, and this overhead shows up as a performance cost compared to in-memory stores and dedicated caching systems. What if it were possible to keep all the familiar and convenient interfaces for connecting to your database, while precisely configuring specific tables to shed the burden of crash consistency and replication logging? What if, for only the tables we select, we could trade durability for pure speed? Enter Postgres UNLOGGED tables.
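To make the idea concrete before digging into the details, here is a minimal sketch of the DDL involved, driven from .NET with Npgsql. The table name and columns are purely illustrative; the actual cache table created and managed by the package may look different.

using Npgsql;

const string connectionString = "Host=localhost;Database=app;Username=app;Password=app";

await using var conn = new NpgsqlConnection(connectionString);
await conn.OpenAsync();

// An UNLOGGED table skips the Write Ahead Log: writes are faster, but the table is
// truncated after a crash and is not replicated - an acceptable tradeoff for cache data.
// The table name and columns here are hypothetical, for illustration only.
await using (var create = new NpgsqlCommand(
    """
    CREATE UNLOGGED TABLE IF NOT EXISTS cache_entries (
        key         text PRIMARY KEY,
        value       bytea NOT NULL,
        expires_at  timestamptz NOT NULL
    );
    """, conn))
{
    await create.ExecuteNonQueryAsync();
}

// The setting is per-table and can be flipped either way on an existing table.
await using (var toggle = new NpgsqlCommand(
    "ALTER TABLE cache_entries SET LOGGED;", conn))
{
    await toggle.ExecuteNonQueryAsync();
}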
Postgres' adaptable performance

Another compelling aspect of Postgres databases is the ability to significantly speed up write performance by bypassing the Write Ahead Log (WAL). The WAL is designed to ensure that data is crash-consistent (and replicable): writing to your database is a transparent two-step process in which your data is written to your database tables and the changes are also committed to a separate file to guarantee the data's persistence. In some circumstances, trading that crash consistency for performance is worth it, especially for short-lived, temporary data such as a cache store. The UNLOGGED setting is scoped to individual tables, so logged and unlogged tables can operate side by side within the same database instance. The net result: Postgres can provide incredibly performant response times when used as a distributed cache, rivaling the performance of other popular cache stores, while also providing the simplicity, familiarity, and consistency that the Postgres engine naturally offers.

HybridCache for your .NET solutions

It was this capability*, combined with inspiration from the SQL Server library, that led to the creation of the Microsoft.Extensions.Caching.Postgres package on nuget.org. As a longtime .NET developer, I have personally witnessed the incredible evolution of the .NET platform and the amazing growth, enhancements, and improvements to the languages, the tooling, the runtimes, and the incredible people behind each of these contributions. The recent addition of HybridCache is especially exciting to consider incorporating into your .NET solutions because it dramatically simplifies the steps required to add caching to your project, while simultaneously linking an in-memory cache with a second-level tiered cache service. This seamless integration gives your application the best of both worlds: blazing-fast in-memory retrieval paired with a resilient, similarly performant backplane that acts as a fail-safe when an application instance blinks, scales up or out, and so on. Don't just take my word for it; let's look at some benchmarks comparing a Redis cache and a Postgres database. The tests comprise synchronous and asynchronous operations across three payload sizes (128, 1,024, and 10,240 bytes) for read and write, with both single and concurrent messages, at fixed and random positions. The tests are further divided into two types of cache expiration windows: absolute (non-sliding) and sliding (relative). Consider the output from the benchmark suite, keeping in mind that the results are measured in microseconds, where 1,000 microseconds equals 1 millisecond. What do these results reveal? In certain respects, there aren't many surprises. Bespoke memory-based key-value cache systems like Redis continue to outperform relational databases in terms of pure speed and low latency. What is really exciting to see is that Postgres comes very close to Redis performance for the more intensive operations!

Finding the right fit for your solution

I'm excited to make this Postgres package available to everyone considering distributed caching in their solution designs. The combination of HybridCache paired with your choice of backplane will allow you to select the technologies and tools best suited to your solution.
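As a rough illustration, wiring this up in an ASP.NET Core app might look something like the following. AddHybridCache and HybridCacheEntryOptions come from Microsoft.Extensions.Caching.Hybrid; the AddDistributedPostgresCache method name and its option names are assumptions for this sketch, so check the package documentation for the exact registration API.

using Microsoft.Extensions.Caching.Hybrid;

var builder = WebApplication.CreateBuilder(args);

// Register the Postgres-backed IDistributedCache as the second-level backplane.
// NOTE: the method and option names below are assumptions for illustration;
// consult the Microsoft.Extensions.Caching.Postgres README for the real API.
builder.Services.AddDistributedPostgresCache(options =>
{
    options.ConnectionString = builder.Configuration.GetConnectionString("CacheDb");
});

// HybridCache layers a fast in-process cache on top of whatever IDistributedCache
// is registered, giving local speed plus a shared, resilient second level.
builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        Expiration = TimeSpan.FromMinutes(5),          // lifetime in the distributed tier
        LocalCacheExpiration = TimeSpan.FromMinutes(1) // lifetime in the in-process tier
    };
});

var app = builder.Build();
app.Run();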
Our GitHub repo also contains a variety of sample applications that demonstrate how to configure and use HybridCache together with the Postgres distributed cache library, both in a console app and in an Aspire-based sample Web API. I encourage you to explore these examples and share your thoughts and ideas. I look forward to any feedback you may have about the Microsoft.Extensions.Caching.Postgres package. Keep advancing, keep improving, and keep contributing; be a "builder" and an active part of our incredible community!

* This package extension is highly configurable: you can choose whether to enable or disable bypassing the WAL for your cache table, along with several other options that can be adjusted for your particular use case.
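As a taste of what those samples demonstrate, here is a minimal sketch of the read-through pattern HybridCache enables once the services above are registered. The service class, cache key, and data-loading method are invented for illustration; GetOrCreateAsync is the standard HybridCache API.

using Microsoft.Extensions.Caching.Hybrid;

public sealed class ProductCatalog(HybridCache cache)
{
    // Read-through: the first caller pays the cost of LoadFromDatabaseAsync; subsequent
    // callers (on any instance sharing the Postgres backplane) get the cached value.
    public async ValueTask<string> GetDescriptionAsync(int productId, CancellationToken ct = default)
    {
        return await cache.GetOrCreateAsync(
            $"product:description:{productId}",                    // cache key
            async token => await LoadFromDatabaseAsync(productId, token),
            cancellationToken: ct);
    }

    private static Task<string> LoadFromDatabaseAsync(int productId, CancellationToken ct)
        => Task.FromResult($"Description for product {productId}"); // placeholder data source
}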
Calculating performance counters memory and network usage as percentage

Hi everyone, calculating memory and network usage as a percentage of the total resource is a classic problem for Windows performance counters. The issue is that the available counters are absolute numbers, but you don't necessarily know the total for these resources on a given machine. Has anyone come up with a clever way to do this in OMS and Log Analytics? For reference, here is the query I'm using to get the average for Bytes Total/sec:

// Average network throughput (Bytes Total/sec) per computer and NIC, in hourly bins
Perf
| where ObjectName == "Network Interface" and CounterName == "Bytes Total/sec"
| where TimeGenerated > startofday(now()-22d) and TimeGenerated < endofday(now()-1d)
| summarize AggregatedValue = avg(CounterValue) by Computer, InstanceName, bin(TimeGenerated, 1hour)

Min and Max memory setup on SQL server
We are running a BizTalk Server 2010 environment that processes over 150,000 messages per day during peak business hours. We have a SQL Server instance running on a separate Windows Server 2008 R2 machine. For the past few weeks we have been experiencing intermittent outages, and the error logged on the BizTalk app server says "Login is from an untrusted domain...". Checking the logs on the database server, we see NETLOGON errors (unable to contact AD for credential validation) due to "unavailable memory". CPU and memory utilization on the SQL Server machine sit at 95-100% most of the time. The current settings on the SQL Server are: RAM 16 GB, with SQL Server min and max memory allocation set to 12 GB. ENTSSO is also running on the same SQL Server machine. Please suggest any settings changes on the SQL Server that could reduce the high utilization and free up some memory for background Windows processes and ENTSSO.

WVD Gateway Performance Issues?
Hey everyone, we have been chugging along just fine with our WVD installation. All of our users are remote at this point due to COVID-19, and everything had been fine. This morning, around 9:30, we started seeing performance issues with connections to the desktop. By performance issues I mean slow mouse movements, slowness scrolling through a website or email, and the like. There are no indicators of performance issues on the WVD hosts themselves: disk, CPU, and memory are all fine. I have used Horizon View and XenDesktop pretty extensively in the past, and this felt like a scenario where bandwidth to the gateway/connection broker is saturated or seeing high latency/packet loss. I would normally blame this on an individual's home internet connection or coffee shop Wi-Fi, but it is affecting all of my users (including myself). I wish there were some way to see or manage the performance of the WVD connection broker, but I am not aware of any. Is anyone else seeing this? I did see the thread about drops, but I do not believe this is related.

Azure app service - http performance
Hi, as part of a project we have multiple app services running on Azure. These services communicate with each other via HTTP. As part of this project, we started running load tests and noticed surprisingly slow results, even with what we consider a small number of requests. In order to identify performance issues, I created two small web apps, one using .NET and one using Node.js. Both apps provide a REST API that is called from our load tests running in VS Online. The apps call a single Azure Function that does a simple Thread.Sleep(1000) and returns; no other processing is done. Trying both vertical and horizontal scaling, I got the following results (average response time in seconds):

Single B1 instance
  # of requests    .NET    Node.js
  100              1.5     1.7
  300              3.1     6.6
  500              7.1     23.4

Single B3 instance
  # of requests    .NET    Node.js
  100              1.7     1.5
  300              5.3     4
  500              6.5     10.2

3 B1 instances
  # of requests    .NET    Node.js
  100              1.4     3.5
  300              4.6     4.9
  500              6.7     15.1

3 B3 instances
  # of requests    .NET    Node.js
  100              1.4     3.5
  300              4.6     4.9
  500              7.5     7.2

Looking at these results, it seems like the choice of technology (.NET or Node.js) does not have a big impact on performance. In general, however, the average response times seem very long, especially since there is no real processing or business logic in the applications themselves. Of course, 500 concurrent requests corresponds to a much larger number of users on the platform, but it still seems like this should take a lot less time than the test results show. The configuration of both applications is completely default, so I am wondering if there is anything we should consider to improve performance. This is a rather important issue because we have a strong requirement of response times under 3 seconds. Thanks and best regards, Juergen Ziegler
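For context, a minimal sketch of the kind of delay function described above might look like this, using the classic in-process Azure Functions model; the function name and authorization level are illustrative, not the poster's actual code.

using System.Net;
using System.Net.Http;
using System.Threading;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SleepFunction
{
    // HTTP-triggered function that simulates one second of work and returns,
    // matching the test setup described above (no real processing).
    [FunctionName("SleepFunction")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        Thread.Sleep(1000);
        return req.CreateResponse(HttpStatusCode.OK);
    }
}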