Azure Hardware Infrastructure
Unleashing GitHub Copilot for Infrastructure as Code
Introduction

In the world of infrastructure management, things are always changing. Teams want solutions that work, scale to handle big tasks, and won't let them down. As more companies move to cloud-based systems and adopt Infrastructure as Code (IaC), the role of the people who run that infrastructure is becoming even more important, and they face new challenges in setting everything up and keeping it running smoothly.

The Challenges Faced by Infrastructure Professionals

Complexity of IaC: Managing infrastructure through code introduces a layer of complexity. Infrastructure professionals often grapple with the intricate syntax and structure required by tools like Terraform and PowerShell. This complexity can lead to errors, delays, and increased cognitive load.

Consistency Across Environments: Achieving consistency across multiple environments (development, testing, and production) poses a significant challenge. Maintaining uniform configurations is crucial for the reliability and stability of the deployed infrastructure.

Learning Curve: The learning curve associated with IaC tools and languages can be steep for those new to the domain. As teams grow and diversify, onboarding members with varying levels of expertise becomes a hurdle.

Time-Consuming Development Cycles: Crafting infrastructure code manually is a time-consuming process. Infrastructure professionals often find themselves reinventing the wheel, writing boilerplate code, and handling repetitive tasks that could be automated.

Unleashing GitHub Copilot for Infrastructure as Code

In response to these challenges, using GitHub Copilot to generate infrastructure code is helping to revolutionize the way infrastructure is written, addressing the pain points experienced by professionals in the field.

The Significance of GitHub Copilot for Infrastructure

Code Generation with Accuracy: Copilot harnesses the power of machine learning to interpret the intent behind prompts and swiftly generate precise infrastructure code. It understands the context of infrastructure tasks, allowing professionals to express their requirements in natural language and receive corresponding code suggestions.

Streamlining the IaC Development Process: By automating the generation of infrastructure code, Copilot significantly streamlines the IaC development process. Infrastructure professionals can focus on higher-level design decisions and business logic rather than wrestling with syntax intricacies.

Consistency Across Environments and Projects: GitHub Copilot supports consistency across environments by generating standardized code snippets. Whether deploying resources to development, testing, or production, it helps maintain uniform configurations.

Accelerating Onboarding and Learning: For new team members and those less familiar with IaC, GitHub Copilot serves as an invaluable learning aid. It provides real-time examples and best practices, fostering a collaborative environment where knowledge is shared seamlessly.

Efficiency and Time Savings: The efficiency gains brought about by GitHub Copilot are substantial. Infrastructure professionals can see a dramatic reduction in development cycles, allowing faster iteration and deployment of infrastructure changes.
Copilot in Action

Prerequisites

1. Install the latest version of Visual Studio Code: https://code.visualstudio.com/download
2. Have a GitHub Copilot license (a personal free trial or your company/enterprise GitHub account), install the Copilot extension, and sign in from Visual Studio Code: https://docs.github.com/en/copilot/quickstart
3. Install the PowerShell extension for VS Code, since PowerShell is used for the IaC samples here.

The first example is PowerShell code generated using VS Code and GitHub Copilot, demonstrating how to create a simple Azure VM. The workflow uses a straightforward prompt written as a # comment, with the underlying code generated automatically in the VS Code editor. A second example creates an Azure VM scale set with minimum and maximum instance counts, again driven by a # prompt. Representative sketches of both scripts appear at the end of this article. The generated PowerShell scripts can be executed either from the local system or from Cloud Shell in the Azure portal. Terraform and DevOps pipeline code can be generated with Copilot in the same way.

Conclusion

In summary, GitHub Copilot is a big step forward for infrastructure as code. It helps professionals overcome the challenges above and brings about a more efficient and collaborative way of working. The examples we've looked at show how it works, what technologies it uses, and how it can be applied in real life. This guide aims to give infrastructure professionals the information they need to improve how they practice infrastructure as code.
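As a point of reference, here is a minimal sketch of the kind of script Copilot typically produces for the first prompt. It is illustrative only: the resource group, VM name, image, and size are assumptions rather than the post's actual output, and it requires the Az PowerShell module and an authenticated session.

# Create a simple Azure VM
Connect-AzAccount                                  # sign in to Azure interactively
$rg       = "rg-copilot-demo"                      # hypothetical resource group name
$location = "eastus"
New-AzResourceGroup -Name $rg -Location $location
$cred = Get-Credential -Message "Admin credentials for the new VM"
New-AzVM -ResourceGroupName $rg `
         -Name "vm-demo-01" `
         -Location $location `
         -Image "Win2019Datacenter" `
         -Size "Standard_D2s_v3" `
         -Credential $cred

And a comparable sketch for the scale set example, using the simplified parameter set of New-AzVmss. New-AzVmss provisions a fixed starting instance count; the minimum and maximum bounds described in the prompt would then be enforced by attaching an Azure Monitor autoscale setting to the scale set. Names and counts are again illustrative.

# Create a VM scale set starting at 2 instances
New-AzVmss -ResourceGroupName $rg `
           -VMScaleSetName "vmss-demo" `
           -Location $location `
           -VirtualNetworkName "vnet-demo" `
           -SubnetName "subnet-demo" `
           -PublicIpAddressName "pip-vmss-demo" `
           -LoadBalancerName "lb-vmss-demo" `
           -UpgradePolicyMode "Automatic" `
           -ImageName "Win2019Datacenter" `
           -InstanceCount 2 `
           -Credential $cred
# A scale range (for example, minimum 2 and maximum 10 instances) would be
# configured with an Azure Monitor autoscale setting targeting this scale set.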
Reimagining AI at scale: NVIDIA GB300 NVL72 on Azure

By Gohar Waqar, CVP of Cloud Hardware Infrastructure Engineering, Microsoft

Microsoft was the first hyperscaler to deploy the NVIDIA GB300 NVL72 infrastructure at scale, with a fully integrated platform engineered to deliver unprecedented compute density in a single rack to meet the demands of agentic AI workloads. Each GB300 NVL72 rack packs 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs with up to ~136 kW of IT load, enabled by Microsoft's custom liquid cooling heat exchanger unit (HXU) system.

Using a systems approach to architect GB300 clusters, Azure's new NDv6 GB300 VMs include infrastructure innovation across every layer of the stack, including smart rack management for fleet health, innovative cooling systems, and efficient deployment features that make scaling high-density AI clusters easier than ever. With purpose-built hardware engineered for a unified platform, from silicon to systems to software, Azure's deployment of NVIDIA GB300 NVL72 is a clear representation of Microsoft's commitment to raising the bar on accelerated computing, enabling training of multitrillion-parameter models and high throughput on inference workloads.

Unique features of the NVIDIA GB300 NVL72 system on Microsoft Azure

- Ultra-dense AI rack: The GB300 rack integrates 72 NVIDIA Blackwell Ultra GPUs (each with 288 GB of HBM3e) and 36 Grace CPUs, effectively delivering supercomputer-class performance in a single rack.
- Advanced liquid cooling: Each rack uses direct-to-chip liquid cooling. In air-cooled data centers, liquid cooling heat exchanger unit (HXU) radiator units in each rack dissipate ~136 kW to room air. In facilities with chilled water, the rack connects directly to facility water.
- Smart rack management: The system is equipped with an embedded controller that monitors power, temperature, coolant flow, and leak sensors in real time. It can auto-throttle or shut down components if conditions go out of range, and it provides full telemetry for remote fleet diagnostics.
- Fully integrated security and offload features: The design also includes the Azure Integrated Hardware Security Module (HSM) chip and the Azure Boost offload accelerator for advanced I/O and security performance.
- Scalable datacenter deployment: GB300 arrives as an integrated rack, with compute trays, NVIDIA NVLink™ fabric, cooling, and power shelves pre-installed. Deployment is streamlined, requiring only power and cooling connections and initial checks; the rack then self-regulates its cooling and power distribution.

Purpose-built architecture designed for rapid deployment and scale

At its core, GB300 is built to maximize AI compute density within a standard data center footprint. It is a single-rack AI inference and training cluster with unprecedented component density. Compared to the previous generation (NVIDIA GB200 NVL72), it introduces higher-performance GPUs (from ~1.2 kW to ~1.4 kW each, with more HBM3e memory), a ~50% boost in NVFP4 throughput, and a revamped power and cooling design to handle ~20% greater thermal and power load. The liquid cooling system for the GPU module is enhanced with a new cold plate and an improved leak detection assembly for safe, high-density operation. Innovations in the purpose-built Azure Boost accelerator for I/O offload unlock higher bandwidth, while the custom Datacenter-secure Control Module (DC-SCM) introduces a secure, modular control plane built on a hardware root of trust, backed by the Azure Integrated Hardware Security Module (HSM).
Together, these advancements enable fleet-wide manageability, strengthening security and operational resilience at scale and meeting the demands of hyperscale environments.

Cooling systems designed for deployability and global resiliency

To dissipate ~136 kW of heat per rack, GB300 relies on direct liquid cooling for all major components. To offer resiliency and wide deployability across Microsoft's datacenter footprint, the cooling designs support both facility-water and air-cooled environments. Both approaches use a closed coolant loop inside the rack with a treated water-glycol fluid. Leak detection cables line each tray, and the base of the rack is equipped with smart management protocols to address potential leaks. This liquid cooling approach is highly efficient and reliable: it allows GB300 to run with warmer coolant temperatures than traditional datacenter water, improving overall power usage effectiveness (PUE, the ratio of total facility energy to IT equipment energy).

Smart management, fleet health & diagnostics

Each GB300 rack is a "smart IT rack" with an embedded management controller that oversees its operation. This controller is supported by a rack control module that serves as the brain of the rack, providing comprehensive monitoring and automation for power, cooling, and health diagnostics. By delivering an integrated "single pane of glass" view of each rack's health, GB300 makes management at scale feasible despite the complexity. Once installed, the rack self-regulates its power and thermal environment, adjusting fan and pump speeds automatically, and isolates faults, reducing the manual effort needed to keep the cluster running optimally. Customers can focus on their workloads, confident that the infrastructure is continuously self-monitoring and safeguarding itself. In addition, the rack control module monitors and moderates GPU peak power consumption and other power management scenarios. These robust design choices reflect a fleet-first mindset, maximizing uptime and easing diagnostics in large deployments.

Efficient and streamlined deployment

As Microsoft scales thousands of GB300 racks for increased AI supercomputing capacity, fast and repeatable deployment is critical. GB300 introduces a new era of high-density AI infrastructure, tightly integrating cutting-edge hardware (Grace CPUs, Blackwell Ultra GPUs, and NVLink connectivity) with innovations in both power delivery and liquid cooling. Crucially, it does so with an eye toward operational excellence: built-in management, health diagnostics, and deployment-friendly design mean that scaling up AI clusters with GB300 can be done rapidly and reliably. With its unprecedented compute density, intelligent self-management, and flexible cooling options, the GB300 platform enables organizations to scale rapidly with the latest AI supercomputer hardware while maintaining the reliability and serviceability expected in Azure's promise to customers. GB300 unlocks next-level AI performance delivered in a package engineered for real-world efficiency and fleet-scale success.
Behind the Azure AI Foundry: Essential Azure Infrastructure & Cost Insights

What is Azure AI Foundry?

Azure AI Studio has been renamed Azure AI Foundry. Azure AI Foundry is a unified AI development platform where teams can efficiently manage AI projects, deploy and test generative AI models, integrate data for prompt engineering, define workflows, and implement content safety filters. This powerful tool enhances AI solutions with a wide range of functionality; it is a one-stop shop for everything you need for AI development.

Azure AI Hubs are collaborative workspaces for developing and managing AI solutions. To use AI Foundry's features effectively, you need at least one Azure AI Hub. An Azure AI Hub can host multiple projects, and each project includes the tools and resources needed to create a specific AI solution. For example, you can set up a project to help data scientists and developers work together on building a custom Copilot business application. You can use Azure AI Foundry to create an Azure AI Hub, or create a hub while creating a new project. This creates an AI Hub resource in your Azure subscription, in the resource group you specify, providing a workspace for collaborative AI development. https://ai.azure.com/

Azure Infrastructure

The Azure AI Foundry environment uses Azure's robust AI infrastructure to facilitate the development, deployment, and management of AI models across various scenarios. The Azure resources required to deploy the environment are listed below, with the resource provider, kind, and purpose for each (a sample PowerShell deployment sketch appears at the end of this article). Make sure the corresponding resource providers are registered for your subscription before deploying these resources.

- Azure AI Foundry. Provider: Microsoft.CognitiveServices/accounts; kind: AIServices. Enables intelligent and efficient interaction across Agents, Evaluations, Azure OpenAI, Speech, Vision, Language, and Content Understanding services, leveraging Azure's AI capabilities to deliver comprehensive solutions for multimodal understanding, decision-making, and automation.
- Azure AI Foundry project. Provider: Microsoft.CognitiveServices/accounts/projects; kind: AIServices. A subresource of the Azure AI Foundry account above.
- Azure AI Foundry Hub. Provider: Microsoft.MachineLearningServices/workspaces; kind: Hub. Based on the Azure Machine Learning workspace resource type, this serves as a central hub for managing machine learning experiments, models, and data, and provides capabilities for creating, organizing, and collaborating on AI projects.
- Azure AI Foundry Project. Provider: Microsoft.MachineLearningServices/workspaces; kind: Project. Within an Azure AI Foundry hub you can create projects, which let you organize your work, collaborate with others, and track experiments for specific tasks or use cases, providing a structured environment for AI development.
- Azure OpenAI Service. Provider: Microsoft.CognitiveServices/accounts; kind: AI Services. An Azure AI Services model-as-a-service endpoint provider, including the GPT-4/4o and Ada text embedding models.
- Azure AI Search. Provider: Microsoft.Search/searchServices; kind: Search Service. Creates indexes on your data and provides search capabilities for your projects.
- Azure Storage Account. Provider: Microsoft.Storage/storageAccounts; kind: Storage. Associated with the Azure AI Foundry workspace; stores artifacts for your projects (for example, flows and evaluations).
- Azure Key Vault. Provider: Microsoft.KeyVault/vaults; kind: Key Vault. Associated with the Azure AI Foundry workspace; stores secrets such as connection strings for resource connections.
- Azure Container Registry (optional). Provider: Microsoft.ContainerRegistry/registries; kind: Container Registry. Stores Docker images created when using a custom runtime for prompt flow.
- Azure Application Insights. Provider: Microsoft.Insights/components; kind: Monitoring. An Application Insights instance associated with the Azure AI Foundry workspace; used for application-level logging from deployed prompt flows.
- Log Analytics Workspace (optional). Provider: Microsoft.OperationalInsights/workspaces; kind: Monitoring. Used for log storage and analysis.
- Event Grid. Provider: Microsoft.Storage/storageAccounts/providers/extensionTopics; kind: Event Grid system topic. Automates workflows by triggering actions in response to events across Azure services, ensuring dynamic and efficient operations in an Azure AI solution.

AI Foundry Environment

(Figures in the original post: Azure portal view and AI Foundry portal view of the deployed environment.)

All dependent resources are connected to the hub, and some resources (Azure OpenAI and Azure AI Search) can be shared across projects.

Pricing

Since Azure AI Foundry is assembled from multiple Azure services, pricing depends on architectural decisions and usage. When building your own Azure AI solution, it is essential to consider the costs it accrues in Azure AI Foundry. Costs arise in the following areas:

1. Compute hours and tokens: Rather than fixed monthly costs, Azure AI Hubs, Azure OpenAI, and projects are billed based on compute hours and tokens used. Be mindful of resource utilization to avoid unexpected charges.
2. Networking costs: By default, the hub network configuration is public, but if you secure the Azure AI Hub there are costs associated with data transfer.
3. Additional resources: Beyond the AI services themselves, consider other Azure resources such as Azure Key Vault, Storage, Application Insights, and Event Grid, which charge based on transactions and data volume.

(Figure: AI Foundry cost pane.)

In the Azure Pricing Calculator you can now find the upfront monthly cost of these resources directly, under the Example Scenarios tab in the Azure AI Foundry scenario; this cost calculation feature is now generally available. You can also use Azure Cost Management and Azure resource tags for a detailed resource-level cost breakdown. Note that adding vector search in Azure AI Search requires an Azure OpenAI embedding model: the text-embedding-ada-002 (version 2) model will be deployed if it is not already, and adding vector embeddings will incur usage on your account. Vector search itself is available in all Azure AI Search tiers, in all regions, at no extra charge. If you need to group the costs of these services together, it is recommended to create hubs in one or more dedicated resource groups and subscriptions in your Azure environment. You can navigate to your resource group's cost estimate from "view cost of resources" in Azure AI Foundry.

(Figure: Azure Pricing Calculator.)

To learn more about Azure AI Foundry pricing, see Azure AI Foundry - Pricing | Microsoft Azure.

Conclusion

Azure AI Foundry offers a path forward for enterprises serious about AI transformation: not just experiments, but scalable, governable, cost-predictable, and responsible AI systems built on the robust infrastructure of the Azure cloud. This integration helps organizations meet their business goals while providing a competitive edge in an AI-driven market.

Resources and getting started with Azure AI

- Azure AI Portfolio: Explore Azure AI.
- Azure AI Infrastructure: Microsoft AI at Scale; Azure AI Infrastructure.
- Azure OpenAI Service: Azure OpenAI Service documentation.
- Explore the playground and customization in the Azure AI Foundry portal.
- Copilot Studio: Copilot Learning Hub. Step 1: Understand Copilot; Step 2: Adopt Copilot; Step 3: Extend Copilot; Step 4: Build Copilot. Stay up to date on Copilot with What's new in Copilot Studio.
- GPT-4.5 model request: MS form link. Please note this is currently limited to the US region only, as Azure AI infrastructure is undergoing significant advancements and continually evolving to meet the demands of modern technology and innovation.
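As referenced in the Azure Infrastructure section above, here is a minimal PowerShell sketch of provisioning the core dependent resources for an AI Foundry hub. It is illustrative rather than the official setup path: all names, SKUs, and the region are assumptions, it requires the Az modules (Az.Accounts, Az.Resources, Az.CognitiveServices, Az.Search, Az.Storage, Az.KeyVault), and the hub itself is typically created from the AI Foundry portal, which can then connect to these resources.

# Provision the core dependent resources for an Azure AI Foundry hub
Connect-AzAccount
$rg       = "rg-aifoundry-demo"        # hypothetical resource group name
$location = "eastus"
New-AzResourceGroup -Name $rg -Location $location

# Azure AI Services account (Microsoft.CognitiveServices/accounts, kind AIServices)
New-AzCognitiveServicesAccount -ResourceGroupName $rg -Name "aiservices-demo" `
    -Type "AIServices" -SkuName "S0" -Location $location

# Azure AI Search (Microsoft.Search/searchServices) for indexing and vector search
New-AzSearchService -ResourceGroupName $rg -Name "search-aifoundry-demo" `
    -Sku "Standard" -Location $location

# Storage account (Microsoft.Storage/storageAccounts) for project artifacts
New-AzStorageAccount -ResourceGroupName $rg -Name "staifoundrydemo001" `
    -SkuName "Standard_LRS" -Location $location

# Key Vault (Microsoft.KeyVault/vaults) for connection secrets
New-AzKeyVault -ResourceGroupName $rg -VaultName "kv-aifoundry-demo" `
    -Location $location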
Mt Diablo - Disaggregated Power Fueling the Next Wave of AI Platforms

AI platforms have quickly pushed the industry from rack power levels near 20 kilowatts to a hundred kilowatts and beyond in the span of just a few years. To enable the largest accelerator pod size within a physical rack domain, and to enable scalability between platforms, we are moving to a disaggregated power rack architecture. Our disaggregated power rack is known as Mt Diablo and comes in both 48-volt and 400-volt flavors. This shift lets us devote more of the server rack to AI accelerators and, at the same time, gives us the flexibility to scale power to meet the needs of today's platforms and the platforms of the future. This forward-thinking strategy enables us to move faster and to foster the collaboration needed to power the world's most complex AI systems.
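A rough back-of-the-envelope calculation illustrates why the distribution voltage matters at these power levels. Taking the ~136 kW rack load cited for GB300 earlier on this page (the comparison itself is simple Ohm's-law arithmetic, not a figure from the original post), current for a fixed power draw scales inversely with voltage, I = P / V:

\[
I_{48\,\mathrm{V}} = \frac{136\,000\,\mathrm{W}}{48\,\mathrm{V}} \approx 2833\,\mathrm{A},
\qquad
I_{400\,\mathrm{V}} = \frac{136\,000\,\mathrm{W}}{400\,\mathrm{V}} = 340\,\mathrm{A}.
\]

Lower current at the same power permits smaller busbars and lower resistive (I²R) distribution losses, which is one reason a disaggregated power rack offering a 400-volt flavor scales more gracefully as rack loads climb past 100 kW.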
Azure Extended Zones: Optimizing Performance, Compliance, and Accessibility

Azure Extended Zones are small-footprint Azure extensions located in specific metros or jurisdictions to support low-latency and data residency workloads. They enable users to run latency-sensitive applications close to end users while maintaining compliance with data residency requirements, all within the Azure ecosystem.
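To make the deployment model concrete, here is a minimal PowerShell sketch of placing a VM in an Extended Zone. It is an illustrative sketch, not content from the original post: it assumes the -EdgeZone parameter of New-AzVM (the Az PowerShell mechanism for targeting Edge/Extended Zone locations), a hypothetical zone name of "losangeles", and a subscription that has been granted access to that zone.

# Deploy a VM into an Azure Extended Zone, close to end users
Connect-AzAccount
$rg       = "rg-edgezone-demo"         # hypothetical names throughout
$location = "westus"                   # Azure region that parents the Extended Zone
$edgeZone = "losangeles"               # hypothetical Extended Zone identifier
New-AzResourceGroup -Name $rg -Location $location
$cred = Get-Credential -Message "Admin credentials for the new VM"
New-AzVM -ResourceGroupName $rg `
         -Name "vm-edge-01" `
         -Location $location `
         -EdgeZone $edgeZone `
         -Image "Win2019Datacenter" `
         -Size "Standard_D2s_v3" `
         -Credential $cred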