Ramp up with me
Ramp up with me... on HPC: What is high-performance computing (HPC)?
Over the next several months, let's take a journey together and learn about the different HPC use cases. Join me as I dive into each use case, and for some of them I'll even try my hand at the workload for the first time. We'll talk about what went well and any issues I ran into. And maybe you'll get to hear a little about our customers and partners along the way.
Ramp up with me... on HPC: Understanding Virtual Machines, CPUs, and GPUs
There are a lot of different products you need to successfully complete a high-performance computing (HPC) workload. You'll hear several terms regularly, like virtual machines, CPUs, GPUs, compute power, and compute constrained. While these are really important to talk about in high-performance computing, they were difficult concepts for me to grasp. Personally, I am a visual learner, and I struggle with theoretical concepts that I can't physically see. So I'll do my best to both explain the concepts and show you the hardware so you can visualize what they are.
Ramp up with me... on HPC: What is Rendering?
New to learning about high-performance computing (HPC) and want to know what rendering is all about? Read the blog to take a journey into creating a new rendered animation and learn a little more about what this workload is and how it relates to HPC.
Azure's ND GB200 v6 Delivers Record Performance for Inference Workloads
Achieving peak AI performance requires both cutting-edge hardware and a finely optimized infrastructure. Azure's ND GB200 v6 Virtual Machines, accelerated by NVIDIA GB200 Blackwell GPUs, have already demonstrated world-record performance of 865,000 tokens/s for inference on the industry-standard LLAMA2 70B model.
Preview Announcement: Open OnDemand Integration with Azure CycleCloud Workspace for Slurm
This exciting integration with CycleCloud enables end users to dynamically create and configure an Open OnDemand server within minutes. The system automatically deploys Open OnDemand and registers any Slurm cluster deployed by CycleCloud, streamlining the setup process.
New Linux VDI Solution: Deploy Cendio Thinlinc as a Standalone Image with Direct Connectivity
Prerequisites
- Microsoft Azure subscription: ensure you have an active Microsoft Azure subscription.
- Virtual network: ensure you have a VNet with connectivity to your corporate network (for example, over VPN).
- Resource group: choose an existing resource group or create a new one.
- SSH key (optional): generate an SSH key pair if you don't already have one.

Step 1: Deploy the Standalone Thinlinc VM Image
1. Navigate to the Microsoft Azure Marketplace: go to the Thinlinc page (Thinlinc by Cendio) and click "Get It Now" to start the deployment process. Choose between Alma Linux 9 and Ubuntu 22.04, then click "Create" in the new Microsoft Azure Portal tab.
2. Configure basic settings:
   - Subscription: select your Microsoft Azure subscription.
   - Resource Group: choose an existing resource group or create a new one.
   - VM Name: provide a name for the virtual machine (VM).
   - Region: select the Azure region where the VM will be deployed.
   - Availability Zone: select an Availability Zone as you prefer.
   - Image: ensure the proper Thinlinc image is selected.
3. Select the VM size: click "Change size" and choose a VM with sufficient resources for your workload. We highly recommend a GPU-enabled VM series (for example, the NV-series), although it is possible to do this without a GPU machine.
4. Configure the administrator account: choose the authentication type. We recommend an SSH public key, although you can use a password as well. If using an SSH public key, upload your public key or generate a new key pair, then set a username for the admin account.
5. Inbound port rules: permit only essential ports (for example, 22 for SSH if required for admin maintenance).
6. Networking tab:
   - Virtual Network: select your existing VNet with direct connectivity.
   - Subnet: choose a subnet that can route traffic to your corporate network.
   - Public IP: disable (since direct connectivity is used).
7. Review + create: review all your information, then click "Create" to deploy the VM.

Step 2: Configure the Thinlinc Server
After deploying the VM, configure the Thinlinc server to ensure it is accessible and running properly. This step verifies the installation and retrieves the private IP for the connection.
1. Navigate to your deployment in the Microsoft Azure Portal.
2. Select your deployed VM, go to the Networking tab, and note its private IP address.
3. Connect to the VM from a machine within your private network using SSH:
   ssh <username>@<PRIVATE_IP>

Step 3: Connect Using the Thinlinc Client
1. Download the Thinlinc client from Cendio's website.
2. Open the Thinlinc client and enter the VM's private IP address in the "Server" field.
3. Enter your username and SSH key (or password).
4. Click Connect to launch your Thinlinc session.

Conclusion: You have successfully deployed a standalone Cendio Thinlinc VM on Microsoft Azure with direct connectivity! This setup allows seamless remote Linux desktop access while leveraging existing VPNs and private networking. Stay tuned for more updates on our Thinlinc integration with CycleCloud Workspace for Slurm, which will support advanced AI and HPC workloads.
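If you prefer scripting the deployment over clicking through the portal, the Azure CLI sketch below is a rough equivalent under a few assumptions: the resource group and VNet already exist, and the image URN, VM size, and resource names are placeholders to replace with the values from the Marketplace listing and your own environment.

    # Placeholder names -- substitute your own resource group, network, and SSH key.
    RG=thinlinc-rg
    VNET=corp-vnet
    SUBNET=thinlinc-subnet
    IMAGE_URN="cendio:thinlinc:ubuntu-22_04:latest"   # hypothetical URN; copy the real one from the Marketplace listing

    # Marketplace images require accepting the publisher's terms once per subscription.
    az vm image terms accept --urn "$IMAGE_URN"

    # Create the VM on the existing VNet/subnet with no public IP (direct connectivity only).
    az vm create \
      --resource-group "$RG" \
      --name thinlinc-vm \
      --image "$IMAGE_URN" \
      --size Standard_NV12ads_A10_v5 \
      --admin-username azureuser \
      --ssh-key-values ~/.ssh/id_rsa.pub \
      --vnet-name "$VNET" \
      --subnet "$SUBNET" \
      --public-ip-address ""

    # The private IP to use with the Thinlinc client can then be read with:
    az vm list-ip-addresses --resource-group "$RG" --name thinlinc-vm --output table

Because the VM has no public IP, the SSH and Thinlinc connections in Steps 2 and 3 still need to originate from a machine on the connected corporate network or VPN.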
Open OnDemand with Azure CycleCloud Workspace for Slurm
Open OnDemand, developed by the Ohio Supercomputer Center (OSC), is an open-source, web-based portal designed to offer seamless access to high-performance computing (HPC) resources. The integration with Azure CycleCloud Workspace for Slurm facilitates the deployment and configuration of an Open OnDemand virtual machine using a dedicated CycleCloud cluster template and project scripts. Upon completion of setup, a local daemon will query CycleCloud to automatically register Slurm clusters. User authentication will be managed through Microsoft Entra ID, requiring the registration and configuration of an Entra application to support the OpenID Connect mechanism. CycleCloud will manage local user administration for clusters. Additionally, a Visual Studio Code application will be pre-configured on Open OnDemand, enabling users to run VSCode in web mode on login nodes or dynamically provisioned compute nodes.
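As a concrete illustration of the Entra application step, here is a minimal Azure CLI sketch for registering an OpenID Connect client; the host name and the /oidc redirect path are assumptions based on a typical mod_auth_openidc layout, not values prescribed by the CycleCloud integration.

    # Hypothetical Open OnDemand host name -- replace with your own.
    OOD_HOST=ondemand.contoso.com

    # Register an application for OpenID Connect sign-in from Open OnDemand.
    az ad app create \
      --display-name "open-ondemand" \
      --sign-in-audience AzureADMyOrg \
      --web-redirect-uris "https://${OOD_HOST}/oidc"

    # Generate a client secret to pair with the client (application) ID:
    #   az ad app credential reset --id <appId from the previous command>

The resulting client ID and secret, together with the tenant's OIDC metadata endpoint, are typically the values the Open OnDemand OIDC configuration then consumes.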
Creating a Slurm Job Submission App in Open OnDemand with Copilot Agent
High Performance Computing (HPC) environments are essential for research, engineering, and data-intensive workloads. To efficiently manage compute resources and job submissions, organizations rely on robust scheduling and orchestration tools. In this blog post, we'll explore how to use Copilot Agent in Visual Studio Code (VSCode) to build an Open OnDemand application that submits Slurm jobs in a CycleCloud Workspace for Slurm (CCWS) environment. We'll start with a brief overview of CCWS and Open OnDemand, then dive into the integration workflow.
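For context before diving into the app itself, this is roughly the artifact such an app produces: a small Slurm batch script handed to sbatch on the CCWS cluster. The partition name, resources, and file names below are placeholders for illustration, not settings defined by CCWS.

    #!/bin/bash
    # Minimal Slurm batch script; all values below are placeholders.
    #SBATCH --job-name=hello-ccws
    #SBATCH --partition=hpc           # placeholder partition name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00
    #SBATCH --output=hello-%j.out

    srun hostname

An Open OnDemand app essentially wraps this: it collects the job parameters in a web form and calls sbatch on the generated script, with CycleCloud provisioning the compute nodes behind the Slurm partition.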