Part 1 - Develop a VS Code Extension for Your Capstone Project
API Guardian - My Capstone Project

As software and APIs evolve, developers encounter significant difficulties in maintaining and updating API endpoints. Breaking changes can lead to system instability, while outdated or unclear documentation makes maintenance less efficient. These challenges are compounded by the time-consuming nature of updating dependencies and the tendency to prioritize new features over maintenance tasks. The absence of effective tools and processes to tackle these issues reduces overall productivity and developer efficiency.

To address this, API Guardian was created as a Visual Studio Code extension that identifies API endpoints in a project and checks their functionality before deployment. It was developed to help developers save time otherwise spent fixing issues caused by breaking or non-breaking changes, and to ease the maintenance difficulties caused by unclear or outdated documentation.

Features and Capabilities

The extension has three main features:

Feature 1. Scan or skip specified files. Developers decide whether the extension will scan or skip files in the project. Press "Enter" to scan/skip all files. Type a file name (e.g., main.py) and press "Enter" to scan/skip a single file. Type file names separated by a delimiter (e.g., main.py | pythonFile.py) and press "Enter" to scan/skip multiple files.

Feature 2. Custom hover messages when developers mouse over identified APIs. The hover message varies with the status of the API. If the API returns a success status, the hover message shows only the completed API and its status. If an error occurs, the hover message includes this additional information: (1) API Name, (2) Official API Link, (3) Error Message, (4) Title of Recommended Fix and (5) Link to the Recommended Fix.

Feature 3. Excel report with details of identified APIs. After all the identified APIs have been tested, an Excel report is exported with the information developers need to easily identify the APIs in the project.

What Technologies and Products Does It Involve?

Building a Visual Studio Code extension and publishing it to the Visual Studio Marketplace involves a mix of technologies and tools. The project was initiated with the npm package generator-code, which scaffolds a JavaScript project for extension development. All of the extension's logic is developed and managed within the "extension.js" file generated during setup. Once the extension is ready for deployment, it is packaged with "vsce" to generate a ".vsix" file, which is then used for deployment to the Visual Studio Code Marketplace. Deployment requires creating a publisher account and using vsce to upload and manage the extension's versions, updates, and metadata. As part of this process, you need to create a Personal Access Token (PAT) in Azure DevOps. The token verifies your identity and authenticates the publishing tool, allowing you to securely upload your extension to the Visual Studio Marketplace. The PAT grants the permissions needed for tasks such as version management, publishing new releases, and updating extension metadata.

What Did I Learn?

Throughout this journey, I learned not just about the technical stack but also about the value of a careful project setup and a secure publishing process. While the technical steps can be challenging, they are incredibly rewarding, and I'm excited to dive deeper moving forward. I'm looking forward to exploring how the extension can be further improved and enhanced. If you're interested in learning more about how API Guardian was built, keep an eye out for my next post!
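For the packaging step described above, the extension's package.json must carry a few publishing fields before vsce will produce a ".vsix". A minimal sketch, with the publisher ID and version as placeholder values (the real manifest generated by generator-code contains more):

```json
{
  "name": "api-guardian",
  "displayName": "API Guardian",
  "publisher": "your-publisher-id",
  "version": "0.0.1",
  "engines": { "vscode": "^1.85.0" },
  "main": "./extension.js",
  "categories": ["Other"]
}
```

From there, `vsce package` writes the ".vsix" file, and `vsce publish` (authenticated with the Azure DevOps PAT) uploads it to the Marketplace.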
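Feature 1's scan/skip input (plain Enter for all files, a single name, or names joined with the "|" delimiter) can be sketched as a small parsing helper. This is an illustrative sketch, not the actual extension source; the function name and return shape are assumptions.

```javascript
// Hypothetical helper mirroring Feature 1's input handling:
//   ""                        -> scan/skip every file
//   "main.py"                 -> a single file
//   "main.py | pythonFile.py" -> several files, "|"-delimited
function parseFileSelection(input) {
  const trimmed = input.trim();
  if (trimmed === "") {
    return { mode: "all", files: [] }; // plain Enter: apply to all files
  }
  const files = trimmed
    .split("|")                        // delimiter between file names
    .map((name) => name.trim())        // tolerate spaces around the "|"
    .filter((name) => name.length > 0);
  return { mode: "list", files };
}
```

The extension would then match these names against the workspace's files before deciding which ones to analyze.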
API Guardian: https://marketplace.visualstudio.com/items?itemName=APIGuardian-vsc.api

About the Authors
Main Author - Ms Joy Cheng Yee Shing, BSc (Hon) Computing Science
Academic Supervisor - Dr Peter Yau, Microsoft MVP

Understanding Azure OpenAI Service Quotas and Limits: A Beginner-Friendly Guide
Azure OpenAI Service allows developers, researchers, and students to integrate powerful AI models like GPT-4, GPT-3.5, and DALL·E into their applications. But with great power comes great responsibility, and limits. Before you dive into building your next AI-powered solution, it's crucial to understand how quotas and limits work in the Azure OpenAI ecosystem. This guide is designed to help students and beginners easily understand quotas and limits and how to manage them effectively.

What Are Quotas and Limits?

Think of Azure's quotas as your "AI data pack": they define how much of the service you can use. Limits, meanwhile, are hard boundaries set by Azure to ensure fair use and system stability.

Quota - the maximum amount of a resource (e.g., tokens, requests) allocated to your Azure subscription.
Limit - the technical cap imposed by Azure on specific resources (e.g., number of files, deployments).

Key Metrics: TPM & RPM

Tokens Per Minute (TPM): how many tokens you can use per minute across all your requests in a region. A token is a chunk of text; the word "Hello" is 1 token, but "Understanding" might be 2 tokens. Each model has its own default TPM. Example: GPT-4 might allow 240,000 tokens per minute, and you can split this quota across multiple deployments.

Requests Per Minute (RPM): how many API requests you can make every minute. For instance, GPT-3.5-turbo might allow 350 RPM, while DALL·E image generation models might allow 6 RPM.

Deployment, File, and Training Limits

Here are some standard limits imposed on your OpenAI resource:

Resource Type | Limit
Standard model deployments | 32
Fine-tuned model deployments | 5
Training jobs | 100 total per resource (1 active at a time)
Fine-tuning files | 50 files (total size: 1 GB)
Max prompt tokens per request | Varies by model (e.g., 4,096 tokens for GPT-3.5)

How to View and Manage Your Quota

Step by step:
Go to the Azure Portal.
Navigate to your Azure OpenAI resource.
Click on "Usage + quotas" in the left-hand menu.
You will see TPM, RPM, and your current usage status.

To request more quota: in the same "Usage + quotas" panel, click "Request quota increase" and fill in the form: select the region, choose the model family (e.g., GPT-4, GPT-3.5), enter the desired TPM and RPM values, then submit and wait for Azure to review and approve.

What Is Dynamic Quota?

Sometimes Azure gives you extra quota based on demand and availability. Dynamic quota is not guaranteed and may increase or decrease; it is useful for short-term spikes but should not be relied on for production apps. Example: during weekends, your GPT-3.5 TPM may temporarily increase if there is less traffic in your region.

Best Practices for Students

Monitor regularly: use the Azure Portal to keep an eye on your usage.
Batch requests: combine multiple tasks in one API call to save tokens.
Start small: begin with GPT-3.5 before requesting GPT-4 access.
Plan ahead: if you're preparing a demo or a project, request quota in advance.
Handle limits gracefully: your code should manage 429 Too Many Requests errors.

Quick Resources

Azure OpenAI Quotas and Limits
How to Request Quota in Azure

Join the Conversation on Azure AI Foundry Discussions! Have ideas, questions, or insights about AI? Don't keep them to yourself! Share your thoughts, engage with experts, and connect with a community that's shaping the future of artificial intelligence. 🧠✨ 👉 Click here to join the discussion!

Redeeming Azure for Student from your GitHub Student Pack when you do not have an Academic Email
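The quota guide above recommends handling 429 Too Many Requests errors gracefully, which in practice usually means retrying with exponential backoff. A minimal sketch, assuming the caller supplies its own request function returning an object with a status code; the names here are illustrative, not taken from any SDK:

```javascript
// Retry a rate-limited call with exponential backoff.
// sendRequest: any async function returning { status, ... }, so the real
// HTTP call (fetch, the openai package, ...) can be plugged in.
async function callWithRetry(sendRequest, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await sendRequest();
    if (response.status !== 429) {
      return response; // success, or an error that retrying won't fix
    }
    if (attempt === maxRetries) {
      throw new Error("Rate limited: retries exhausted");
    }
    // Exponential backoff: wait 1s, 2s, 4s, ... between attempts.
    const delay = baseDelayMs * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```

Production code would also honor the Retry-After header when the service sends one, rather than relying on a fixed backoff schedule.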
GitHub Student Developer Pack: learn to ship software like a pro. There's no substitute for hands-on experience, but for most students, real-world tools can be cost-prohibitive. That's why we created the GitHub Student Developer Pack with some of our partners and friends. Sign up for the Student Developer Pack.

How to Optimize your Codespaces: Pro-tips for managing quotas
Now that GitHub Codespaces is free for anyone, you might be surprised how fast you can hit the free quota. Here are four things you can do to make the most of the 90 hours you get every month (180 hours if you are a student).

Global AI Bootcamp
Are you ready to embark on an exhilarating journey into the world of Artificial Intelligence? The Global AI Bootcamp invites tech students and AI developers to join a vibrant global community of innovators, data scientists, and AI experts. This annual event is your gateway to cutting-edge advancements, where you can learn, share, and collaborate on the latest AI technologies. From Saturday, March 1st to Friday, March 7th, we have an action-packed schedule featuring 29 bootcamps across 19 countries. With the rapid evolution of AI shaping various industries, there's no better time to elevate your skills and make a meaningful impact in this dynamic field. Attendees can expect hands-on workshops, insightful sessions, and numerous networking opportunities designed for all skill levels. Don't miss this chance to be part of the future of AI!

Why You Should Attend

With 135 bootcamps happening in 44 countries this year, the Global AI Bootcamp is the perfect opportunity to immerse yourself in the AI community. Attendees can expect:

Hands-on Workshops: engage with practical sessions to build and deploy AI models.
Expert Talks: learn from industry leaders about the latest trends and technologies.
Networking Opportunities: connect with peers, mentors, and potential collaborators.
Career Growth: discover new career paths and enhance your professional skills.

In-Person Bootcamps

Experience the energy and collaboration of our in-person events.
Mark your calendars for these dates:

Germany, Hamburg | Saturday, March 1st | Event Link
India, Hyderabad | Saturday, March 1st | Event Link
Nigeria, Jos | Saturday, March 1st | Event Link
Canada, Toronto | Saturday, March 1st | Event Link
United States, Houston, TX | Saturday, March 1st | Event Link
India, Ahmedabad | Saturday, March 1st | Event Link
Spain, Málaga | Saturday, March 1st | Event Link
India, Chennai | Saturday, March 1st | Event Link
United Kingdom, London | Thursday, March 6th | Event Link
United States, Milwaukee | Friday, March 7th | Event Link
United States, Saint Louis | Friday, March 7th | Event Link
Canada, Quebec City | Friday, March 7th | Event Link
Poland, Kraków | Friday, March 7th | Event Link

Virtual Bootcamps

Can't join us in person? No problem! Participate in our virtual events from the comfort of your home:

Angola, Luanda | Saturday, March 1st | Event Link
Ghana, Accra | Saturday, March 1st | Event Link
Netherlands, Amsterdam | Tuesday, March 4th | Event Link
Colombia, Bogotá - RockAI | Thursday, March 6th | Event Link
Bangladesh, Dhaka | Friday, March 7th | Event Link
Colombia, Bogotá | Friday, March 7th | Event Link

Hybrid Bootcamps

Enjoy the flexibility of hybrid events offering both in-person and virtual participation:

India, Palava | Saturday, March 1st | Event Link
Spain, Madrid | Saturday, March 1st | Event Link
India, Mumbai | Saturday, March 1st | Event Link
India, Mumbai - Dear Azure AI | Saturday, March 1st | Event Link
Bangladesh, Chattogram | Sunday, March 2nd | Event Link
Pakistan, Lahore | Monday, March 3rd | Event Link
Costa Rica, San José | Tuesday, March 4th | Event Link
Hong Kong, Hong Kong | Wednesday, March 5th | Event Link
Malaysia, Kuala Lumpur | Friday, March 7th | Event Link
Bolivia, La Paz | Friday, March 7th | Event Link

Artificial Intelligence is transforming industries across the globe. There's no better time than now to dive into AI and be at the forefront of innovation.
Whether you're looking to start a career in AI or enhance your existing skills, the Global AI Bootcamp has something for everyone. Don't miss out on this incredible opportunity to learn, connect, and grow. Visit our website for more information and register for a bootcamp near you! 👉 Explore All Bootcamps

Let's shape the future of AI together!

Azure AI Model Inference API
The Azure AI Model Inference API provides a unified interface for developers to interact with various foundational models deployed in Azure AI Studio. It allows developers to generate predictions from multiple models without changing their underlying code. By providing a consistent set of capabilities, the API simplifies integrating and switching between different models, enabling seamless model selection based on task requirements.
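As a sketch of what "switching models without changing the underlying code" can look like, the helper below builds one chat-completions request whose only model-specific part is the model name. The endpoint path, api-version value, and api-key header here are assumptions for illustration; consult the Azure AI Model Inference reference for the exact values your deployment requires.

```javascript
// Build a single request shape that works across models; only the model
// name changes between calls. Endpoint, api-version, and auth header are
// placeholder assumptions, not guaranteed service values.
function buildChatRequest(endpoint, apiKey, modelName, messages) {
  return {
    url: `${endpoint}/chat/completions?api-version=2024-05-01-preview`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": apiKey, // or a Microsoft Entra ID bearer token
      },
      body: JSON.stringify({
        model: modelName, // swap models here; the rest of the code is unchanged
        messages,
      }),
    },
  };
}
```

Sending the request is then a one-liner such as `fetch(req.url, req.options)`, and moving from one deployed model to another means changing only the `modelName` argument.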