Understanding the Fundamentals of AI Concepts for Nonprofits
Artificial Intelligence (AI) has become a cornerstone of modern technology, driving innovation across various sectors. Nonprofits, too, can harness the power of AI to enhance their operations and amplify their impact. In this blog, we'll explore fundamental AI concepts, common AI workloads, Microsoft's Responsible AI policies, and the tools and services available through Azure AI, all tailored for the nonprofit sector.

Understanding AI Workloads

AI workloads refer to the different types of tasks that AI systems can perform. Here are some common AI workloads relevant to nonprofits:

- Machine Learning: This involves training a computer model to make predictions and draw conclusions from data. Nonprofits can use machine learning to predict donor behavior, optimize fundraising strategies, and analyze program outcomes.
- Computer Vision: This capability allows software to interpret the world visually through cameras, video, and images. Applications include identifying and tracking wildlife for conservation efforts or analyzing images to assess disaster damage.
- Natural Language Processing (NLP): NLP enables computers to understand and respond to human language. Nonprofits can use NLP for sentiment analysis of social media posts, language translation for multilingual communities, and developing conversational AI like chatbots for donor engagement.
- Anomaly Detection: This involves automatically detecting errors or unusual activity. It is useful for fraud detection in financial transactions, monitoring network security, and ensuring data integrity.
- Conversational AI: This refers to the capability of a software agent to engage in conversations with humans. Examples include chatbots and virtual assistants that can answer questions, provide recommendations, and perform tasks, enhancing donor and beneficiary interactions.

Responsible AI Practices

As AI technology continues to evolve, it is crucial to ensure it is developed and used responsibly.
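To make the anomaly-detection workload above concrete, here is a minimal, illustrative Python sketch that flags unusual values (for example, daily donation totals) with a simple z-score rule. The data and threshold are invented for illustration and are not tied to any Azure service, which offers far more robust detectors.

```python
from statistics import mean, stdev

def find_anomalies(values, z_threshold=2.0):
    """Return the values whose z-score exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > z_threshold]

# Daily donation totals; the final entry is an obvious outlier.
donations = [120, 95, 130, 110, 105, 98, 125, 5000]
print(find_anomalies(donations))  # [5000]
```

In practice a nonprofit would feed real transaction or telemetry data into a managed anomaly-detection service rather than a hand-rolled rule, but the underlying idea is the same: model "normal" and surface what deviates from it.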
Microsoft's Responsible AI policies emphasize the importance of fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability in AI systems. These principles guide the development and deployment of AI solutions to ensure they benefit everyone and do not cause harm. To learn more about Microsoft Responsible AI practices, see Empowering responsible AI practices | Microsoft AI.

Azure AI Services for Nonprofits

Microsoft Azure offers a suite of AI services that enable nonprofits to build intelligent applications. Some key services include:

- Azure Machine Learning: A comprehensive platform for building, training, and deploying machine learning models. It supports a wide range of machine learning frameworks and tools, helping nonprofits analyze data and make informed decisions. To learn more or get started, see Azure Machine Learning - ML as a Service | Microsoft Azure.
- Azure AI Bot Service: A service for building conversational AI applications. It provides tools for creating, testing, and deploying chatbots that can interact with users through various channels, improving donor engagement and support services. To learn more or get started, see Azure AI Bot Service | Microsoft Azure.
- Azure Cognitive Services: A collection of APIs that enable developers to add AI capabilities to their applications. These services include vision, speech, language, and decision-making APIs, which can be used for tasks like image recognition, language translation, and sentiment analysis. To learn more about the various Cognitive Services, see Azure AI Services – Using AI for Intelligent Apps | Microsoft Azure.

Conclusion

AI has the potential to transform the nonprofit sector by enhancing efficiency, driving innovation, and providing valuable insights.
By understanding AI workloads, adhering to responsible AI practices, and leveraging Azure AI services, nonprofits can unlock the full potential of AI to better serve their communities and achieve their missions. Embrace the power of AI to take your nonprofit organization to new heights and make a greater impact. For a deeper dive into the fundamental concepts of AI, please visit the module Fundamental AI Concepts. This resource will provide you with essential insights and a solid foundation to enhance your knowledge in the ever-evolving field of artificial intelligence.

Calculating Chargebacks for Business Units/Projects Utilizing a Shared Azure OpenAI Instance
Azure OpenAI Service is at the forefront of technological innovation, offering REST API access to OpenAI's suite of revolutionary language models, including GPT-4, GPT-3.5-Turbo, and the Embeddings model series.

Enhancing Throughput for Scale

As enterprises seek to deploy OpenAI's powerful language models across various business units, they often require granular control over configuration and performance metrics. To address this need, Azure OpenAI Service is introducing dedicated throughput, a feature that provides a dedicated connection to OpenAI models with guaranteed performance levels. Throughput is quantified in tokens per second (tokens/sec), allowing organizations to precisely measure and optimize performance for both prompts and completions. The provisioned throughput model provides enhanced management and adaptability for varying workloads, guaranteeing system readiness for spikes in demand. This capability also ensures a uniform user experience and steady performance for applications that require real-time responses.

Resource Sharing and Chargeback Mechanisms

Large organizations frequently provision a single instance of Azure OpenAI Service that is shared across multiple internal departments. This shared use necessitates an efficient mechanism for allocating costs to each business unit or consumer based on the number of tokens consumed. This article delves into how chargeback is calculated for each business unit based on its token usage.

Leveraging Azure API Management Policies for Token Tracking

Azure API Management policies offer a powerful solution for monitoring and logging the token consumption of each internal application. The process can be summarized in the following steps. Sample code: refer to this GitHub repository for step-by-step instructions on building the solution outlined below: private-openai-with-apim-for-chargeback.
1. Client Application Authorizes to API Management

To ensure that only legitimate clients can call the Azure OpenAI APIs, each client must first authenticate against Azure Active Directory and then call the APIM endpoint. In this scenario, the API Management service acts on behalf of the backend API, and the calling application requests access to the API Management instance. The scope of the access token is between the calling application and the API Management gateway. In API Management, configure a policy (validate-jwt or validate-azure-ad-token) to validate the token before the gateway passes the request to the backend.

2. APIM Redirects the Request to the OpenAI Service via Private Endpoint

Upon successful verification of the token, Azure API Management (APIM) routes the request to the Azure OpenAI service to fetch the response from the completions endpoint, which also includes prompt and completion token counts.

3. Capture and Log the API Response to Event Hub

Leverage the log-to-eventhub policy to capture outgoing responses for logging or analytics purposes. To use this policy, a logger needs to be configured in API Management:

```powershell
# API Management service-specific details
$apimServiceName = "apim-hello-world"
$resourceGroupName = "myResourceGroup"

# Create logger
$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
New-AzApiManagementLogger -Context $context -LoggerId "OpenAiChargeBackLogger" -Name "ApimEventHub" -ConnectionString "Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>" -Description "Event hub logger with connection string"
```

Within the outbound policies section, pull specific data from the body of the response and send this information to the previously configured Event Hub instance.
This is not just a simple logging exercise; it is an entry point into a whole ecosystem of real-time analytics and monitoring capabilities:

```xml
<outbound>
    <choose>
        <when condition="@(context.Response.StatusCode == 200)">
            <log-to-eventhub logger-id="OpenAiChargeBackLogger">@{
                var responseBody = context.Response.Body?.As<JObject>(true);
                return new JObject(
                    new JProperty("Timestamp", DateTime.UtcNow.ToString()),
                    new JProperty("ApiOperation", responseBody["object"].ToString()),
                    new JProperty("AppKey", context.Request.Headers.GetValueOrDefault("Ocp-Apim-Subscription-Key", string.Empty)),
                    new JProperty("PromptTokens", responseBody["usage"]["prompt_tokens"].ToString()),
                    new JProperty("CompletionTokens", responseBody["usage"]["completion_tokens"].ToString()),
                    new JProperty("TotalTokens", responseBody["usage"]["total_tokens"].ToString())
                ).ToString();
            }</log-to-eventhub>
        </when>
    </choose>
    <base />
</outbound>
```

Note that the logger-id must match the LoggerId created above (OpenAiChargeBackLogger). Event Hubs serves as a powerful integration point, offering seamless integration with a wide array of Azure and Microsoft services. For example, the logged data can be directly streamed to Azure Stream Analytics for real-time analytics or to Power BI for real-time dashboards. With Azure Event Grid, the same data can also be used to trigger workflows or automate tasks based on specific conditions met in the incoming responses. Moreover, the architecture is extensible to non-Microsoft services as well. Event Hubs can interact smoothly with external platforms like Apache Spark, allowing you to perform data transformations or feed machine learning models.

4. Data Processing with Azure Functions

An Azure Function is invoked when data is sent to the Event Hub instance, allowing for bespoke data processing in line with your organization's unique requirements. For instance, this could range from dispatching the data to Azure Monitor, streaming it to Power BI dashboards, or even sending detailed consumption reports via Azure Communication Services.
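Whatever consumer you attach downstream, the processing conceptually boils down to parsing each logged event, attributing its token counts to the calling application, and pricing the totals. Here is a minimal Python sketch of that aggregation and chargeback arithmetic; the sample events, unit names, and per-1K-token price are invented for illustration and do not reflect actual Azure OpenAI pricing (the solution's actual implementation is the C# Azure Function below).

```python
import json
from collections import defaultdict

def total_tokens_per_app(event_bodies):
    """Sum token counts per app key from the logged APIM events."""
    totals = defaultdict(int)
    for body in event_bodies:
        event = json.loads(body)
        totals[event["AppKey"]] += int(event["TotalTokens"])
    return dict(totals)

def chargeback(totals, price_per_1k_tokens):
    """Price each app's total token consumption (illustrative flat rate)."""
    return {app: round(tokens / 1000 * price_per_1k_tokens, 4)
            for app, tokens in totals.items()}

# Invented sample events mirroring the JObject shape built in the policy above.
events = [
    json.dumps({"AppKey": "unit-a", "TotalTokens": "42"}),
    json.dumps({"AppKey": "unit-b", "TotalTokens": "15"}),
    json.dumps({"AppKey": "unit-a", "TotalTokens": "58"}),
]
totals = total_tokens_per_app(events)
print(totals)                     # {'unit-a': 100, 'unit-b': 15}
print(chargeback(totals, 0.06))   # {'unit-a': 0.006, 'unit-b': 0.0009}
```

A real chargeback would typically price prompt and completion tokens separately per model, but the allocation logic stays the same.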
```csharp
[Function("TokenUsageFunction")]
public async Task Run([EventHubTrigger("%EventHubName%", Connection = "EventHubConnection")] string[] openAiTokenResponse)
{
    // Event Hub messages arrive as an array
    foreach (var tokenData in openAiTokenResponse)
    {
        try
        {
            _logger.LogInformation($"Azure OpenAI Tokens Data Received: {tokenData}");
            var openAiToken = JsonSerializer.Deserialize<OpenAiToken>(tokenData);
            if (openAiToken == null)
            {
                _logger.LogError("Invalid Azure OpenAI token response received. Skipping.");
                continue;
            }
            _telemetryClient.TrackEvent("Azure OpenAI Tokens", openAiToken.ToDictionary());
        }
        catch (Exception e)
        {
            _logger.LogError(e, $"Error occurred when processing TokenData: {tokenData}");
        }
    }
}
```

In the example above, the Azure Function processes the token response data from Event Hub and sends it to Application Insights telemetry, and a basic dashboard is configured in Azure displaying the token consumption for each client application. This information can conveniently be used to compute chargeback costs. A sample query used in the dashboard that fetches the tokens consumed by a specific client:

```kusto
customEvents
| where name contains "Azure OpenAI Tokens"
| extend tokenData = parse_json(customDimensions)
| where tokenData.AppKey contains "your-client-key"
| project
    Timestamp = tokenData.Timestamp,
    Stream = tokenData.Stream,
    ApiOperation = tokenData.ApiOperation,
    PromptTokens = tokenData.PromptTokens,
    CompletionTokens = tokenData.CompletionTokens,
    TotalTokens = tokenData.TotalTokens
```

Azure OpenAI Landing Zone Reference Architecture

A crucial detail to ensure the effectiveness of this approach is to secure the Azure OpenAI service by implementing private endpoints and using managed identities for App Service to authorize access to Azure AI services. This limits access so that only the App Service can communicate with the Azure OpenAI service.
Failing to do this would render the solution ineffective, as individuals could bypass APIM/App Service and access the OpenAI service directly if they got hold of the OpenAI access key. Refer to the Azure OpenAI Landing Zone reference architecture to build a secure and scalable AI environment.

Additional Considerations

- If the client application is external, consider using an Application Gateway in front of Azure APIM.
- If "streaming" is set to true, the token count is not returned in the response. In that case, libraries like tiktoken (Python) or gpt-3-encoder (JavaScript) can be used for most GPT-3 models to programmatically calculate the token count for the user prompt and completion response. A useful guideline to remember is that in typical English text, one token is approximately equal to around 4 characters. This equates to about three-quarters of a word, meaning that 100 tokens are roughly equivalent to 75 words. (P.S. Microsoft does not endorse or guarantee any third-party libraries.)
- A subscription key or a custom header like app-key can also be used to uniquely identify the client, as the appId in the OAuth token is not very intuitive.
- Rate limiting can be implemented for incoming requests using OAuth tokens or subscription keys, adding another layer of security and resource management.
- The solution can also be extended to redirect different clients to different Azure OpenAI instances. For example, some clients utilize an Azure OpenAI instance with default quotas, whereas premium clients consume an Azure OpenAI instance with dedicated throughput.

Conclusion

Azure OpenAI Service stands as an indispensable tool for organizations seeking to harness the immense power of language models. With the provisioned throughput feature, clients can define their usage limits in throughput units and freely allocate these to the OpenAI model of their choice.
However, the financial commitment can be significant and depends on factors like the chosen model's type, size, and utilization. An effective chargeback system offers several advantages, such as heightened accountability, transparent costing, and judicious use of resources within the organization.

Empowering Nonprofits with Azure AI Vision: Enhancing Operations and Achieving Missions
Azure AI Vision is a powerful tool that enables organizations to analyze images and extract meaningful descriptions. Nonprofits can leverage AI Vision in various impactful ways to enhance their operations and achieve their missions more effectively. Here are some practical applications:

- Smart Stores: Nonprofits operating retail stores can implement "smart store" solutions using AI Vision. By analyzing images from store cameras, they can identify customers needing assistance and direct employees to help them, improving customer service and operational efficiency.
- Event Management: AI Vision can analyze images and videos from events to monitor attendance, track engagement, and gather insights on participant behavior. This data can help nonprofits improve future events and tailor their programs to better meet the needs of their audience.
- Security and Safety: AI Vision can be used for security purposes, such as monitoring premises, detecting unauthorized access, and ensuring the safety of staff and beneficiaries. This technology can also identify potential hazards and prevent accidents.

Now that we understand what Azure AI Vision is and some practical use cases, let's analyze a few images.

Creating a Project in the Azure AI Foundry Portal

Prerequisite: You will need an Azure account.

1. Navigate to Azure AI Foundry: Open a browser tab and go to the Azure AI Foundry portal.
2. Sign In: Use your account credentials to sign in.
3. Create a Project: On the home page, select "Create a project." Projects in Azure AI Foundry help organize your work.
4. Configure Project and Hub: You will see a generated project name. Depending on your previous hub creations, you will either see new Azure resources to be created or a drop-down list of existing hubs. Select "Create new hub," name your hub, and proceed.
5. Customize Location: Select a location for your Azure AI services resource (East US, France Central, Korea Central, West Europe, or West US) and create the project.
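The walkthrough here uses the portal, but the same captioning, dense captioning, and tagging capabilities are also exposed over REST for use in your own applications. As a rough sketch, a request can be assembled as below; the endpoint, key, and api-version values are placeholders/assumptions, so verify them against the current Image Analysis documentation before relying on them.

```python
from urllib.parse import urlencode

def build_analyze_request(endpoint, features=("caption", "denseCaptions", "tags")):
    """Assemble URL and headers for an Image Analysis call; the raw
    image bytes would be sent as the request body."""
    query = urlencode({"api-version": "2023-10-01",  # assumed version
                       "features": ",".join(features)})
    url = f"{endpoint}/computervision/imageanalysis:analyze?{query}"
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder
        "Content-Type": "application/octet-stream",
    }
    return url, headers

url, headers = build_analyze_request("https://<your-resource>.cognitiveservices.azure.com")
print(url)
```

Sending the request with the image bytes returns JSON containing the caption, dense captions, and tags with confidence scores, mirroring what the portal tiles display.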
Resources Created

Take note of the resources that are created:

- Azure AI services
- Azure AI hub
- Azure AI project
- Storage account
- Key vault
- Resource group

After the resources are created, you will be brought to your project's Overview page.

Using AI Services

1. Access AI Services: On your project's Overview page, select "AI Services" from the left-hand menu.
2. Select Vision + Document: On the AI Services page, choose the Vision + Document tile to explore Azure AI Vision capabilities.

Generating Captions for Images

1. Image Captioning: On the Vision + Document page, select "Image" under "View all other vision capabilities," then choose the Image captioning tile.
2. Upload Image: Upload an image by dragging it to the Drag and drop files here box, or by browsing to it on your file system. For demo purposes I used a provided sample file.
3. Observe Captions: The Caption functionality generates a single, human-readable sentence describing the image's content.

Dense Captioning

1. Dense Captions: Return to the Vision + Document page, select the Image tab, and choose the Dense captioning tile.
2. Upload Image: Upload an image by dragging it to the Drag and drop files here box, or by browsing to it on your file system. For demo purposes I used a provided sample file.
3. Multiple Captions: Dense Captions provide multiple human-readable captions for an image, each describing essential objects detected in the picture.

Tagging Images

1. Extract Tags: On the Vision + Document page, select the Image tab and choose the Common tag extraction tile.
2. Upload Image: Upload an image by dragging it to the Drag and drop files here box, or by browsing to it on your file system. For demo purposes I used a provided sample file.
3. Review Tags: Extracted tags include objects and actions, with confidence scores indicating the likelihood of accurate descriptions.

Conclusion

Azure AI Vision is a transformative tool that nonprofits can leverage to enhance their operations and achieve their missions more effectively.
By implementing smart store solutions, nonprofits can improve customer service and operational efficiency. Event management can be optimized by analyzing images and videos to monitor attendance and track engagement. Security and safety measures can be strengthened by using AI Vision to monitor premises and detect potential hazards. As we have explored the practical applications of Azure AI Vision, it's clear that this technology offers significant benefits for nonprofits. By integrating AI Vision into their operations, nonprofits can make data-driven decisions, improve their services, and ultimately better serve their communities. To learn more about Computer Vision and analyzing images, please visit: Fundamentals of Computer Vision - Training | Microsoft Learn.

Empower Your Nonprofit with Azure AI: Building a Smart Knowledge Base
In today's digital age, providing quick and accurate answers to customer queries is essential for any business. Azure AI Language Studio offers powerful question answering capabilities that enable organizations to create and train a knowledge base of questions and answers. Creating a knowledge base with Azure AI Language Studio can be incredibly beneficial for a nonprofit organization. Here are some compelling reasons why you might want to consider it:

- Enhancing Customer Support: A knowledge base allows you to provide quick and accurate answers to common questions from your supporters, volunteers, and beneficiaries. This can significantly improve customer support by reducing response times and ensuring that inquiries are handled efficiently.
- Streamlining Operations: By automating the process of answering frequently asked questions, you can free up your staff to focus on more strategic tasks. This can lead to increased productivity and better use of resources, which is crucial for nonprofits operating with limited budgets.
- Improving Accessibility: A well-structured knowledge base can make information more accessible to everyone, including individuals with disabilities. Azure AI Language Studio's natural language processing capabilities ensure that users can find the information they need easily, regardless of how they phrase their questions.
- Enhancing Engagement: Providing timely and accurate information can enhance engagement with your supporters and beneficiaries. When people feel that their questions are answered promptly, they are more likely to stay involved and support your cause.
- Data-Driven Insights: Azure AI Language Studio can provide valuable insights into the types of questions being asked and the information that is most sought after. This data can help you understand the needs and concerns of your community better and tailor your programs and services accordingly.
- Cost-Effective Solution: Creating a knowledge base with Azure AI Language Studio is a cost-effective solution for managing inquiries. It reduces the need for extensive human intervention and can be scaled easily as your organization grows.

Now that we understand the importance, let's build our first knowledge base!

Creating a Language Resource

To get started, you need to create a Language resource in the Azure portal:

1. Open the Azure portal at https://portal.azure.com/?azure-portal=true and sign in with your Microsoft account.
2. Click "Create a resource."
3. Search for "Language service."
4. Select "Create a language service plan" and configure the following settings:
   - Default features: Keep the default features.
   - Custom features: Select custom question answering.
5. On the Create Language page, specify the following settings:
   - Subscription: Your Azure subscription.
   - Resource group: Select an existing resource group or create a new one.
   - Region: Select a region (e.g., East US 2).
   - Name: A unique name for your language resource.
   - Pricing tier: Select a pricing tier.
   - Azure search region: Any available location.
   - Azure search pricing tier: Select your Azure search pricing tier.
6. Review and create the resource, and wait for the deployment to complete.

Creating a New Project

Next, you'll create a new project in Language Studio:

1. Open the Language Studio portal at Language Studio - Microsoft Azure and sign in with your Microsoft account.
2. If prompted, select your Azure directory, subscription, and Language resource.
3. In the Create new menu, select "Custom question answering."
4. On the Choose language setting for resource page, select "I want to select the language when I create a project in this resource" and click Next.
5. Enter the following details on the Enter basic information page:
   - Language resource: Choose your Language resource.
   - Azure search resource: Choose your Azure search resource.
   - Name: Choose a name.
   - Description: A simple knowledge base.
   - Source language: English.
   - Default answer when no answer is returned: No answer found.
6. Review and create the project.

Adding Content to the Knowledge Base

Now, you'll add content to your knowledge base from an appropriate document. For demo purposes, I will add the Q&A from our public NTA Landing Page.

1. On the Manage sources page, select "Add source" and choose "URLs."
2. In the Add URLs box, enter the following details:
   - URL name: Enter a URL name; for demo purposes I used NTA.
   - URL: Enter the URL; for demo purposes I used the NTA Landing Page.
   - Classify file structure: Auto-detect
3. Select "Add all" to add the URL to your knowledge base.

Editing the Knowledge Base

You can customize your knowledge base by adding custom question-and-answer pairs:

1. Expand the left panel and select "Edit knowledge base."
2. Select "+" to add a new question pair.
3. In the Add a new question answer pair dialog box, enter your question and its answer, then select "Done."
4. Save your knowledge base.

Training and Testing the Knowledge Base

Once your knowledge base is set up, you can train and test it:

1. Select "Test" at the top of the Question answer pairs pane.
2. In the test pane, enter one of your questions to see the response. Try other questions as well.

Deploying the Knowledge Base

Finally, you can deploy your knowledge base as a client application:

1. In the left panel, select "Deploy knowledge base."
2. Select "Deploy" at the top of the page and confirm the deployment.
3. The next step will be creating your bot: https://aka.ms/qna-create-bot

By following these steps, you can create a robust knowledge base using Azure AI Language Studio, providing quick and accurate answers to your customers' questions. This not only enhances customer satisfaction but also streamlines your support processes.
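Once deployed, client applications can also query the knowledge base directly over REST rather than only through a bot. Here is a hedged sketch of how such a request is assembled; the route, api-version, and deployment name are assumptions based on the custom question answering API, so verify them against the current documentation.

```python
import json
from urllib.parse import urlencode

def build_qna_query(endpoint, project_name, question, deployment="production"):
    """Assemble the URL and JSON body for a custom question answering query."""
    query = urlencode({
        "projectName": project_name,
        "deploymentName": deployment,
        "api-version": "2021-10-01",  # assumed version
    })
    url = f"{endpoint}/language/:query-knowledgebases?{query}"
    body = json.dumps({"question": question, "top": 3})
    return url, body

url, body = build_qna_query(
    "https://<your-language-resource>.cognitiveservices.azure.com",  # placeholder
    "NTA", "How do I volunteer?")
print(url)
```

POSTing the body to that URL (with your resource key in the Ocp-Apim-Subscription-Key header) returns the top matching answers with confidence scores, the same results you see in the Language Studio test pane.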
To learn more about language services, please visit: Fundamentals of question answering with the Language Service - Training | Microsoft Learn.

Harnessing the Power of Speech AI with Azure AI Foundry
In the ever-evolving world of artificial intelligence, speech technology stands out as a transformative tool that bridges the gap between humans and machines. Microsoft's Azure AI Foundry portal offers a comprehensive suite of AI services, including powerful speech capabilities. Let's dive into how you can explore and leverage these features to enhance your projects.

What is Azure AI Foundry?

Azure AI Foundry is a platform designed to help developers and businesses create intelligent applications using a variety of AI services. It provides a centralized hub where you can access, manage, and deploy AI models, including those for speech recognition and synthesis.

Key Features of Azure AI Speech

- Speech-to-Text: Azure AI Speech can transcribe spoken language into written text with high accuracy. This feature is invaluable for applications like meeting transcription, voice commands, and automated customer service.
- Text-to-Speech: Convert written text into natural-sounding speech. This capability is perfect for creating voice-enabled applications, audiobooks, and accessibility tools.
- Real-Time Transcription: The Speech Playground in Azure AI Foundry allows you to test live transcription capabilities on your own audio files without writing any code. This feature is ideal for quickly evaluating the performance of speech-to-text models.
- Expressive Voices: Browse a variety of humanlike voices to find the perfect match for your project. These voices can add a personal touch to your applications, making interactions more engaging and relatable.

How to Get Started with Azure AI Speech

1. Navigate to Azure AI Foundry: Open a browser tab and go to the Azure AI Foundry portal.
2. Sign In: Use your account credentials to sign in.
3. Create a Project: On the home page, select "Create a project." Projects in Azure AI Foundry help organize your work.
4. Configure Project and Hub: You will see a generated project name.
Depending on your previous hub creations, you will either see new Azure resources to be created or a drop-down list of existing hubs. Select "Create new hub," name your hub, and proceed.

Resources Created

Take note of the resources that are created:

- Azure AI services
- Azure AI hub
- Azure AI project
- Storage account
- Key vault
- Resource group

After the resources are created, you will be brought to your project's Overview page.

Using AI Services

1. Access AI Services: On your project's Overview page, select "AI Services" from the left-hand menu.
2. Select Speech: On the AI Services page, choose the Speech tile to explore the Speech capabilities.
3. Test the real-time transcription feature: Upload your audio files and see how Azure AI Speech transcribes them into text. This hands-on experience helps you understand the capabilities and performance of the service.
4. Transcription: Upload or record your audio files to see how Azure AI Speech transcribes them into text.
5. Deploy and Integrate: Once you're satisfied with the performance, deploy your speech models to production. Integrate them into your applications to start leveraging the power of Azure AI Speech in real-world scenarios.

Benefits of Using Azure AI Speech

- High Accuracy: Azure AI Speech offers state-of-the-art accuracy in speech recognition and synthesis, ensuring reliable performance for your applications.
- Scalability: Easily scale your speech applications to handle varying workloads, from small projects to enterprise-level deployments.
- Customization: Fine-tune models to meet the specific needs of your business, improving the relevance and effectiveness of speech interactions.
- Ease of Use: The intuitive interface and comprehensive documentation make it easy to get started, even for those new to AI and speech technologies.

Conclusion

Exploring speech capabilities in the Azure AI Foundry portal opens up a world of possibilities for creating intelligent, voice-enabled applications.
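As a postscript to the walkthrough: when you are ready to move beyond the portal, a short-audio transcription request against the Speech REST API is typically assembled as sketched below. The region, key, and host-name pattern are placeholders/assumptions, so verify them against the current Speech service documentation.

```python
def build_transcription_request(region, language="en-US"):
    """Assemble URL and headers for a short-audio speech-to-text call;
    the WAV audio bytes would be sent as the request body."""
    url = (f"https://{region}.stt.speech.microsoft.com"  # assumed host pattern
           f"/speech/recognition/conversation/cognitiveservices/v1?language={language}")
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-speech-key>",  # placeholder
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    }
    return url, headers

url, headers = build_transcription_request("eastus")
print(url)
```

POSTing the audio bytes to that URL returns JSON containing the recognized text, the same result the Speech Playground shows interactively.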
Whether you're looking to transcribe meetings, develop voice assistants, or enhance accessibility, Azure AI Speech provides the tools you need to succeed. Start your journey today and see how speech technology can transform your projects. To learn more about Azure AI Speech, please visit: Fundamentals of Azure AI Speech - Training | Microsoft Learn.

Unlock New AI and Cloud Potential with .NET 9 & Azure: Faster, Smarter, and Built for the Future
.NET 9, now available to developers, marks a significant milestone in the evolution of the .NET platform, pushing the boundaries of performance, cloud-native development, and AI integration. This release, shaped by contributions from over 9,000 community members worldwide, introduces thousands of improvements that set the stage for the future of application development. With seamless integration with Azure and a focus on cloud-native development and AI capabilities, .NET 9 empowers developers to build scalable, intelligent applications with unprecedented ease.

Expanding Azure PaaS Support for .NET 9

With the release of .NET 9, a comprehensive range of Azure Platform as a Service (PaaS) offerings now fully support the platform's new capabilities, including the latest .NET SDK for any Azure developer. This extensive support allows developers to build, deploy, and scale .NET 9 applications with optimal performance and adaptability on Azure. Additionally, developers can access a wealth of architecture references and sample solutions to guide them in creating high-performance .NET 9 applications on Azure's powerful cloud services:

- Azure App Service: Run, manage, and scale .NET 9 web applications efficiently. Check out this blog to learn more about what's new in Azure App Service.
- Azure Functions: Leverage serverless computing to build event-driven .NET 9 applications with improved runtime capabilities.
- Azure Container Apps: Deploy microservices and containerized .NET 9 workloads with integrated observability.
- Azure Kubernetes Service (AKS): Run .NET 9 applications in a managed Kubernetes environment with expanded ARM64 support.
- Azure AI Services and Azure OpenAI Services: Integrate advanced AI and OpenAI capabilities directly into your .NET 9 applications.
- Azure API Management, Azure Logic Apps, Azure Cognitive Services, and Azure SignalR Service: Ensure seamless integration and scaling for .NET 9 solutions.
These services provide developers with a robust platform to build high-performance, scalable, and cloud-native applications while leveraging Azure's optimized environment for .NET.

Streamlined Cloud-Native Development with .NET Aspire

.NET Aspire is a game-changer for cloud-native applications, enabling developers to build distributed, production-ready solutions efficiently. Available in preview with .NET 9, Aspire streamlines app development, with cloud efficiency and observability at its core. The latest updates in Aspire include secure defaults, Azure Functions support, and enhanced container management. Key capabilities include:

- Optimized Azure Integrations: Aspire works seamlessly with Azure, enabling fast deployments, automated scaling, and consistent management of cloud-native applications.
- Easier Deployments to Azure Container Apps: Designed for containerized environments, .NET Aspire integrates with Azure Container Apps (ACA) to simplify the deployment process. Using the Azure Developer CLI (azd), developers can quickly provision and deploy .NET Aspire projects to ACA, with built-in support for Redis caching, application logging, and scalability.
- Built-In Observability: A real-time dashboard provides insights into logs, distributed traces, and metrics, enabling local and production monitoring with Azure Monitor.

With these capabilities, .NET Aspire allows developers to deploy microservices and containerized applications effortlessly on ACA, streamlining the path from development to production in a fully managed, serverless environment.

Integrating AI into .NET: A Seamless Experience

In our ongoing effort to empower developers, we've made integrating AI into .NET applications simpler than ever. Our strategic partnerships, including collaborations with OpenAI, LlamaIndex, and Qdrant, have enriched the AI ecosystem and strengthened .NET's capabilities.
This year alone, usage of Azure OpenAI Service has surged to nearly a billion API calls per month, illustrating the growing impact of AI-powered .NET applications.

Real-World AI Solutions with .NET

.NET has been pivotal in driving AI innovation. From internal teams like Microsoft Copilot creating AI experiences with .NET Aspire to tools like GitHub Copilot, developed with .NET to enhance productivity in Visual Studio and VS Code, the platform showcases AI at its best. KPMG Clara is a prime example, developed to enhance audit quality and efficiency for 95,000 auditors worldwide. By leveraging .NET and scaling securely on Azure, KPMG implemented robust AI features aligned with strict industry standards, underscoring .NET and Azure as the backbone for high-performing, scalable AI solutions.

Performance Enhancements in .NET 9: Raising the Bar for Azure Workloads

.NET 9 introduces substantial performance upgrades, with over 7,500 merged pull requests focused on speed and efficiency, ensuring .NET 9 applications run optimally on Azure. These improvements reduce cloud costs and deliver a high-performance experience across Windows, Linux, and macOS. To see how significant these performance gains can be for cloud services, consider what past .NET upgrades achieved for Microsoft's high-scale internal services:

- Bing achieved a major reduction in startup times, enhanced efficiency, and decreased latency across its high-performance search workflows.
- Microsoft Teams improved efficiency by 50%, reduced latency by 30–45%, and achieved up to 100% gains in CPU utilization for key services, resulting in faster user interactions.
- Microsoft Copilot and other AI-powered applications benefited from optimized runtime performance, enabling scalable, high-quality experiences for users.

Upgrading to the latest .NET version offers similar benefits for cloud apps, optimizing both performance and cost-efficiency.
For more information on updating your applications, check out the .NET Upgrade Assistant. For additional details on ASP.NET Core, .NET MAUI, NuGet, and more enhancements across the .NET platform, check out the full Announcing .NET 9 blog post.

Conclusion: Your Path to the Future with .NET 9 and Azure

.NET 9 isn't just an upgrade—it's a leap forward, combining cutting-edge AI integration, cloud-native development, and unparalleled performance. Paired with Azure's scalability, these advancements provide a trusted, high-performance foundation for modern applications. Get started by downloading .NET 9 and exploring its features. Leverage .NET Aspire for streamlined cloud-native development, deploy scalable apps with Azure, and embrace new productivity enhancements to build for the future. Explore the future of cloud-native and AI development with .NET 9 and Azure—your toolkit for creating the next generation of intelligent applications.

Generative AI with JavaScript FREE course
JavaScript devs, now's your chance to tap into the potential of Generative AI! Whether you're just curious or ready to level up your apps, our new video series is packed with everything you need to start building AI-powered applications.

OpenAI at Scale: Maximizing API Management through Effective Service Utilization
Harnessing Azure OpenAI at Scale: Effective API Management with Circuit Breaker, Retry, and Load Balance

Unlock the full potential of Azure OpenAI by leveraging the advanced capabilities of Azure API Management. This guide explores how to effectively utilize Circuit Breaker, Retry, and Load Balance strategies to optimize backends and ensure seamless service utilization. Learn best practices for integrating OpenAI services, enhancing performance, and achieving scalability through robust API management policies.

Evaluating Generative AI Models with Azure Machine Learning
LLM evaluation assesses the performance of a large language model on a set of tasks, such as text classification, sentiment analysis, question answering, and text generation. The goal is to measure the model's ability to understand and generate human-like language.
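At its simplest, this kind of evaluation compares model outputs against reference answers and aggregates a score. The C# sketch below computes exact-match accuracy over a toy question-answering set; the data is invented for illustration, and real Azure Machine Learning evaluation pipelines use far richer metrics (token-level F1, groundedness, coherence, and so on).

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy labelled set: (question, reference answer, model output).
var examples = new List<(string Question, string Reference, string Output)>
{
    ("Capital of France?", "Paris",   "Paris"),
    ("2 + 2?",             "4",       "4"),
    ("Largest planet?",    "Jupiter", "Saturn"),
};

// Exact match: case-insensitive string equality after trimming.
int correct = examples.Count(e =>
    string.Equals(e.Reference.Trim(), e.Output.Trim(),
                  StringComparison.OrdinalIgnoreCase));

double accuracy = (double)correct / examples.Count;
Console.WriteLine($"Exact match: {correct}/{examples.Count} = {accuracy:F2}");
```

Here two of the three outputs match their references, so the sketch reports an accuracy of two-thirds; swapping in a stricter or fuzzier comparison changes what "correct" means, which is exactly the judgment call evaluation frameworks formalize.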