azure ai
191 Topics

Building your own copilot – yes, but how? (Part 1 of 2)
Are you interested in building your own AI co-pilot? Check out the first of a two-part blog post from Carlotta Castelluccio that covers the basics of creating a virtual assistant that can help you with tasks like scheduling, email management, and more. Learn about the tools and technologies involved, including Microsoft's Bot Framework and Language Understanding Intelligent Service (LUIS). Whether you're a software developer or just curious about the possibilities of AI, this post is a great introduction to building your own co-pilot.

*Updated on 1/06/2021* Quick Start: Setting Up Your COVID-19 Healthcare Bot
The Healthcare Bot Service is an AI-powered, extensible, secure and compliant healthcare experience. Microsoft now has a specific template pre-built for COVID-19. It takes inbound requests, asks about the patient’s symptoms, and assists in getting people access to trusted and relevant healthcare services and information based on the CDC recommendation.

Bring all your workloads to the edge with AKS Edge Essentials. Now Generally Available!
Announcing the General Availability (GA) of Azure Kubernetes Service (AKS) Edge Essentials! AKS Edge Essentials has been developed as a managed Kubernetes for the operational edge on small-footprint devices, to orchestrate workloads and drive business optimization.

Integrate Custom Azure AI Agents with Copilot Studio and M365 Copilot
Integrating Custom Agents with Copilot Studio and M365 Copilot

In today's fast-paced digital world, integrating custom agents with Copilot Studio and M365 Copilot can significantly enhance your company's digital presence and extend your Copilot platform to your enterprise applications and data. This blog will guide you through the steps of bringing a custom Azure AI Agent Service agent, hosted in an Azure Function App, into a Copilot Studio solution and publishing it to M365 and Teams applications.

When might this be necessary: Integrating custom agents with Copilot Studio and M365 Copilot makes sense when you want to extend customization to automate tasks, streamline processes, and provide a better experience for your end users. This integration is particularly useful for organizations looking to streamline their AI platform, extend out-of-the-box functionality, and leverage existing enterprise data and applications to optimize their operations. Custom agents built on Azure allow you to achieve greater customization and flexibility than using Copilot Studio agents alone.

What you will need:
- Azure AI Foundry
- Azure OpenAI Service
- Copilot Studio developer license
- Microsoft Teams enterprise license
- M365 Copilot license

Steps to integrate custom agents:

Create a project in Azure AI Foundry: Navigate to Azure AI Foundry and create a project. Select 'Agents' from the 'Build and Customize' menu pane on the left side of the screen and click the blue button to create a new agent.

Customize your agent: Your agent will automatically be assigned an Agent ID. Give your agent a name, assign the model it will use, and customize your agent with instructions.

Add your knowledge source: You can connect to Azure AI Search, load files directly to your agent, link to Microsoft Fabric, or connect to third-party sources like Tripadvisor.
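The agent-creation steps above can also be scripted. The sketch below is a hypothetical example using the azure-ai-projects preview SDK (client and method names may differ across preview versions; the connection string, agent name, and instructions are placeholders, not values from this post) — the portal flow described above remains the authoritative path.

```python
def agent_definition(name, model, instructions):
    """Bundle the fields the Foundry UI asks for (name, model, instructions)
    into keyword arguments for agent creation."""
    return {"name": name, "model": model, "instructions": instructions}

def create_foundry_agent(conn_str, definition):
    """Hypothetical sketch: create the agent via the azure-ai-projects
    preview SDK. Requires: pip install azure-ai-projects azure-identity."""
    from azure.identity import DefaultAzureCredential
    from azure.ai.projects import AIProjectClient

    project = AIProjectClient.from_connection_string(
        conn_str=conn_str, credential=DefaultAzureCredential())
    # The service assigns the Agent ID automatically, as in the portal.
    return project.agents.create_agent(**definition)

# Placeholder definition mirroring what you would enter in the portal.
definition = agent_definition(
    "my-custom-agent", "gpt-4o", "Answer questions about our travel products.")
```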
In our example, we are only testing the Copilot integration steps of the AI agent, so we did not build out additional grounding knowledge or function calling here.

Test your agent: Once you have created your agent, test it in the playground. If you are happy with it, you are ready to call the agent from an Azure Function.

Create and publish an Azure Function: Use the sample function code from the GitHub repository to call the Azure AI project and agent, then publish your Azure Function to make it available for integration: azure-ai-foundry-agent/function_app.py at main · azure-data-ai-hub/azure-ai-foundry-agent

Connect your AI agent to your Function: Update the "AIProjectConnString" value to include your project connection string, found on the project overview page in AI Foundry.

Role-based access controls: You need to grant the Function App access to the Azure OpenAI service (see Role-based access control for Azure OpenAI - Azure AI services | Microsoft Learn):
- Enable managed identity on the Function App.
- Grant the "Cognitive Services OpenAI Contributor" role to the Function App's system-assigned managed identity on the Azure OpenAI resource.
- Grant the "Azure AI Developer" role to the Function App's system-assigned managed identity on the Azure AI project resource in AI Foundry.

Build a flow in Power Platform: Before you begin, make sure you are working in the same environment you will use to create your Copilot Studio agent. To get started, navigate to the Power Platform (https://make.powerapps.com) to build a flow that connects your Copilot Studio solution to your Azure Function App. When creating a new flow, select 'Build an instant cloud flow' and trigger the flow using 'Run a flow from Copilot'. Add an HTTP action that calls the Function's URL and passes the end user's message prompt. The output of your function is plain text, so you can pass the response from your Azure AI agent directly to your Copilot Studio solution.
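The Function's job is simply to forward the user's prompt to the agent and hand back plain text for the flow's HTTP action. Below is a minimal, hypothetical sketch of that shape, assuming the azure-ai-projects preview SDK (method and parameter names may vary by version, and message-content handling is simplified); it is not the repository's function_app.py.

```python
def latest_assistant_text(messages):
    """Pick the newest assistant reply out of a list of {'role', 'content'}
    dicts; this is the plain text the flow's HTTP action passes onward."""
    for msg in reversed(messages):
        if msg.get("role") == "assistant":
            return msg.get("content", "")
    return ""

def run_agent(conn_str, agent_id, prompt):
    """Hypothetical Function body: post the prompt to a fresh thread, run the
    agent, and return its reply as plain text (content handling simplified)."""
    from azure.identity import DefaultAzureCredential
    from azure.ai.projects import AIProjectClient

    project = AIProjectClient.from_connection_string(
        conn_str=conn_str, credential=DefaultAzureCredential())
    thread = project.agents.create_thread()
    project.agents.create_message(
        thread_id=thread.id, role="user", content=prompt)
    project.agents.create_and_process_run(
        thread_id=thread.id, agent_id=agent_id)
    history = [{"role": m.role, "content": m.content}
               for m in project.agents.list_messages(thread_id=thread.id).data]
    return latest_assistant_text(history)
```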
Create your Copilot Studio agent: Navigate to Microsoft Copilot Studio and select 'Agents', then 'New Agent'. Make sure you are in the same environment you used to create your cloud flow, then select the 'Create' button at the top of the screen. From the top menu, navigate to 'Topics' and 'System', and open the 'Conversation boosting' topic. When you first open it, you will see a template of connected nodes; delete all but the initial 'Trigger' node. Now rebuild the conversation boosting topic to call the flow you built in the previous step: select 'Add an Action' and then select the option for an existing Power Automate flow. Pass the response from your custom agent to the end user and end the current topic. When the action menu pops up, you should see the option to run the flow you created; mine does not have a very unique name, but you can see my flow 'Run a flow from Copilot' as a Basic action menu item. If you do not see your cloud flow here, add the flow to the default solution in the environment: go to Solutions, select the All pill, open Default Solution, and add the cloud flow you created to the solution. Then go back to Copilot Studio and refresh, and the flow will be listed there. Now complete building out the conversation boosting topic.

Make your agent available in M365 Copilot: Navigate to the 'Channels' menu and select 'Teams + Microsoft 365'. Be sure to select the box to 'Make agent available in M365 Copilot', then save and re-publish your Copilot agent. It may take up to 24 hours for the agent to appear in the M365 Teams agents list. Once it has loaded, select the 'Get Agents' option from the side menu of Copilot and pin your Copilot Studio agent to your featured agent list. Now you can chat with your custom Azure AI agent directly from M365 Copilot!
Conclusion: By following these steps, you can successfully integrate custom Azure AI agents with Copilot Studio and M365 Copilot, enhancing the utility of your existing platform and improving operational efficiency. This integration allows you to automate tasks, streamline processes, and provide a better experience for your end users. Give it a try! Curious how to bring custom models from your AI Foundry to your Copilot Studio solutions? Check out this blog.

New Hugging Face Models on Azure AI: Phi-3 Variants from the Community
The Azure AI Model Catalog offers over 1.78K models, including foundation models from core partners and nearly 1.6K open-source models from the Hugging Face community. Read this post to learn about the latest models added to the Hugging Face Collection this month!

The Future of AI: Computer Use Agents Have Arrived
Discover the groundbreaking advancements in AI with Computer Use Agents (CUAs). In this blog, Marco Casalaina shares how to use the Responses API from Azure OpenAI Service, showcasing how CUAs can launch apps, navigate websites, and reason through tasks. Learn how CUAs utilize multimodal models for computer vision and AI frameworks to enhance automation. Explore the differences between CUAs and traditional Robotic Process Automation (RPA), and understand how CUAs can complement RPA systems. Dive into the future of automation and see how CUAs are set to revolutionize the way we interact with technology.

Azure Open AI Industry Use cases
Azure OpenAI industry use cases:

Content generation
- Call center analytics: automatically generate responses to customer inquiries.

Code generation
- Aircraft company converting natural language to SQL for aircraft telemetry data.
- Consulting service using Azure OpenAI Service to convert natural language to queries over proprietary data models.

Semantic search
- Financial services firm using Azure OpenAI Service to improve search capabilities and the conversational quality of a customer's bot experience.
- Insurance companies extracting information from volumes of unstructured data to automate claim handling processes.

Summarization
- International insurance company using Azure OpenAI Service to summarize call center customer support conversation logs.
- Global bank using Azure OpenAI Service to summarize financial reporting and analyst articles.
- Government agency using Azure OpenAI Service to extract and summarize key information from its extensive library of rural development reports.
- Financial services firm using Azure OpenAI Service to summarize financial reporting for peer risk analysis and customer conversation summarization.
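As an illustration of the natural-language-to-SQL pattern above, here is a minimal, hypothetical sketch using the openai Python package against an Azure OpenAI chat deployment. The endpoint, API key, API version, deployment name, and schema are placeholders, and the prompt wording is an assumption, not a prescribed template.

```python
def nl2sql_messages(schema, question):
    """Constrain the model to a known schema and ask for SQL only."""
    system = ("You translate natural-language questions into SQL. "
              "Use only this schema:\n" + schema +
              "\nReturn a single SQL query and nothing else.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": question}]

def translate_to_sql(question, schema, deployment="gpt-4o"):
    """Hypothetical call; requires: pip install openai, plus real
    Azure OpenAI credentials in place of the placeholders below."""
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<your-key>",
        api_version="2024-06-01",
    )
    resp = client.chat.completions.create(
        model=deployment, messages=nl2sql_messages(schema, question))
    return resp.choices[0].message.content
```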
Code model use cases:
- Natural language to code
- Natural language to SQL
- Code to natural language
- Code documentation
- Refactoring

Text model use cases:
- Reason over structured and unstructured data (classification, sentiment, entity extraction, search): product feedback sentiment; customer and employee feedback classification; claims and risk analyses; support emails and call transcripts; social media trends
- Writing assistance: marketing copy and email taglines; long-format text; paragraphs from bullets
- Summarization: call center call transcripts; subject matter expert documents; competitive analysis; peer analysis; technical reports; product and service feedback; social media trends
- Conversational AI: smart assists for call centers; tech support chatbots; virtual assistants

Use cases that combine multiple model capabilities:
- Contact centers: classification (route mails to the appropriate team); sentiment (prioritize angry customers); entity extraction and search (analyze liability and risk); mail and call transcript summarization; customer response email generation
- Rapid-response marketing campaigns: classification, sentiment, summarization, and content generation

More details in the Microsoft documentation: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#capabilities

GPT-4 Turbo with Vision:
- Chat and conversation interaction: Users can interact with a conversational agent that responds with information drawn from trusted documentation such as internal company documentation or tech support documentation. Conversations must be limited to answering scoped questions. Available to internal, authenticated external, and unauthenticated external users.
- Chatbot and conversational agent creation: Users can create conversational agents that respond with information drawn from trusted documents such as internal company documentation or tech support documents.
  For instance, diagrams, charts, and other relevant images from technical documentation can enhance comprehension and provide more accurate responses. Conversations must be limited to answering scoped questions. Limited to internal users only.
- Code generation or transformation scenarios: Converting one programming language to another, or enabling users to generate code using natural language or visual input. For example, users can take a photo of handwritten pseudocode or diagrams illustrating a coding concept and use the application to generate code from it. Limited to internal and authenticated external users.
- Reason over structured and unstructured data: Users can analyze inputs using classification, sentiment analysis of text, or entity extraction, and can provide an image alongside a text query for analysis. Limited to internal and authenticated external users.
- Summarization: Users can submit content to be summarized for pre-defined topics built into the application and cannot use the application as an open-ended summarizer. Examples include summarization of internal company documentation, call center transcripts, technical reports, and product reviews. Limited to internal, authenticated external, and unauthenticated external users.
- Writing assistance on specific topics: Users can create new content or rewrite content submitted by the user as a writing aid for business content or pre-defined topics. Users can only rewrite or create content for specific business purposes or pre-defined topics and cannot use the application as a general content creation tool for all topics. Examples of business content include proposals and reports. May not be selected to generate journalistic content (for journalistic use, select the Journalistic content use case). Limited to internal users and authenticated external users.
- Search: Users can search for content in trusted source documents and files such as internal company documentation. The application does not generate results ungrounded in trusted source documentation. Limited to internal users only.
- Image and video tagging: Users can identify and tag visual elements, including objects, living beings, scenery, and actions within an image or recorded video. Users may not attempt to use the service to identify individuals. Limited to internal users and authenticated external users.
- Image and video captioning: Users can generate descriptive natural language captions for visuals. Beyond simple descriptions, the application can identify and provide textual insights about specific subjects or landmarks within images and recorded video; if shown an image of the Eiffel Tower, the system might offer a concise description or highlight intriguing facts about the monument. Generated descriptions of people may not be used to identify individuals. Limited to internal users and authenticated external users.
- Object detection: For use to identify the positions of individual or multiple objects in an image by providing their specific coordinates. For instance, in an image of scattered apples, the application can identify and indicate the location of each apple, giving users spatial insights about objects captured in images. This use case is not yet available for videos. Limited to internal users and authenticated external users.
- Visual question answering: Users can ask questions about an image or video and receive contextually relevant responses. For instance, when shown a picture of a bird, one might ask, "What type of bird is this?" and receive a response like, "It's a European robin." The application can identify and interpret context within images and videos to answer queries; for example, if presented with an image of a crowded marketplace, users can ask, "How many people are wearing hats?" or "What fruit is the vendor selling?" and the application can provide the answers. The system may not be used to answer identifying questions about people. Limited to internal users and authenticated external users.
- Brand and landmark recognition: The application can be used to identify commercial brands and popular landmarks in images or videos from a preset database of thousands of global logos and landmarks. Limited to internal users and authenticated external users.

GPT-3.5, GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and/or Embeddings models:
- Chat and conversation interaction: Users can interact with a conversational agent that responds with responses drawn from trusted documents such as internal company documentation or tech support documentation; conversations must be limited to answering scoped questions. Available to internal, authenticated external, and unauthenticated external users.
- Chat and conversation creation: Users can create a conversational agent that responds with responses drawn from trusted documents such as internal company documentation or tech support documentation; conversations must be limited to answering scoped questions. Limited to internal users only.
- Code generation or transformation scenarios: For example, converting one programming language to another, generating docstrings for functions, or converting natural language to SQL. Limited to internal and authenticated external users.
- Journalistic content: For use to create new journalistic content or to rewrite journalistic content submitted by the user as a writing aid for pre-defined topics. Users cannot use the application as a general content creation tool for all topics, and it may not be used to generate content for political campaigns. Limited to internal users.
- Question-answering: Users can ask questions and receive answers from trusted source documents such as internal company documentation. The application does not generate answers ungrounded in trusted source documentation. Available to internal, authenticated external, and unauthenticated external users.
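In the GPT-4 Turbo with Vision scenarios described above (visual question answering, captioning, tagging), a request pairs an image with a text question as a list of content parts in a single user message. A minimal sketch of that payload shape, assuming the OpenAI-compatible chat format used by Azure OpenAI vision deployments; the question and image URL are illustrative:

```python
def vqa_message(question, image_url):
    """One user turn combining a text question with an image reference,
    in the multimodal content-parts chat format."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# e.g. messages=[vqa_message("How many people are wearing hats?",
#                            "https://example.com/market.jpg")]
```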
- Reason over structured and unstructured data: Users can analyze inputs using classification, sentiment analysis of text, or entity extraction. Examples include analyzing product feedback sentiment, analyzing support calls and transcripts, and refining text-based search with embeddings. Limited to internal and authenticated external users.
- Search: Users can search trusted source documents such as internal company documentation. The application does not generate results ungrounded in trusted source documentation. Available to internal, authenticated external, and unauthenticated external users.
- Summarization: Users can submit content to be summarized for pre-defined topics built into the application and cannot use the application as an open-ended summarizer. Examples include summarization of internal company documentation, call center transcripts, technical reports, and product reviews. Limited to internal, authenticated external, and unauthenticated external users.
- Writing assistance on specific topics: Users can create new content or rewrite content submitted by the user as a writing aid for business content or pre-defined topics. Users can only rewrite or create content for specific business purposes or pre-defined topics and cannot use the application as a general content creation tool for all topics. Examples of business content include proposals and reports. May not be selected to generate journalistic content (for journalistic use, select the Journalistic content use case). Limited to internal users and authenticated external users.
- Data generation for fine-tuning: Users can use a model in Azure OpenAI to generate data which is used solely to fine-tune (i) another Azure OpenAI model, using the fine-tuning capabilities of Azure OpenAI, and/or (ii) another Azure AI custom model, using the fine-tuning capabilities of the Azure AI service.
  Generating data and fine-tuning models is limited to internal users only; the fine-tuned model may only be used for inferencing in the applicable Azure AI service and, for the Azure OpenAI service, only for the customer's permitted use case(s) under this form.

DALL-E 2 and/or DALL-E 3:
- Art and design: For use to generate imagery for artistic purposes only, for designs, artistic inspiration, mood boards, or design layouts. Limited to internal and authenticated external users.
- Communication: For use to create imagery for business-related communication, documentation, essays, bulletins, blog posts, social media, or memos. This use case may not be selected to generate images for political campaigns or journalistic content (for journalistic use, see the Journalistic content use case). Limited to internal and authenticated external users.
- Education: For use to create imagery for enhanced or interactive learning materials, either for use in educational institutions or for professional training. Limited to internal users and authenticated external users.
- Entertainment: For use to create imagery to enhance entertainment content such as video games, movies, TV, videos, recorded music, podcasts, audio books, or augmented or virtual reality. This use case may not be selected to generate images for political campaigns or journalistic content (for journalistic use, see the Journalistic content use case). Limited to internal and authenticated external users.
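The data-generation-for-fine-tuning use case described above produces training examples that Azure OpenAI fine-tuning consumes as chat-formatted JSONL, one {"messages": [...]} object per line. A small sketch of shaping generated (prompt, completion) pairs into that file format; the system prompt and example pair are illustrative placeholders:

```python
import json

def to_chat_jsonl(pairs, system_prompt):
    """Serialize (user_prompt, assistant_completion) pairs into the
    chat-format JSONL used for fine-tuning: one JSON object per line."""
    lines = []
    for user, assistant in pairs:
        record = {"messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Illustrative training pair generated by a larger model.
jsonl = to_chat_jsonl(
    [("Summarize: rates rose 2%.", "Rates increased by two percent.")],
    "You are a concise financial summarizer.")
```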