Latest Discussions
AI Model started to miss a delimiter in the invoice.
Hi, we've been trying to use AI Builder to read standardized invoices in PDF documents (not scans, no images, plain text). We trained the model on both the invoice-ready template and fixed-template documents, using around 20 documents covering various scenarios: one page of products, two pages. The pre-defined invoice model correctly found the item lines in the invoices and correctly read the values with delimiters in tables. It worked like magic for some time, but recently the model started to miss the decimal delimiter (comma ","), and only in invoices with more than one item. We had two collections for training: single-item and multi-item (>1). I think it could be related to the published model moving from version 3.1 to 4.0, but I can't confirm that. The model still reads all the total values correctly for all documents; for multi-line invoices, only the item lines are affected: digits are read perfectly, but commas are missed, i.e. it reads 58,58 as 5858. The delimiter for the amount number field is set to comma; I have tried changing the field type to text, but the result is the same. Has anyone had such issues, or any experience with adjusting the collections to re-train the model? I suppose this is the only way to fix it right now. Or how can I revert the model to version 3.1 to confirm the problem appeared after the update?

Daniel_F · May 16, 2025 · Occasional Reader · 2 Views · 0 likes · 0 Comments

OWASP LLM top 10 risks
Dear all, I would like to correlate all the Azure AI services from this link - https://azure.microsoft.com/en-us/products/ - with the OWASP LLM top 10 risks. Does Microsoft have online technical documentation that provides this mapping? For instance, I would expect that for some Azure AI services it is the customer's responsibility to create the relevant security controls to mitigate the OWASP LLM top 10 risks, while other controls may be built in and applied by Microsoft. The differing AI service tiers may also make a difference; I am just wondering where I can start with this type of activity.

miksingh · May 12, 2025 · Copper Contributor · 25 Views · 0 likes · 0 Comments

Using artificial intelligence to verify document compliance
Organizations of all domains and sizes are actively exploring ways to leverage artificial intelligence and infuse it into their business. There are several business challenges in which AI technologies have already made a significant impact on organizations' bottom lines; one of these is in the domain of legal document review, processing, and compliance. Any business that regularly reviews and processes legal documents (e.g. financial services, professional services, legal firms) is inundated with both open contracts and repositories of previously executed agreements, all of which have historically been managed by humans. Though humans may bring the required domain expertise, their review of dense and lengthy legal agreements is manual, slow, and subject to human error. Efforts to modernize these operations began with documents being digitized (i.e. contracts either originating in digital form or being uploaded as PDFs post-execution). The next opportunity to innovate in the legal document domain is processing these digitized documents through AI services to extract key dates, phrases, or contract terms and to create rules that identify outliers or flag terms and conditions for further review. Humans are still involved in the document compliance process, but further down the value chain, where their ability to reason and their domain expertise are required. Whether it's a vendor agreement that must include an arbitration clause, or a loan document requiring specific disclosures, ensuring the presence of these clauses can be vital in reducing an organization's legal exposure. With AI, we can now automate much of the analysis and due diligence that takes place before a legal agreement is ever signed.
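As a concrete illustration of this kind of automated clause check, here is a minimal plain-Python sketch (not the toolkit's actual code; the clause text and document strings are invented examples) that scores whether a document is likely to contain a required clause, using TF-IDF vectors and cosine similarity:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (dict of term -> weight) per tokenized doc."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(1 + n / df[t])
                        for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented example clause and documents for illustration only.
required_clause = "any dispute shall be settled by binding arbitration"
doc_with = ("the parties agree that any dispute arising shall be "
            "settled by binding arbitration")
doc_without = "payment is due within thirty days of the invoice date"

docs = [d.lower().split() for d in (required_clause, doc_with, doc_without)]
clause_vec, with_vec, without_vec = tfidf_vectors(docs)

# doc_without shares no terms with the clause, so its score is 0.0.
print("score vs doc_with:   ", round(cosine(clause_vec, with_vec), 3))
print("score vs doc_without:", round(cosine(clause_vec, without_vec), 3))
```

A real system would tokenize more carefully and compare against embedding-based similarity as well; thresholding the score then flags documents that may be missing the clause for human review.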
From classical algorithms like cosine similarity to advanced reasoning with large language models (LLMs), Microsoft Azure offers powerful tools for building AI solutions that can compare documents and validate their contents. Attached is a link to an Azure AI Document Compliance Proof-of-Concept Toolkit. This repo helps you rapidly build AI-powered document compliance proofs of concept. It leverages multiple similarity-analysis techniques to verify that legal, financial, or other documents include all required clauses, and exposes them via a simple REST API. Key features of the Document Compliance PoC toolkit:
- Clause Verification: detect and score the presence of required clauses in any document.
- Multi-Technique Similarity: compare documents using TF-IDF, cosine similarity over embeddings, and more.
- Modular Architecture: swap in your preferred NLP models or similarity algorithms with minimal changes.
- Extensible Examples: sample configs and test documents to help you get started in minutes.
Please note: this repo is in active development, and the API and UI are not yet operational.

55 Views · 0 likes · 0 Comments

Azure Prompt Flow Output Trace Not Displaying Expected Details
We are experiencing an issue with the Prompt Flow (UI) trace visualization in Azure AI Studio. When we run a prompt flow and go to the Outputs > Trace tab, we only see a single flow node with 0 tokens and the total duration. We do not see the detailed breakdown of intermediate nodes or any expanded trace graph (as shown in the official documentation).
Expected behavior: upon clicking the flow node, the right pane should show the detailed flow overview, including duration and token usage across individual steps/nodes, and a full trace graph of the execution should be rendered.
Current behavior: only the top-level flow is visible. No token information or trace details are available for sub-nodes, even though the flow has multiple components.
Could you please advise whether this is a known issue or whether any configuration is needed to enable the full trace view?

prashanthd · May 09, 2025 · Copper Contributor · 22 Views · 0 likes · 0 Comments

Azure AI Foundry/Azure AI Service - cannot access agents
I'm struggling to get agents that were defined in AI Foundry (based on Azure AI Service) to work via the API. When I define an agent in a project in AI Foundry, I can use it in the playground via a web browser. The issue appears when I try to access it via the API (a call from Power Automate): when executing a Run on the agent, I get a message that the agent cannot be found. The issue doesn't exist when using Azure OpenAI and defining assistants; I can use those both via the API and in the browser. I guess the additional layer of management, the project, might be the issue here. I saw that with the Python SDK, the first call connects to a project and only then gets the agent. Has anyone experienced the same? Is there a way to select and run an agent via the API?

TySu · May 08, 2025 · Copper Contributor · 23 Views · 0 likes · 0 Comments

azure bot services
Hi all, I've built an AI app using Azure OpenAI and Python code. It connects to a Fabric warehouse and answers SQL questions. The app runs fine on the web through an Azure container. Now I want to integrate it with Azure Bot Services, using the Bot Framework, but the web chat is not responding to chat messages. Could someone please help with what the issue might be?

dim94 · May 06, 2025 · Copper Contributor · 66 Views · 1 like · 4 Comments

Principal Does not have Access to API/Operation
Hi all, I am trying to connect the Azure OpenAI service to the Azure AI Search service to an Azure Gen 2 Data Lake. In the Azure AI Foundry Chat Playground, I am able to add my data source, a .csv file in the data lake that has been indexed successfully. I use "System Assigned Managed Identity". The following RBAC roles have been applied:
- AI Search service has Cognitive Services OpenAI Contributor on the Azure OpenAI service
- Azure OpenAI service has Search Index Data Reader on the AI Search service
- Azure OpenAI service has Search Service Contributor on the AI Search service
- AI Search service has Storage Blob Data Reader on the storage account (data lake)
As mentioned, the data source passes validation when added, but when I try to ask a question I get the error "We couldn't connect your data. Principal does not have access to API/Operation".

fingers3775 · May 06, 2025 · Copper Contributor · 533 Views · 3 likes · 9 Comments

Model in Document Intelligence is stuck in state "running"
Hello, my custom model is stuck in the "running" state. Moreover, I am not able to delete it, since all the actions like "delete", "compose", etc. (including the model itself) are greyed out and inactive. What are the steps to unblock the model? Thanks in advance for any hints!

MarLog · May 06, 2025 · Copper Contributor · 91 Views · 0 likes · 2 Comments

Building Agentic Solutions with Autogen 0.4
Multi-agent systems arise from organized interaction between diverse agents to achieve a goal. Similar to human collaborations, agentic solutions are expected to collaborate effectively in accordance with the goal to be accomplished. A crucial aspect is adopting the appropriate design pattern for the task at hand. Let us look at the design of agentic solutions in stages.
Stage 1: Determine all the required agents and define the tools they can leverage. The tools may have access requirements, which have to be handled with appropriate security constraints. In Autogen, this is supported through multiple patterns that address different requirements. At its core, Autogen provides the ability to leverage LLMs, human inputs, tools, or a combination of these. Autogen 0.4 in particular provides a high-level API through AgentChat, with preset agents allowing for variations in agent responses. Some of the preset agents include:
1) AssistantAgent: a built-in agent that can use a language model and tools; it can also handle multimodal messages and instructions that define the agent's function.
2) UserProxyAgent: an agent that takes user input and returns it as responses.
3) CodeExecutorAgent: an agent that can execute code.
4) OpenAIAssistantAgent: an agent backed by an OpenAI Assistant, with the ability to use custom tools.
5) MultimodalWebSurfer: a multi-modal agent that can search the web and visit web pages for information.
6) FileSurfer: an agent that can search and browse local files for information.
7) VideoSurfer: an agent that can watch videos for information.
A custom agent can be used when the preset agents do not address the need.
Stage 2: Identify the optimal interaction between the team of agents. This can include a human-in-the-loop proxy agent that serves as an interface for human inputs. Autogen supports multiple interaction patterns:
1) GroupChat: a high-level design pattern for interleaved interactions.
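The core idea behind this pattern, agents taking turns over a shared, broadcast conversation until a termination check fires, can be illustrated with a small plain-Python toy. This is not the Autogen API; the agent names, canned replies, and stop word are invented for the sketch, and in a real system each stub would be an LLM call:

```python
# Toy round-robin group chat: shared context, broadcast replies, and
# max-message / text-mention style termination checks. Concept only.

def writer(context):
    # Stub agent: a real agent would call a language model with the context.
    return "draft: a limerick about clouds"

def reviewer(context):
    # Approves once a draft is present in the shared context.
    if any("draft:" in msg for _, msg in context):
        return "looks good. TERMINATE"
    return "waiting for a draft"

def run_team(agents, task, stop_word="TERMINATE", max_messages=10):
    context = [("user", task)]          # shared context, visible to all agents
    turn = 0
    while len(context) < max_messages:  # max-message style guard
        name, agent = agents[turn % len(agents)]  # round-robin speaker choice
        reply = agent(context)
        context.append((name, reply))   # broadcast: every agent sees it
        if stop_word in reply:          # text-mention style termination
            break
        turn += 1
    return context

history = run_team([("writer", writer), ("reviewer", reviewer)],
                   task="Write a limerick about clouds")
for name, msg in history:
    print(f"{name}: {msg}")
```

The 0.4 abstractions described next package exactly these pieces: the speaker-selection policy, the shared broadcast context, and pluggable termination conditions.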
In Autogen 0.4, GroupChat was further abstracted into RoundRobinGroupChat and SelectorGroupChat. This means you can choose the abstracted options RoundRobinGroupChat or SelectorGroupChat, or customize to your need with the base GroupChat in the core.
- RoundRobinGroupChat: a team configuration where all agents share the same context and respond in round-robin fashion. Each response is broadcast to all agents, providing a consistent context. Human in the loop is supported via UserProxyAgent.
- SelectorGroupChat: a team where participants take turns broadcasting messages to all other members. A generative model selects the next speaker based on the shared context, enabling dynamic, context-aware collaboration. The selector_func argument accepts a custom selector function to override the default model-based selection.
- GroupChat in core.
2) Sequential agents.
Stage 3: Determine the memory and message passing between the agents. Memory provides the context for an agent; it could be the conversation history, or RAG content pulled from a ListMemory or a custom memory store such as a vector DB. Messaging between agents uses ChatMessage. This message type allows both text and multimodal communication and includes specific types such as TextMessage and MultiModalMessage.
Stage 4: Articulate the termination condition. The following termination options are available in Autogen 0.4:
- MaxMessageTermination: stops after a specified number of messages have been produced, including both agent and task messages.
- TextMentionTermination: stops when specific text or a string is mentioned in a message (e.g., "TERMINATE").
- TokenUsageTermination: stops when a certain number of prompt or completion tokens are used. This requires the agents to report token usage in their messages.
- TimeoutTermination: stops after a specified duration in seconds.
- HandoffTermination: stops when a handoff to a specific target is requested. Handoff messages can be used to build patterns such as Swarm.
This is useful when you want to pause the run and allow the application or user to provide input when an agent hands off to them.
- SourceMatchTermination: stops after a specific agent responds.
- ExternalTermination: enables programmatic control of termination from outside the run. This is useful for UI integration (e.g., "Stop" buttons in chat interfaces).
- StopMessageTermination: stops when a StopMessage is produced by an agent.
- TextMessageTermination: stops when a TextMessage is produced by an agent.
- FunctionCallTermination: stops when a ToolCallExecutionEvent containing a FunctionExecutionResult with a matching name is produced by an agent.
Stage 5: Optionally manage the state. This is useful in web applications where stateless endpoints respond to requests and need to load the state of the application from persistent storage. The state can be saved with the save_state() call on the AssistantAgent: assistant_agent.save_state(). Finally, logging and serialization are also available for debugging and sharing. A well-designed agentic solution is crucial to being both optimal and effective in accomplishing the assigned goal.
References: Autogen - https://microsoft.github.io/autogen/stable/index.html

797 Views · 3 likes · 2 Comments

Understand the development lifecycle of a large language model (LLM) app
Before understanding how to work with prompt flow, let's explore the development lifecycle of a Large Language Model (LLM) application. The lifecycle consists of the following stages:
1. Initialization: define the use case and design the solution.
2. Experimentation: develop a flow and test it with a small dataset.
3. Evaluation and refinement: assess the flow with a larger dataset.
4. Production: deploy and monitor the flow and application.
During both evaluation and refinement and production, you might find that your solution needs to be improved. You can revert to experimentation, during which you develop your flow continuously until you're satisfied with the results. Let's explore each of these phases in more detail.
Initialization
Imagine you want to design and develop an LLM application to classify news articles. Before you start creating anything, you need to define what categories you want as output. You need to understand what a typical news article looks like, how you present the article as input to your application, and how the application generates the desired output. In other words, during initialization you:
1. Define the objective
2. Collect a sample dataset
3. Build a basic prompt
4. Design the flow
To design, develop, and test an LLM application, you need a sample dataset that serves as the input: a small, representative subset of the data you eventually expect to parse as input to your LLM application. When collecting or creating the sample dataset, you should ensure diversity in the data to cover various scenarios and edge cases. You should also remove any privacy-sensitive information from the dataset to avoid vulnerabilities.
Experimentation
You collected a sample dataset of news articles and decided which categories you want the articles to be classified into. You designed a flow that takes a news article as input and uses an LLM to classify the article.
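Such a flow and its sample dataset can be sketched in a few lines of plain Python. The `classify` stub here stands in for a real LLM call behind a prompt, and the categories, keywords, and articles are invented examples:

```python
# Minimal sketch of the experimentation phase: run a classification "flow"
# over a small sample dataset and measure how often it matches the labels.

CATEGORIES = ["sports", "politics", "technology"]

def classify(article: str) -> str:
    # Stub for an LLM call: a real flow would render a prompt with the
    # article text and parse the model's chosen category from the response.
    keywords = {"match": "sports", "election": "politics", "chip": "technology"}
    for word, category in keywords.items():
        if word in article.lower():
            return category
    return "unknown"

sample_dataset = [
    ("The home team won the match in extra time.", "sports"),
    ("The election results were announced on Sunday.", "politics"),
    ("A new chip promises faster on-device inference.", "technology"),
]

def evaluate(flow, dataset):
    """Fraction of sample articles the flow classifies correctly."""
    correct = sum(1 for article, label in dataset if flow(article) == label)
    return correct / len(dataset)

accuracy = evaluate(classify, sample_dataset)
print(f"accuracy on sample dataset: {accuracy:.0%}")  # prints 100% here
```

The same `evaluate` loop is what you would later rerun against a larger dataset during evaluation and refinement, swapping only the dataset and the real flow invocation.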
To test whether your flow generates the expected output, you run it against your sample dataset. The experimentation phase is an iterative process during which you (1) run the flow against the sample dataset and (2) evaluate the prompt's performance. If you're (3) satisfied with the result, you can move on to evaluation and refinement; if you think there's room for improvement, you (4) modify the flow by changing the prompt or the flow itself.
Evaluation and refinement
When you're satisfied with the output of the flow that classifies news articles, based on the sample dataset, you can assess the flow's performance against a larger dataset. By testing the flow on a larger dataset, you can evaluate how well the LLM application generalizes to new data. During evaluation, you can identify potential bottlenecks and areas for optimization or refinement. When you edit your flow, you should first run it against a smaller dataset before running it again against the larger one; testing with a smaller dataset allows you to respond to issues more quickly. Once your LLM application appears to be robust and reliable in handling various scenarios, you can decide to move it to production.
Production
Finally, your news article classification application is ready for production. During production, you:
1. Optimize the flow that classifies incoming articles for efficiency and effectiveness.
2. Deploy your flow to an endpoint. When you call the endpoint, the flow is triggered to run and the desired output is generated.
3. Monitor the performance of your solution by collecting usage data and end-user feedback. By understanding how the application performs, you can improve the flow whenever necessary.
Now that you understand each stage of the development lifecycle of an LLM application, you can explore the complete overview.

1.3K Views · 1 like · 4 Comments