Recent Discussions
Why Your AI Bots Need a Proxy
Let’s cut to the chase: AI bots are hungry. They devour data, automate tasks, and work around the clock. But without a proxy, they’re like elephants tap-dancing in a library—loud, obvious, and destined for disaster. Websites use firewalls, CAPTCHAs, and IP bans to block bots. If your AI sends 100 requests from the same IP in 10 seconds, you’ll get flagged faster than a TikTok trend. A proxy acts as a disguise, masking your bot’s real IP and rotating it to mimic human behavior. Think of it as giving your bot a wardrobe of digital disguises. Need to scrape product prices? Check ad placements? Train machine learning models? A proxy keeps your AI incognito so it can work smarter, not harder.
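A minimal sketch of the rotation idea, assuming a hypothetical pool of proxy endpoints (the URLs below are placeholders, not real servers):

import itertools
import requests

# Hypothetical proxy pool -- replace with real endpoints from your proxy provider.
PROXIES = [
    "http://user:pass@proxy-1.example.com:8000",
    "http://user:pass@proxy-2.example.com:8000",
    "http://user:pass@proxy-3.example.com:8000",
]
proxy_cycle = itertools.cycle(PROXIES)

def fetch(url: str) -> str:
    """Fetch a URL, routing each request through the next proxy in the pool."""
    proxy = next(proxy_cycle)
    response = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    response.raise_for_status()
    return response.text

# Example: spread 100 requests across the pool instead of hammering from one IP.
for page in range(100):
    html = fetch("https://example.com/products?page=" + str(page))

In practice you would also randomize request timing and headers, but the core trick is simply that consecutive requests leave from different addresses.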
My introduction to Deep Learning
I had to work a lot to get an answer to each question and build a basic understanding of Deep Learning. As I worked through the material and reduced the math and ideas to a simple enough level to grasp something essential, I felt the need to write this text: https://github.com/tambetvali/LaegnaAIBasics It's a fairly complete intro to DL basics, and it keeps the mathematics simple enough that you can form your own ideas rather than stare at cryptic code. I will add to it as I manage to simplify more things over time (first absorbing the ideas long enough to be able to write about them with some clarity).
Using Neural Network to Learn Profitable Trading in the FOREX Markets
I am using Neural Networks (NN) to teach them how to recognize profitable trading opportunities in the Foreign Exchange (FOREX) markets, using 10 currencies simultaneously. I am using 3rd-order Cubic Splines as input to give the NNs a sense of how the critical variables change over time. I am using free FOREX historical trading data to train the NNs how to trade profitably in the future. I don't just feed the trading levels of the FOREX currency pairs as input to the NNs. Instead, I use a variation of the computed DXY Index for all 10 currencies in order to isolate the value change of each of the individual currencies, using Cubic Splines to detail how those values change over various time periods. The end result is Neural Networks that recognize which currencies to Buy and which ones to Sell at the most profitable times. If anyone is interested in the details, please reach out and I will provide more details.
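This is not the author's actual pipeline, but a minimal sketch of the spline-feature idea: fit a cubic spline to one currency's recent value series and feed the interpolated levels plus first and second derivatives to the network. The window length and feature choices here are assumptions.

import numpy as np
from scipy.interpolate import CubicSpline

def spline_features(values: np.ndarray, n_samples: int = 8) -> np.ndarray:
    """Fit a cubic spline to one currency's value series and return
    resampled levels plus first and second derivatives as NN inputs."""
    t = np.arange(len(values))
    spline = CubicSpline(t, values)
    grid = np.linspace(0, len(values) - 1, n_samples)
    return np.concatenate([spline(grid), spline(grid, 1), spline(grid, 2)])

# Example: a 24-step value history for one currency (synthetic data).
history = np.cumsum(np.random.randn(24) * 0.001) + 1.0
x = spline_features(history)   # 24 features: 8 levels, 8 slopes, 8 curvatures

Stacking one such feature vector per currency would give the network a compact picture of how all 10 values are changing over the window.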
Understand the development lifecycle of a large language model (LLM) app
Before understanding how to work with prompt flow, let's explore the development lifecycle of a Large Language Model (LLM) application. The lifecycle consists of the following stages:
- Initialization: Define the use case and design the solution.
- Experimentation: Develop a flow and test with a small dataset.
- Evaluation and refinement: Assess the flow with a larger dataset.
- Production: Deploy and monitor the flow and application.
During both evaluation and refinement, and production, you might find that your solution needs to be improved. You can revert back to experimentation, during which you develop your flow continuously until you're satisfied with the results. Let's explore each of these phases in more detail.
Initialization
Imagine you want to design and develop an LLM application to classify news articles. Before you start creating anything, you need to define what categories you want as output. You need to understand what a typical news article looks like, how you present the article as input to your application, and how the application generates the desired output. In other words, during initialization you:
- Define the objective
- Collect a sample dataset
- Build a basic prompt
- Design the flow
To design, develop, and test an LLM application, you need a sample dataset that serves as the input. A sample dataset is a small, representative subset of the data you eventually expect to parse as input to your LLM application. When collecting or creating the sample dataset, you should ensure diversity in the data to cover various scenarios and edge cases. You should also remove any privacy-sensitive information from the dataset to avoid any vulnerabilities.
Experimentation
You collected a sample dataset of news articles, and decided on which categories you want the articles to be classified into. You designed a flow that takes a news article as input and uses an LLM to classify the article. To test whether your flow generates the expected output, you run it against your sample dataset. The experimentation phase is an iterative process during which you (1) run the flow against a sample dataset. You then (2) evaluate the prompt's performance. If you're (3) satisfied with the result, you can move on to evaluation and refinement. If you think there's room for improvement, you can (4) modify the flow by changing the prompt or the flow itself.
Evaluation and refinement
When you're satisfied with the output of the flow that classifies news articles, based on the sample dataset, you can assess the flow's performance against a larger dataset. By testing the flow on a larger dataset, you can evaluate how well the LLM application generalizes to new data. During evaluation, you can identify potential bottlenecks or areas for optimization or refinement. When you edit your flow, you should first run it against a smaller dataset before running it again against a larger dataset. Testing your flow with a smaller dataset allows you to more quickly respond to any issues. Once your LLM application appears to be robust and reliable in handling various scenarios, you can decide to move the LLM application to production.
Production
Finally, your news article classification application is ready for production. During production, you:
- Optimize the flow that classifies incoming articles for efficiency and effectiveness.
- Deploy your flow to an endpoint. When you call the endpoint, the flow is triggered to run and the desired output is generated.
- Monitor the performance of your solution by collecting usage data and end-user feedback. By understanding how the application performs, you can improve the flow whenever necessary.
Explore the complete development lifecycle
Now that you understand each stage of the development lifecycle of an LLM application, you can explore the complete overview.
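The experimentation loop described above can be illustrated outside prompt flow as well. The following is a plain-Python sketch, assuming an Azure OpenAI chat deployment (the deployment name and the toy sample dataset are placeholders), that runs a basic classification prompt against a small dataset and prints mismatches for review:

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

CATEGORIES = ["politics", "sports", "technology", "business"]
sample_dataset = [
    {"article": "The central bank raised interest rates by 25 basis points...", "expected": "business"},
    {"article": "The striker scored twice in the final minutes of the match...", "expected": "sports"},
]

def classify(article: str) -> str:
    """Basic prompt: ask the model to pick exactly one category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name
        messages=[
            {"role": "system", "content": "Classify the news article into one of: " + ", ".join(CATEGORIES) + ". Reply with the category only."},
            {"role": "user", "content": article},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# Experimentation: run the prompt against the sample dataset and inspect mismatches.
for row in sample_dataset:
    predicted = classify(row["article"])
    print(predicted, "| expected:", row["expected"])

Evaluation and refinement is then the same loop run over a larger, more diverse dataset, with the prompt adjusted whenever the mismatches reveal a weakness.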
Principal Does not have Access to API/Operation
Hi all, I am trying to connect the Azure OpenAI service to the Azure AI Search service and an Azure Gen 2 Data Lake. In the Azure AI Foundry studio Chat Playground, I am able to add my data source, which is a .csv file in the data lake that has been indexed successfully. I use "System Assigned Managed Identity". The following RBAC has been applied:
- The AI Search service has Cognitive Services OpenAI Contributor on the Azure OpenAI service
- The Azure OpenAI service has Search Index Data Reader on the AI Search service
- The Azure OpenAI service has Search Service Contributor on the AI Search service
- The AI Search service has Storage Blob Data Reader on the storage account (Data Lake)
As mentioned, adding the data source passes validation, but when I try to ask a question, I get the error "We couldn't connect your data: Principal does not have access to API/Operation".
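For reference, the same "on your data" setup can be exercised outside the playground. The sketch below is an assumption of how the call looks with the Python SDK when the search connection uses the system-assigned managed identity instead of an API key; the endpoint, index, and deployment names are placeholders, and the role assignments listed above still have to be in place for it to succeed.

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # your Azure OpenAI deployment name
    messages=[{"role": "user", "content": "What does the CSV say about Q3 revenue?"}],
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://<your-search-service>.search.windows.net",
                "index_name": "<your-index>",
                # Managed-identity authentication instead of an api_key block:
                "authentication": {"type": "system_assigned_managed_identity"},
            },
        }]
    },
)
print(response.choices[0].message.content)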
Understanding Azure OpenAI Service Provisioned Reservations
Hello Team, We are building an Azure OpenAI-based fine-tuned model using GPT-4o-mini for the long run. We want to understand the costing, and we came up with the following questions about the Azure OpenAI Service Provisioned Reservations plan and PTU units:
- Is there any token quota limit for a provisioned fine-tuned model deployment?
- How many fine-tuned models with provisioned capacity can be deployed under the plan?
- How will the pricing be affected if we deploy multiple fine-tuned models?
Model Deployment - GPT-4o-mini fine-tuned
Region - North Central US
We are doing this for our enterprise customer; kindly help us resolve this.
We have a use case to extract the information from various types of documents like Excel, PDF, and Word and convert it into structured information. The data exists in different formats. We started building this use case with AI Builder, and we hit the roadblock and are now exploring ways using the Co-pilot studio. It would be great if someone could point us in the right direction. What should be the right technology stack that we should consider for this use case? Thank you for the pointer.497Views2likes9CommentsHow to Build AI Agents in 10 Lessons
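One common pattern (an assumption here, not what the poster has already built) is to extract raw text from the documents first, for example with Azure AI Document Intelligence, and then have an LLM map that text onto a fixed JSON shape. A minimal sketch with Azure OpenAI, where the schema and the sample invoice text are made up for illustration:

import json
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

SCHEMA_HINT = '{"invoice_number": str, "vendor": str, "total": float, "currency": str}'

def to_structured(raw_text: str) -> dict:
    """Map raw document text (from PDF/Word/Excel extraction) to a fixed JSON shape."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Extract the fields " + SCHEMA_HINT + " from the document text. Return JSON only; use null for missing fields."},
            {"role": "user", "content": raw_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

record = to_structured("INVOICE #4711 from Contoso Ltd. Total due: 1,250.00 EUR ...")
print(record)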
How to Build AI Agents in 10 Lessons
Microsoft has released an excellent learning resource for anyone looking to dive into the world of AI agents: "AI Agents for Beginners". This comprehensive course is available free on GitHub. It is designed to teach the fundamentals of building AI agents, even if you are just starting out.
What You'll Learn
The course is structured into 10 lessons, covering a wide range of essential topics, including:
- Agentic Frameworks: Understand the core structures and components used to build AI agents.
- Design Patterns: Learn proven approaches for designing effective and efficient AI agents.
- Retrieval Augmented Generation (RAG): Enhance AI agents by incorporating external knowledge.
- Building Trustworthy AI Agents: Discover techniques for creating AI agents that are reliable and safe.
- AI Agents in Production: Get insights into deploying and managing AI agents in real-world applications.
Hands-On Experience
The course includes practical code examples that utilize:
- Azure AI Foundry
- GitHub Models
These examples help you learn how to interact with Language Models and use AI Agent frameworks and services from Microsoft, such as:
- Azure AI Agent Service
- Semantic Kernel Agent Framework
- AutoGen - A framework for building AI agents and applications
Getting Started
To get started, make sure you have the proper set-up. Here are the 10 lessons:
1. Intro to AI Agents and Agent Use Cases
2. Exploring AI Agent Frameworks
3. Understanding AI Agentic Design Principles
4. Tool Use Design Pattern
5. Agentic RAG
6. Building Trustworthy AI Agents
7. Planning Design
8. Multi-Agent Design Patterns
9. Metacognition in AI Agents
10. AI Agents in Production
Multi-Language Support
To make learning accessible to a global audience, the course offers multi-language support.
Get Started Today!
If you are eager to learn about AI agents, this course is an excellent starting point. You can find the complete course materials on GitHub at AI Agents for Beginners.
How to Build your own AI Text-to-Image Generator
Build your own AI Text-to-Image Generator in Visual Studio Code. Do you want to build your own AI Text-to-Image Generator in less than 15 minutes? Join me as I walk you through the process of building one using Stable Diffusion within Visual Studio Code!
Prerequisites
Before you start, ensure you have the following:
- Python 3.9 or higher
- Hugging Face account
Step 1: Set Up the Development Environment
In your project directory, create a file named requirements.txt and add the following dependencies to the file:

certifi==2022.9.14
charset-normalizer==2.1.1
colorama==0.4.5
customtkinter==4.6.1
darkdetect==0.7.1
diffusers==0.3.0
filelock==3.8.0
huggingface-hub==0.9.1
idna==3.4
importlib-metadata==4.12.0
numpy==1.23.3
packaging==21.3
Pillow==9.2.0
pyparsing==3.0.9
PyYAML==6.0
regex==2022.9.13
requests==2.28.1
tk==0.1.0
tokenizers==0.12.1
torch==1.12.1+cu113
torchaudio==0.12.1+cu113
torchvision==0.13.1+cu113
tqdm==4.64.1
transformers==4.22.1
typing_extensions==4.3.0
urllib3==1.26.12
zipp==3.8.1

To install the listed dependencies in the requirements.txt file, run the following command in your terminal:

pip install -r requirements.txt

Step 2: Configure Authentication
In your project directory, create a file named authtoken.py and add the following code to the file:

auth_token = "ACCESS TOKEN FROM HUGGING FACE"

To obtain an access token from Hugging Face, follow these steps:
1. Log in to your Hugging Face account.
2. Go to your profile settings and select Access Tokens.
3. Click on Create new token.
4. Choose the token type as Read.
5. Enter a token name and click Create token.
6. Copy the generated token and replace ACCESS TOKEN FROM HUGGING FACE in the authtoken.py file with your token.
Step 3: Develop the Application
In your project directory, create a file named application.py and add the following code to the file:

# Import the Tkinter library for GUI
import tkinter as tk
# Import the custom Tkinter library for enhanced widgets
import customtkinter as ctk
# Import PyTorch for handling tensors and the model
import torch
# Import the Stable Diffusion Pipeline from the diffusers library
from diffusers import StableDiffusionPipeline
# Import PIL for image handling
from PIL import Image, ImageTk
# Import the authentication token from a file
from authtoken import auth_token

# Initialize the main Tkinter application window
app = tk.Tk()
# Set the size of the window
app.geometry("532x632")
# Set the title of the window
app.title("Text-to-Image Generator")
# Set the appearance mode of customtkinter to dark
ctk.set_appearance_mode("dark")

# Create an entry widget for the prompt text input
prompt = ctk.CTkEntry(height=40, width=512, text_font=("Arial", 20), text_color="black", fg_color="white")
# Place the entry widget at coordinates (10, 10)
prompt.place(x=10, y=10)

# Create a label widget for displaying the generated image
lmain = ctk.CTkLabel(height=512, width=512)
# Place the label widget at coordinates (10, 110)
lmain.place(x=10, y=110)

# Define the model ID for Stable Diffusion
modelid = "CompVis/stable-diffusion-v1-4"
# Define the device to run the model on
device = "cpu"
# Load the Stable Diffusion model pipeline
pipe = StableDiffusionPipeline.from_pretrained(modelid, revision="fp16", torch_dtype=torch.float32, use_auth_token=auth_token)
# Move the pipeline to the specified device (CPU)
pipe.to(device)

# Define the function to generate the image from the prompt
def generate():
    # Disable gradient calculation for efficiency
    with torch.no_grad():
        # Generate the image with guidance scale
        image = pipe(prompt.get(),
                     guidance_scale=8.5)["sample"][0]
    # Convert the image to a PhotoImage for Tkinter
    img = ImageTk.PhotoImage(image)
    # Keep a reference to the image to prevent garbage collection
    lmain.image = img
    # Update the label widget with the new image
    lmain.configure(image=img)

# Create a button widget to trigger the image generation
trigger = ctk.CTkButton(height=40, width=120, text_font=("Arial", 20), text_color="white", fg_color="black", command=generate)
# Set the text on the button to "Generate"
trigger.configure(text="Generate")
# Place the button at coordinates (206, 60)
trigger.place(x=206, y=60)

# Start the Tkinter main loop
app.mainloop()

To run the application, execute the following command in your terminal:

python application.py

This will launch the GUI where you can enter a text prompt and generate corresponding images by clicking the Generate button. Congratulations! You have successfully built an AI Text-to-Image Generator using Stable Diffusion in Visual Studio Code. Feel free to explore and enhance the application further by adding new features and improving the user interface. Happy coding!
Building Agentic Solutions with Autogen 0.4
Multi-agent systems arise from organized interaction between diverse agents working toward a goal. Similar to human collaboration, agentic solutions are expected to collaborate effectively in accordance with the goal to be accomplished. A crucial aspect is adopting the appropriate design pattern for the task at hand. Let us look at the design of agentic solutions in stages.
Stage 1: Determine all the required agents and define the tools that can be leveraged by the agents. The tools may have access requirements, which have to be handled with appropriate security constraints. In Autogen, this is supported through multiple patterns which address different requirements. At its core, Autogen provides the ability to leverage LLMs, human inputs, tools, or a combination. Autogen 0.4 in particular provides a high-level API through AgentChat with preset agents allowing for variations in agent responses. Some of the preset agents include:
1) AssistantAgent: a built-in agent which can use a language model and tools. It can also handle multimodal messages and instructions about the agent's function.
2) UserProxyAgent: an agent that takes user input and returns it as responses.
3) CodeExecutorAgent: an agent that can execute code.
4) OpenAIAssistantAgent: an agent that is backed by an OpenAI Assistant, with the ability to use custom tools.
5) MultimodalWebSurfer: a multi-modal agent that can search the web and visit web pages for information.
6) FileSurfer: an agent that can search and browse local files for information.
7) VideoSurfer: an agent that can watch videos for information.
A custom agent can be used when the preset agents do not address the need.
Stage 2: Identify the optimal interaction between the team of agents. This can include a human-in-the-loop proxy agent which serves as an interface for human inputs. Autogen supports multiple interaction patterns:
1) GroupChat, a high-level design pattern for interleaved interactions. In Autogen 0.4, GroupChat got further abstracted into RoundRobinGroupChat and SelectorGroupChat. This means you can choose the abstracted options of RoundRobinGroupChat or SelectorGroupChat, or customize to your need with the base GroupChat in the core.
- RoundRobinGroupChat: a team configuration where all agents share the same context and respond in a round-robin fashion. Each response is broadcast to all agents, providing a consistent context. Human in the loop is handled with a UserProxyAgent.
- SelectorGroupChat: a team where participants take turns broadcasting messages to all other members. A generative model selects the next speaker based on the shared context, enabling dynamic, context-aware collaboration. The selector_func argument accepts a custom selector function to override the default model-based selection.
- GroupChat in the core.
2) Sequential agents.
Stage 3: Determine the memory and message passing between the agents. Memory can be the context for the agent, which could be the conversation history, or RAG content pulled from a ListMemory or a custom memory store like a vector DB. Messaging between agents uses ChatMessage. This message type allows both text and multimodal communication and includes specific types such as TextMessage or MultiModalMessage.
Stage 4: Articulate the termination condition. The following termination options are available in Autogen 0.4:
- MaxMessageTermination: stops after a specified number of messages have been produced, including both agent and task messages.
- TextMentionTermination: stops when specific text or a string is mentioned in a message (e.g., "TERMINATE").
- TokenUsageTermination: stops when a certain number of prompt or completion tokens are used. This requires the agents to report token usage in their messages.
- TimeoutTermination: stops after a specified duration in seconds.
- HandoffTermination: stops when a handoff to a specific target is requested. Handoff messages can be used to build patterns such as Swarm. This is useful when you want to pause the run and allow the application or user to provide input when an agent hands off to them.
- SourceMatchTermination: stops after a specific agent responds.
- ExternalTermination: enables programmatic control of termination from outside the run. This is useful for UI integration (e.g., "Stop" buttons in chat interfaces).
- StopMessageTermination: stops when a StopMessage is produced by an agent.
- TextMessageTermination: stops when a TextMessage is produced by an agent.
- FunctionCallTermination: stops when a ToolCallExecutionEvent containing a FunctionExecutionResult with a matching name is produced by an agent.
Stage 5: Optionally manage the state. This is useful in web applications where stateless endpoints respond to requests and need to load the state of the application from persistent storage. The state can be saved by using the save_state() call on the AssistantAgent:

assistant_agent.save_state()

Finally, logging and serializing are also available for debugging and sharing. A well-designed agentic solution is crucial to being both optimal and effective in accomplishing the assigned goal.
References
Autogen - https://microsoft.github.io/autogen/stable/index.html
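As a concrete illustration of Stages 1 to 4, here is a minimal AgentChat 0.4 sketch: two preset AssistantAgents in a RoundRobinGroupChat with a TextMentionTermination condition. The model name, agent names, and system messages are assumptions; adapt them to your own deployment.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    # Stage 1: agents (preset AssistantAgents backed by a model client).
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    writer = AssistantAgent("writer", model_client=model_client,
                            system_message="Draft a short answer to the task.")
    reviewer = AssistantAgent("reviewer", model_client=model_client,
                              system_message="Critique the draft. Reply TERMINATE when it is good enough.")

    # Stage 2 and Stage 4: round-robin interaction with a text-mention termination condition.
    termination = TextMentionTermination("TERMINATE")
    team = RoundRobinGroupChat([writer, reviewer], termination_condition=termination)

    # Run the team on a task; the shared context (Stage 3) is managed by the team.
    result = await team.run(task="Explain what a design pattern is in two sentences.")
    print(result.messages[-1].content)

asyncio.run(main())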
Integrating Azure OpenAI Services
Hello everyone, I'm currently developing an AI to Human Text Converter Free platform, aitohumantextconverterfree.net, that aims to transform AI-generated text into more human-sounding, engaging text. I'm exploring ways to enhance the platform's capabilities and am particularly interested in leveraging Azure's AI services. I've been reading about the Azure OpenAI Service and its various models, such as the o-series and GPT-4o, which are designed for advanced reasoning and problem-solving tasks. I have a few questions:
- Model selection: Which Azure AI models would be most suitable for refining AI-generated text to make it read more like human writing?
- Integration best practices: Are there recommended approaches or resources for integrating Azure's AI services into existing web platforms?
- Customization: Is it possible to fine-tune these models specifically for converting AI-generated text into a more natural, human-like style?
Any insights, experiences, or resources you could share would be greatly appreciated. Thank you!
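On the customization question: for models that support fine-tuning (for example GPT-4o-mini), Azure OpenAI expects chat-format JSONL training data. The sketch below only shows how such a file could be prepared; the training pairs are invented placeholders, not real data from the platform.

import json

# Hypothetical training pairs: (AI-sounding input, preferred human-style rewrite).
pairs = [
    ("The utilization of our platform facilitates workflow optimization.",
     "Our platform simply helps you get your work done faster."),
    ("It is imperative that users adhere to the guidelines.",
     "Please stick to the guidelines."),
]

# Chat fine-tuning expects one JSON object per line in this shape.
with open("humanize_train.jsonl", "w", encoding="utf-8") as f:
    for ai_text, human_text in pairs:
        record = {
            "messages": [
                {"role": "system", "content": "Rewrite the text in a natural, human style."},
                {"role": "user", "content": ai_text},
                {"role": "assistant", "content": human_text},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")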
Azure AI Search - Tag Scoring profile on azureopenai extra_body
I created an index on Azure AI Search and connected it to Azure OpenAI using the extra_body. It works perfectly. However, I created a default scoring profile for my index, which boosts documents containing the string "zinc" in the VITAMINS field by a factor of 10. Since doing this, I can no longer run the query that worked previously without issues. Now, the query is asking for a scoringParameter, and when I attempt to pass it, I receive an error. Here is the code that works fine when I remove the scoring function:

client.chat.completions.create(
    model=os.getenv('DEPLOYMENT'),
    messages=messages,
    temperature=0.5,
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": os.getenv('ENDPOINT'),
                "index_name": os.getenv('INDEX'),
                "semantic_configuration": os.getenv('RANK'),
                "query_type": "hybrid",
                "in_scope": True,
                "role_information": None,
                "strictness": 1,
                "top_n_documents": 3,
                "authentication": {
                    "type": "api_key",
                    "key": os.getenv('KEY')
                },
                "embedding_dependency": {
                    "type": "deployment_name",
                    "deployment_name": os.getenv('ADA_VIT')
                }
            }
        }]
    }
)

However, if I activate the default scoring profile, I get the following error:

An error occurred: Error code: 400 - {'error': 'message': 'An error occurred when calling Azure Cognitive Search: Azure Search Error: 400, message=\'Server responded with status 400. Error message: {"error":{"code":"MissingRequiredParameter","message":"Expected 1 parameter(s) but 0 were supplied.\\r\\nParameter name: scoringParameter","details":[{"code":"MissingScoringParameter","message":"Expected 1 parameter(s) but 0 were supplied."}]}}\', api-version=2024-03-01-preview\'\nCall to Azure Search instance failed.\nAPI Users: Please ensure you are using the right instance, index_name, and provide admin_key as the api_key.\n'}

If I try to pass the scoringParameter anywhere in the extra_body, I receive this error:

An error occurred: Error code: 400 - {'error': {'requestid': '', 'code': 400, 'message': 'Validation error at #/data_sources/0/azure_search/parameters/scoringParameter: Extra inputs are not permitted'}}

This error is even more confusing. I've been looking through various resources, but none of them seem to provide a clear example of how to properly pass the scoring profile or scoring parameters in the extra_body. Here's how I define my scoring profile using tags:

scoring_profiles = [
    ScoringProfile(
        name="my-scoring-profile",
        functions=[
            TagScoringFunction(
                field_name="VITAMINS",
                boost=10.0,
                parameters=TagScoringParameters(
                    tags_parameter="tags",
                ),
            )
        ]
    )
]

How do I pass the scoring parameters correctly in the extra_body on the client.chat.completions.create call? PS: The only way I can get my code to work is if I delete the scoring profile or do not make it the default scoring profile, but I do want to use it.
Learning: AgentChat Swarm
I am trying to use the autogen agentchat Swarm team, specifically in a websocket-based application. I am facing issues with the async setup and Swarm usage. If someone has done work in the agentchat or Swarm domain and has some sort of tutorial code to share, that would be of great help. Thanks!
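Not a full tutorial, but a minimal async Swarm sketch for AgentChat 0.4 that could be awaited from inside a websocket handler. The agent names, handoff targets, and system messages are illustrative assumptions.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import Swarm
from autogen_agentchat.conditions import HandoffTermination, TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def run_swarm(task: str) -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    # The first agent can hand off to a specialist or back to the user.
    triage = AssistantAgent(
        "triage", model_client=model_client,
        handoffs=["billing", "user"],
        system_message="Route billing questions to the billing agent; hand off to user if you need their input.")
    billing = AssistantAgent(
        "billing", model_client=model_client, handoffs=["triage"],
        system_message="Answer billing questions, then say TERMINATE.")

    # Stop either on an explicit handoff to the user or on TERMINATE.
    termination = HandoffTermination(target="user") | TextMentionTermination("TERMINATE")
    team = Swarm([triage, billing], termination_condition=termination)

    # In a websocket application you would await this inside the connection's
    # coroutine and stream result.messages back to the client as they arrive.
    result = await team.run(task=task)
    print(result.messages[-1].content)

asyncio.run(run_swarm("Why was I charged twice this month?"))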
Do not see option Add your data in Azure AI chat playground for DeepSeek model
Issue 1: I am evaluating different models in Azure AI Foundry against my own data in Azure AI Search and do not see the option to add your data when the DeepSeek-R1 model is selected in the chat playground. It used to be there but disappeared recently (precisely on Feb 18, ET). However, I do see the option to add your data when GPT models are selected in the chat playground.
Issue 2: When the add your data option was available for the DeepSeek-R1 model (prior to Feb 18, 2025), I was getting the following error in the chat playground: "An error occurred when calling Azure OpenAI: Server responded with status 400. Error message: {'error': {'code': 'unknown_model', 'message': 'Unknown model: chatgpt', 'details': 'Unknown model: chatgpt'}}"
Image-to-Image generation using Stable-Diffusion-3.5 Large Model
Has anybody been able to generate an image with the 'Image-to-Image' mode of 'Stable Diffusion 3.5' (deployed as a serverless API)? I tried the text-to-image option (via Python + POST request) and was able to generate the image, but the 'image-to-image' option does not seem to work. In fact, even the Azure Playground does not list parameters for this option. However, the model information on Azure does state that it supports image-to-image mode and accepts an image input. Any leads on this will be greatly appreciated.
Azure AI speech studio - synthesis failed
Hi, in my TTS project all files created so far cause a failure when I hit the Play button. I get the following error message: Response status code does not indicate success: 400 (Synthesis failed. StatusCode: FailedPrecondition, Details: '=' is an unexpected token. The expected token is ';'. Line 1, position 535.). Connection ID: c2e319c0-c447-11ef-8937-33bd13f92760. Changing voices does not solve it. The location of the speech service is "Germany West Central".
Integration between AI Agent and D365 Finance and Operations
We have a requirement to fire a database query from the AI agent to find and retrieve certain information. I understood from our team that retrieving the information from a specific table may be possible, but the team is facing difficulty when retrieving information from multiple related tables and when there are multiple conditions to consider. Any pointer here is greatly appreciated.
Use case: Access to product documentation using the sidecar in D365
As an ISV, we continuously enhance our product by adding new features. We want to make up-to-date information about these features easily accessible to our customers directly within D365 using the Copilot sidecar. This would allow customers to prompt for details on a specific feature, and the system would provide comprehensive explanations. Currently, our product documentation for new features is stored on SharePoint. Would it be possible to integrate the Copilot sidecar for all our customers, enabling them to access our centrally stored product documentation on SharePoint?
How do I integrate a custom AI agent with D365 Finance and Operations?
We have a requirement where a custom AI agent needs to call D365 Finance and Operations to search for an item using some search criteria. Any pointer here is greatly appreciated. We are in the process of building this custom AI agent using Azure AI Foundry. We are also looking to retrieve the matching items from D365 Finance and Operations and return them to the AI agent.
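One possible shape for the agent's tool, offered only as a sketch: a function that queries the D365 F&O OData endpoint for released products whose name matches a search string, authenticating with a service principal via MSAL. The environment URL, app registration settings, entity name (ReleasedProductsV2), and field names are assumptions to verify against your own environment.

import os
import msal
import requests

D365_URL = "https://<your-environment>.operations.dynamics.com"  # placeholder

def get_token() -> str:
    """Acquire an app-only token for the D365 F&O environment via a service principal."""
    app = msal.ConfidentialClientApplication(
        client_id=os.environ["D365_CLIENT_ID"],
        client_credential=os.environ["D365_CLIENT_SECRET"],
        authority="https://login.microsoftonline.com/" + os.environ["TENANT_ID"],
    )
    result = app.acquire_token_for_client(scopes=[D365_URL + "/.default"])
    return result["access_token"]

def search_items(text: str) -> list[dict]:
    """Tool for the agent: return released products whose name contains `text`.
    Entity and field names are assumptions; check the data entities in your environment."""
    url = D365_URL + "/data/ReleasedProductsV2"
    params = {
        "$filter": f"contains(ProductName,'{text}')",
        "$select": "ItemNumber,ProductName",
        "$top": "10",
    }
    headers = {"Authorization": "Bearer " + get_token()}
    response = requests.get(url, params=params, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()["value"]

print(search_items("cable"))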
Create indexer - Can anyone help me please?
Hi everyone, I'm trying to create a custom skill for Azure AI Search, following the lab below:
https://github.com/MicrosoftLearning/mslearn-knowledge-mining/blob/main/Instructions/Labs/01-azure-search.md
When I try to create an indexer with the "./create-search" command in the PowerShell terminal of Visual Studio Code, the following error is returned (I'm using C#):
{"error":{"code":"","message":"This indexer refers to a skillset 'margies-custom-skillset' that doesn't exist"}}
Why does this happen? Where is the skillset 'margies-custom-skillset' stored? Is there any hint to get through this? The access keys (service, admin, storage) are entered correctly.
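For context, skillsets are stored as their own resources inside the search service, alongside indexes and indexers. A quick way to see whether 'margies-custom-skillset' actually exists on the service is to list skillsets via the Search REST API; the sketch below assumes the service name and admin key are set as environment variables, and the api-version is one current GA version.

import os
import requests

service = os.environ["SEARCH_SERVICE_NAME"]   # e.g. the "margies..." service name from the lab
admin_key = os.environ["SEARCH_ADMIN_KEY"]

# List the skillsets currently defined on the search service.
url = f"https://{service}.search.windows.net/skillsets"
response = requests.get(url,
                        params={"api-version": "2024-07-01"},
                        headers={"api-key": admin_key},
                        timeout=30)
response.raise_for_status()
names = [s["name"] for s in response.json()["value"]]
print(names)  # if 'margies-custom-skillset' is not listed, it has not been created on this service yet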
Recent Blogs
- Developers and enterprises now have immediate access to state-of-the-art generative and semantic models purpose-built for RAG (Retrieval-Augmented Generation) and agentic AI workflows on Azure AI Fou... (Apr 15, 2025)
- Last week, we kicked off the arrival of Meta’s powerful new Llama 4 models in Azure with the launch of three models across Azure AI Foundry and Azure Databricks. Today, we’re expanding the herd with ... (Apr 14, 2025)