Microsoft Semantic Kernel and AutoGen: Open Source Frameworks for AI Solutions
Explore Microsoft's open-source frameworks, Semantic Kernel and AutoGen. Semantic Kernel enables developers to create AI solutions across various domains using a single Large Language Model (LLM), while AutoGen uses AI agents that cooperate through agent dialogues to complete complex tasks. Discover how these technologies serve different scenarios and can be used to build powerful AI applications.

Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio
Phi-3 is a family of small language models (SLMs) developed by Microsoft that delivers exceptional performance and cost-effectiveness. In this tutorial, you will learn how to fine-tune the Phi-3 model and integrate the custom model with Prompt flow in Azure AI Studio. By leveraging Azure AI / ML Studio, you will establish a workflow for deploying and using custom AI models.

Exploring Microsoft's Phi-3 Family of Small Language Models (SLMs) with Azure AI
Dive into the world of small language models (SLMs) with Microsoft's Phi-3 family and learn how to integrate them into real-world applications using Azure AI. Discover step-by-step guidance, practical exercises, and a Gradio-powered chatbot interface to build your confidence in deploying and integrating AI. Keep learning and building with Azure AI.

Getting Started Using Phi-3-mini-4k-instruct-onnx for Text Generation with NLP Techniques
In this tutorial, we'll cover how to use the Phi-3 mini models for text generation using NLP techniques. Whether you're a beginner or an experienced AI developer, you'll learn how to download and run these powerful tools on your own computer. From setting up the Python environment to generating responses with the generate() API, we provide clear instructions and code examples throughout the tutorial. So, let's get started and see what the Phi-3 mini models can do!

Visual Studio AI Toolkit: Building Phi-3 GenAI Applications
Port Forwarding, a valuable feature within the AI Toolkit, serves as a gateway for seamless communication with the GenAI model. Whether through a straightforward API call or the SDKs, this functionality greatly enhances our ability to harness the power of the LLM/SLM. Enabling Port Forwarding opens up a range of new scenarios and unlocks the full potential of our interactions with the model.

Unlock the Power of Small Language Models with Phi-3 and Azure AI Studio
Discover the exciting world of Small Language Models (SLMs) and learn how to get started with Phi-3, a model family that redefines what's possible with small language models. This step-by-step guide takes you through setting up your environment, installing Phi-3 models, building your first SLM-based application, and deploying it to the cloud.

A better Phi Family is coming - multi-language support, better vision, intelligent MoEs
After the release of Phi-3 at Microsoft Build 2024, the family has attracted a lot of attention, especially for running Phi-3-mini and Phi-3-vision on edge devices. In the June update we improved benchmark results and system-role support by adjusting the high-quality training data. In the August update, based on community and customer feedback, we are bringing multi-language support in Phi-3.5-mini-128k-instruct, multi-frame image input in Phi-3.5-vision-128k, and a new Phi-3.5 MoE model aimed at AI agents. Let's take a look.

Multi-language support

In previous versions, Phi-3-mini had good English corpus support but weak support for non-English languages. When we asked questions in Chinese, the model often produced wrong answers. In the new version we get much better understanding and corpus coverage, including improved Chinese prediction. You can also try the enhancements in other languages; even without fine-tuning or RAG, it is a capable model.

Code Sample: https://github.com/microsoft/Phi-3CookBook/blob/main/code/09.UpdateSamples/Aug/phi3-instruct-demo.ipynb

Better vision

Phi-3.5-vision enables Phi-3 not only to understand text and hold dialogues, but also to handle visual tasks such as OCR, object recognition, and image analysis. In real application scenarios, however, we often need to analyze multiple images together to find associations, for example in videos, slide decks, or books. The new Phi-3.5-vision supports multi-frame and multi-image input, so we can better perform this kind of inductive analysis in visual scenes.

Take a video as an example. We can use OpenCV to extract key frames; here we extract 21 key-frame images and store them in an array:

    from PIL import Image

    images = []
    placeholder = ""
    for i in range(1, 22):
        # Load each extracted key frame and build the image placeholder string
        # expected by the Phi-3.5-vision chat template (<|image_1|> ... <|image_21|>)
        images.append(Image.open(f"../output/keyframe_{i}.jpg"))
        placeholder += f"<|image_{i}|>\n"

Combined with Phi-3.5-vision's chat template, we can then run a comprehensive analysis over all frames, which makes dynamic vision-based work much more efficient, especially in edge scenarios.

Code Sample: https://github.com/microsoft/Phi-3CookBook/blob/main/code/09.UpdateSamples/Aug/phi3-vision-demo.ipynb
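For completeness, the key-frame extraction step itself can be done with a few lines of OpenCV. The following is a minimal sketch, assuming a local video file and evenly spaced sampling; the input path and sampling strategy are assumptions on my part, but the output file names match the loading loop above.

    import cv2

    video_path = "../input/demo.mp4"  # hypothetical input video
    cap = cv2.VideoCapture(video_path)
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    num_keyframes = 21

    for i in range(1, num_keyframes + 1):
        # Jump to an evenly spaced frame index and write it out as keyframe_<i>.jpg
        frame_index = int((i - 1) * total_frames / num_keyframes)
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"../output/keyframe_{i}.jpg", frame)

    cap.release()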
""" sys_msg = """You are a helpful AI assistant, you are an agent capable of using a variety of tools to answer a question. Here are a few of the tools available to you: - Blog: This tool helps you describe a certain knowledge point and content, and finally write it into Twitter or Facebook style content - Translate: This is a tool that helps you translate into any language, using plain language as required - Final Answer: the final answer tool must be used to respond to the user. You must use this when you have decided on an answer. To use these tools you must always respond in JSON format containing `"tool_name"` and `"input"` key-value pairs. For example, to answer the question, "Build Muliti Agents with MOE models" you must use the calculator tool like so: { "tool_name": "Blog", "input": "Build Muliti Agents with MOE models" } Or to translate the question "can you introduce yourself in Chinese" you must respond: { "tool_name": "Search", "input": "can you introduce yourself in Chinese" } Remember just output the final result, ouput in JSON format containing `"agentid"`,`"tool_name"` , `"input"` and `"output"` key-value pairs .: [ { "agentid": "step1", "tool_name": "Blog", "input": "Build Muliti Agents with MOE models", "output": "........." }, { "agentid": "step2", "tool_name": "Search", "input": "can you introduce yourself in Chinese", "output": "........." }, { "agentid": "final" "tool_name": "Result", "output": "........." } ] The users answer is as follows. """ We can see that by telling the model what skills it needs to have and the arrangement of tasks, we can assign them to different tasks through Phi-3.5 MOEs to complete related work. Write something about Generative AI with MOEs , translate it to Chinese The result : [ { "agentid": "step1", "tool_name": "Blog", "input": "Generative AI with MOE", "output": "Generative AI with MOE (Mixture of Experts) is a powerful approach that combines the strengths of generative models and the flexibility of MOE architecture. This hybrid model can generate high-quality, diverse, and contextually relevant content, making it suitable for various applications such as content creation, data augmentation, and more." }, { "agentid": "step2", "tool_name": "Translate", "input": "Generative AI with MOE is a powerful approach that combines the strengths of generative models and the flexibility of MOE architecture. This hybrid model can generate high-quality, diverse, and contextually relevant content, making it suitable for various applications such as content creation, data augmentation, and more.", "output": "基于生成AI的MOE(Mixture of Experts)是一种强大的方法,它结合了生成模型的优势和MOE架构的灵活性。这种混合模型可以生成高质量、多样化且上下文相关的内容,使其适用于各种应用,如内容创建、数据增强等。" }, { "agentid": "final", "tool_name": "Result", "output": "基于生成AI的MOE(Mixture of Experts)是一种强大的方法,它结合了生成模型的优势和MOE架构的灵活性。这种混合模型可以生成高质量、多样化且上下文相关的内容,使其适用于各种应用,如内容创建、数据增强等。" } ] If conditions permit, we can more smoothly integrate the Phi-3 MOEs model into frameworks such as AutoGen, Semantic Kernel, and Langchain. Code Sample: https://github.com/microsoft/Phi-3CookBook/blob/main/code/09.UpdateSamples/Aug/phi3_moe_demo.ipynb Thoughts on SLMs SLMs do not replace LLMs but give GenAI a broader scenario. The update of Phi-3 allows more edge devices to have better support, including text, chat, and vision. In modern AI Agents application scenarios, we hope to have more efficient task execution efficiency. In addition to computing power, MoEs are the key to solving problems. 
Thoughts on SLMs

SLMs do not replace LLMs; they give generative AI a broader range of scenarios. The Phi-3 update gives more edge devices better support across text, chat, and vision. In modern AI agent scenarios we want more efficient task execution, and besides raw computing power, MoEs are a key part of the answer.

Phi-3 is still iterating, and I hope everyone will pay more attention and give us better feedback.

Resources

1. Download the Microsoft Phi-3 family: https://huggingface.co/collections/microsoft/phi-3-6626e15e9585a200d2d761e3
2. Read the Phi-3 Cookbook: https://aka.ms/phi-3cookbook
3. Learn about MoEs: https://huggingface.co/blog/moe
Building AI Agents on edge devices using Ollama + Phi-4-mini Function Calling
The new Phi-4-mini and Phi-4-multimodal now support Function Calling. This feature enables the models to connect with external tools and APIs. By deploying Phi-4-mini and Phi-4-multimodal with Function Calling capabilities on edge devices, we can extend their knowledge locally and improve their task execution efficiency. This blog focuses on how to use Phi-4-mini's Function Calling capabilities to build efficient AI agents on edge devices.

What is Function Calling?

How it works

First, we need to understand how Function Calling works:

- Tool Integration: Function Calling allows an LLM/SLM to interact with external tools and APIs, such as weather APIs, databases, or other services.
- Function Definition: You define a function (tool) the LLM/SLM can call, specifying its name, parameters, and expected output.
- LLM Detection: The LLM/SLM analyzes the user's input and determines whether a function call is required and, if so, which function to use.
- JSON Output: The LLM/SLM outputs a JSON object containing the name of the function to call and the parameters it requires.
- External Execution: The application executes the function call using the parameters provided by the LLM/SLM.
- Response to LLM: The output of the function call is returned to the LLM/SLM, which uses this information to generate a response to the user.

Application scenarios

- Data retrieval: convert natural language queries into API calls to fetch data (e.g., "show my recent orders" triggers a database query)
- Operation execution: convert user requests into specific function calls (e.g., "schedule a meeting" becomes a calendar API call)
- Computational tasks: handle mathematical or logical operations through dedicated functions (e.g., calculate compound interest or run a statistical analysis)
- Data processing: chain multiple function calls together (e.g., get data → parse → transform → store)
- UI/UX integration: trigger interface updates based on user interactions (e.g., update map markers or display charts)

Phi-4-mini / Phi-4-multimodal Function Calling

Phi-4-mini and Phi-4-multimodal support both single and parallel Function Calling. Things to note when calling:

- You need to define the tools in the system message to enable single or parallel Function Calling.
- For parallel Function Calling, you also need to add "some tools" to the system prompt, as in the travel-agent example below ("You are my travel agent with some tools available.").

The following is an example.

Single Function Calling

    import json

    tools = [
        {
            "name": "get_match_result",
            "description": "get match result",
            "parameters": {
                "match": {
                    "description": "The name of the match",
                    "type": "str",
                    "default": "Arsenal vs ManCity"
                }
            }
        },
    ]

    messages = [
        {
            "role": "system",
            "content": "You are a helpful assistant",
            "tools": json.dumps(tools),  # pass the tools into the system message using the tools argument
        },
        {
            "role": "user",
            "content": "What is the result of Arsenal vs ManCity today?"
        }
    ]

Full Sample: Click
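To make the "External Execution" and "Response to LLM" steps above concrete, here is a minimal dispatcher sketch. It is not part of the original samples: it assumes the model's tool call has already been extracted as a plain JSON list (the exact wrapper around that JSON depends on the runtime), and get_match_result is a hypothetical stub.

    import json

    def get_match_result(match: str) -> str:
        # Hypothetical stub: a real application would query a sports API or database here.
        return f"{match}: 2-1 (example result)"

    AVAILABLE_FUNCTIONS = {"get_match_result": get_match_result}

    def execute_tool_calls(tool_calls_json: str) -> list:
        """Run every {"name": ..., "arguments": {...}} call in a JSON list and collect the results."""
        results = []
        for call in json.loads(tool_calls_json):
            func = AVAILABLE_FUNCTIONS[call["name"]]
            results.append({"name": call["name"], "result": func(**call.get("arguments", {}))})
        return results

    # Example: the model decided to call get_match_result for the user's question
    model_tool_call = '[{"name": "get_match_result", "arguments": {"match": "Arsenal vs ManCity"}}]'
    print(execute_tool_calls(model_tool_call))

The results returned here would then be appended to the conversation so the model can produce its final answer to the user.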
Parallel Function Calling

    AGENT_TOOLS = {
        "booking_flight": {
            "name": "booking_flight",
            "description": "booking flight",
            "parameters": {
                "departure": {
                    "description": "The name of Departure airport code",
                    "type": "str",
                },
                "destination": {
                    "description": "The name of Destination airport code",
                    "type": "str",
                },
                "outbound_date": {
                    "description": "The date of outbound flight",
                    "type": "str",
                },
                "return_date": {
                    "description": "The date of return flight",
                    "type": "str",
                }
            }
        },
        "booking_hotel": {
            "name": "booking_hotel",
            "description": "booking hotel",
            "parameters": {
                "query": {
                    "description": "The name of the city",
                    "type": "str",
                },
                "check_in_date": {
                    "description": "The date of check in",
                    "type": "str",
                },
                "check_out_date": {
                    "description": "The date of check out",
                    "type": "str",
                }
            }
        },
    }

    SYSTEM_PROMPT = """
    You are my travel agent with some tools available.
    """

    messages = [
        {
            "role": "system",
            "content": SYSTEM_PROMPT,
            "tools": json.dumps(AGENT_TOOLS),  # pass the tools into the system message using the tools argument
        },
        {
            "role": "user",
            "content": """I have a business trip from London to New York in March 21 2025 to March 27 2025, can you help me to book a hotel and flight tickets"""
        }
    ]

Full sample: click

Using Ollama and Phi-4-mini Function Calling to Create AI Agents on Edge Devices

Ollama is a popular free tool for deploying LLMs/SLMs locally and can be used in combination with the AI Toolkit for VS Code. Besides PCs and laptops, it can also run on IoT devices, mobile phones, containers, and more. To use Phi-4-mini on Ollama, you need Ollama 0.5.13 or later, and several quantized versions of the model are available. With Ollama we can deploy Phi-4-mini at the edge and implement an AI agent with Function Calling under limited computing power, so that generative AI can be applied more effectively at the edge.

Current Issues

Unfortunately, if you call Ollama directly in the way shown above, Function Calling is not triggered. There is a discussion in Ollama's GitHub issues: https://github.com/ollama/ollama/issues/9437. Modifying the Phi-4-mini template in the Modelfile gets single Function Calling working, but parallel Function Calling still fails.

Resolution

We have implemented a fix by adjusting the template according to Phi-4-mini's chat template and rewriting the Modelfile. (Note that the quantization level also has a large impact on the results.) The adjusted template is as follows:

    TEMPLATE """
    {{- if .Messages }}
    {{- if or .System .Tools }}<|system|>
    {{ if .System }}{{ .System }}
    {{- end }}
    In addition to plain text responses, you can choose to call one or more of the provided functions.

    Use the following rule to decide when to call a function:
      * if the response can be generated from your internal knowledge (e.g., as in the case of queries like "What is the capital of Poland?"), do so
      * if you need external information that can be obtained by calling one or more of the provided functions, generate function calls

    If you decide to call functions:
      * prefix function calls with the functools marker (no closing marker required)
      * all function calls should be generated in a single JSON list formatted as functools[{"name": [function name], "arguments": [function arguments as JSON]}, ...]
      * follow the provided JSON schema. Do not hallucinate arguments or values. Do not blindly copy values from the provided samples
      * respect the argument type formatting. E.g., if the type is number and the format is float, write the value 7 as 7.0
      * make sure you pick the right functions that match the user intent

    Available functions as JSON spec:
    {{- if .Tools }}
    {{ .Tools }}
    {{- end }}<|end|>
    {{- end }}
    {{- range .Messages }}
    {{- if ne .Role "system" }}<|{{ .Role }}|>
    {{- if and .Content (eq .Role "tools") }}
    {"result": {{ .Content }}}
    {{- else if .Content }}
    {{ .Content }}
    {{- else if .ToolCalls }}
    functools[
    {{- range .ToolCalls }}{{ "{" }}"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}{{ "}" }}
    {{- end }}]
    {{- end }}<|end|>
    {{- end }}
    {{- end }}<|assistant|>
    {{ else }}
    {{- if .System }}<|system|>
    {{ .System }}<|end|>{{ end }}{{ if .Prompt }}<|user|>
    {{ .Prompt }}<|end|>{{ end }}<|assistant|>
    {{ end }}{{ .Response }}{{ if .Response }}<|user|>{{ end }}
    """

We have tested the solution with different quantized models. In a laptop environment, we recommend using phi4-mini:3.8b-fp16 to enable single and parallel Function Calling.

Note: you need to bind the adjusted Modelfile to phi4-mini:3.8b-fp16 for this to work. Please execute the following commands on the command line:

    # If you haven't downloaded the model yet, pull it first
    ollama run phi4-mini:3.8b-fp16

    # Bind it with the adjusted Modelfile
    ollama create phi4-mini:3.8b-fp16 -f {Your Modelfile Path}

You can then test both single and parallel Function Calling with Phi-4-mini; a minimal test sketch follows, and the full sample is available in the notebook.
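To quickly exercise the re-created model, a minimal test sketch using the ollama Python package might look like the following. This is an assumption on my part rather than the notebook's code: it appends the tool definitions (AGENT_TOOLS and SYSTEM_PROMPT from the parallel example above) to the system prompt text instead of going through the template's .Tools field, and it simply looks for the functools marker that the adjusted template asks the model to emit.

    import json
    import ollama  # pip install ollama

    # AGENT_TOOLS and SYSTEM_PROMPT are the definitions from the parallel Function Calling example above
    system_content = SYSTEM_PROMPT + "\nAvailable tools (JSON spec):\n" + json.dumps(AGENT_TOOLS)

    response = ollama.chat(
        model="phi4-mini:3.8b-fp16",  # the model re-created from the adjusted Modelfile
        messages=[
            {"role": "system", "content": system_content},
            {"role": "user", "content": "I have a business trip from London to New York from March 21 2025 to March 27 2025, can you help me to book a hotel and flight tickets?"},
        ],
    )

    content = response["message"]["content"]
    if content.startswith("functools"):
        # Parallel calls arrive as one JSON list after the functools marker
        calls = json.loads(content[len("functools"):])
        for call in calls:
            print(call["name"], call["arguments"])
    else:
        print(content)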
The above example is just a simple introduction. As development moves forward, we hope to find simpler ways to apply this at the edge, use Function Calling to expand the scenarios for Phi-4-mini / Phi-4-multimodal, and develop more use cases in vertical industries.

Resources

- Phi-4 models on Hugging Face: https://huggingface.co/collections/microsoft/phi-4-677e9380e514feb5577a40e4
- Phi-4-mini on Ollama: https://ollama.com/library/phi4-mini
- Learn about Function Calling: https://huggingface.co/docs/hugs/en/guides/function-calling
- Phi Cookbook - Samples and Resources for Phi Models: https://aka.ms/phicookbook

Create your own Visual Studio Code Chat Participant with Phi-3.5 by GitHub Models

Visual Studio Code provides an API that lets companies and individuals create their own chat participants for their business, extending its capabilities into proprietary domains. In this article, we focus on the Phi-3.5-mini-instruct (128k) and Phi-3.5-vision-instruct (128k) models from GitHub Models to create your own Visual Studio Code chat participant.
Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers

Introduction

Phi-4-Reasoning is a state-of-the-art AI model developed by Microsoft Research, designed to excel in complex reasoning tasks. With its advanced capabilities, Phi-4-Reasoning is a powerful tool for AI developers, enabling them to tackle intricate problems with ease and precision.

What is Phi-4-Reasoning?

Phi-4-Reasoning is a 14-billion-parameter open-weight reasoning model, fine-tuned from Phi-4 using supervised fine-tuning on a dataset of chain-of-thought traces. We are also releasing Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. The model is designed to handle complex reasoning tasks, including mathematical problems, scientific queries, and coding challenges.

Why Phi-4-reasoning and Phi-4-reasoning-plus excel

- Advanced reasoning capabilities: Phi-4-Reasoning is trained to generate detailed reasoning chains, making it highly effective at solving complex problems. It leverages additional inference-time compute to enhance its reasoning abilities.
- High-quality data curation: The model's training involved meticulous data curation and high-quality synthetic datasets, allowing it to compete with much larger models.
- Performance: Despite its smaller size, Phi-4-Reasoning outperforms larger models such as OpenAI's o1-mini and DeepSeek-R1-Distill-Llama-70B on various benchmarks, including mathematical reasoning and PhD-level science questions.
- Industry applications: Phi-4-Reasoning is versatile and can be deployed in various industry scenarios, including medicine, where it can run reasoning tasks on edge devices.

Real-World Examples

Wordplay riddle: Phi-4-Reasoning can solve wordplay riddles by recognizing patterns and applying logical reasoning. For example, it deduced that collecting three strawberries would yield 9 r's because the word "strawberry" contains three r's.

Example 1: How many strawberries for 9 r's?

User: How many strawberries do I need to collect 9 r's?

Phi-4-Reasoning: <think> TRUNCATED </think> It's a wordplay riddle! Notice that the word "strawberry" has three r's in it. So if you "collect" three strawberries, you get 3 × 3 = 9 r's. Thus, you need 3 strawberries.

Solving a riddle: The model can decrypt complex riddles by identifying patterns and performing arithmetic operations. It demonstrated this capability by solving a riddle involving foreign-language number-words and reversed number-words.

What This Problem Proves:

- Pattern Recognition: Phi-4-Reasoning demonstrates its ability to identify patterns in words and apply logical reasoning to solve the riddle.
- Wordplay Understanding: The model showcases its proficiency in understanding and solving wordplay riddles, which require a combination of linguistic knowledge and logical deduction.
- Efficiency: Phi-4-Reasoning quickly arrives at the correct solution by applying a straightforward multiplication operation, highlighting its efficiency in solving such problems.

Example 2: Solving a Riddle

User: I will give you a riddle to solve with a few examples, and something to complete at the end. What is a likely completion for X that is consistent with the examples above?

Phi-4-Reasoning: <think> TRUNCATED </think> We can "decrypt" the riddle by noticing that each line has five parts.
In every example the parts are:

- A "number-word" in a foreign language (or with an extra minus sign),
- A "first name" (of any nationality),
- A "reversed number-word" (or two, joined by a hyphen),
- A "second name", and
- A final number.

When you "reverse" the third word you get a familiar number in English:

- In line 1, "evif" is "five" (5) and the first number is "uno" = 1, and 5 – 1 = 4.
- In line 2, "neves" is "seven" (7) and "tres" = 3, and 7 – 3 = 4.
- In line 3, "evlewt" is "twelve" (12) and "drei" = 3, and 12 – 3 = 9.
- In lines 4 and 5, with two reversed "number-words":
  – Line 4: "xis" → "six" (6) and "ytnewt" → "twelve" (12); 6 + 12 = 18, plus 3 = 21.
  – Line 5: "eerht" → "three" (3) and "ytnewt" → "twelve" (12); 3 + 12 = 15, plus 3 = 18.

Now look at the final line: "minus one-point-two Satya eno Bill X". It follows the same five-part format. Notice that "eno" reversed is "one" (1), and the "first number" is "minus one-point-two" (–1.2). Since for a one-number-word line the rule is:

    final number = (number from reversed word) − (first number)

we compute: final = 1 − (−1.2) = 1 + 1.2 = 2.2. Thus, a likely completion for X is 2.2.

Phi-4-Reasoning decrypts the riddle by identifying a pattern in the examples provided. Each line consists of five parts: a number-word in a foreign language, a first name, a reversed number-word, a second name, and a final number. By reversing the third word and performing arithmetic operations, Phi-4-Reasoning determines that the likely completion for X is 2.2.

What This Problem Proves:

- Complex Pattern Recognition: Phi-4-Reasoning excels at recognizing complex patterns involving multiple elements, such as foreign-language number-words and reversed number-words.
- Logical Deduction: The model demonstrates its ability to apply logical deduction to solve the riddle by performing arithmetic operations based on the identified patterns.
- Versatility: Phi-4-Reasoning showcases its versatility in handling diverse types of riddles and problems, making it a valuable tool for AI developers.

Example 3 shows that the model can solve complex graph traversal tasks, such as finding a path through a maze. The maze is represented as an adjacency list, which is a common way to describe graphs in computer science. Each node in the graph (or maze) is connected to other nodes, and the task is to find a valid path from the starting node (node 0) to the destination node.

Example 3: Solving a maze

User: Please solve the maze provided in adjacency list form below, by outputting a valid path from node 0 to node {destination}, in a space-separated list of integers. Adjacency list: {depicted as an image below}

Phi-4-Reasoning: <think> TRUNCATED </think> {path depicted as an image below}
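For comparison, the same kind of task can be solved programmatically with a standard depth-first search over the adjacency list. The small maze below is a hypothetical stand-in for the one depicted in the figure, and this is not a claim about how Phi-4-Reasoning works internally; it is simply the conventional algorithmic baseline the model's answer can be checked against.

    def find_path(adjacency, start, goal):
        """Depth-first search over an adjacency list; returns one path from start to goal, or None."""
        stack = [(start, [start])]
        visited = set()
        while stack:
            node, path = stack.pop()
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for neighbor in adjacency.get(node, []):
                if neighbor not in visited:
                    stack.append((neighbor, path + [neighbor]))
        return None  # no valid path exists

    # A small hypothetical maze (not the one from the article's figure)
    maze = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    print(find_path(maze, 0, 5))  # e.g. [0, 2, 4, 5]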
What This Problem Proves:

- Graph Traversal Capability: Phi-4-Reasoning can effectively navigate graphs using algorithms like Depth-First Search (DFS) or Breadth-First Search (BFS). This capability is crucial for solving problems that involve finding paths, cycles, or connectivity in graphs.
- Logical Reasoning: The model demonstrates its ability to apply logical reasoning to determine the correct sequence of nodes to traverse from the start to the destination. This involves understanding the structure of the graph and making decisions based on the connections between nodes.
- Pattern Recognition: Phi-4-Reasoning can recognize patterns in the adjacency list and use them to find a solution. This is important for tasks that require identifying and following specific paths or routes.
- Versatility: The ability to solve a maze using an adjacency list showcases the model's versatility in handling different types of data structures and problem-solving scenarios. This is beneficial for AI developers who need to work with various data representations and algorithms.
- Efficiency: The model's ability to quickly and accurately find a valid path through the maze highlights its efficiency in solving complex problems. This is valuable for applications that require fast and reliable solutions.

Conclusion: Phi-4-Reasoning's ability to solve a maze from an adjacency list demonstrates its advanced reasoning capabilities, making it a powerful tool for AI developers. Its proficiency in graph traversal, logical reasoning, pattern recognition, versatility, and efficiency makes it well suited to a wide range of complex problems.

Deployment and Integration

Phi-4-Reasoning can be deployed on various platforms, including Azure AI Foundry and Hugging Face. It supports quantization using tools like Microsoft Olive, making it suitable for deployment on edge devices such as IoT devices, laptops, and mobile phones.

Phi-4-Reasoning is a groundbreaking AI model that offers advanced reasoning capabilities, high performance, and versatility. Its ability to handle complex reasoning tasks makes it an invaluable tool for AI developers, enabling them to create innovative solutions across various industries.

References

- Make Phi-4-mini-reasoning more powerful with industry reasoning on edge devices | Microsoft Community Hub
- Phi-4 Reasoning Technical Paper
- Phi-4-Mini-Reasoning Technical Paper
- One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog
- Phi Cookbook
- Access Phi-4-reasoning models:
  - Phi Models at Azure AI Foundry Models
  - Phi Models on Hugging Face
  - Phi Models on GitHub Marketplace Models