Educator Developer Blog

Unlock the Power of AI with GitHub Models: A Hands-On Guide

ZileHuma
Sep 23, 2024

Hi, this is Zil-e-huma, a Beta Student Ambassador, and today I am back with another interesting article for my curious tech fellows. Ever wondered if there's a way to seamlessly integrate AI models into your projects without the heavy lifting? Enter GitHub Models: a game-changing feature that puts the power of AI right at your fingertips. Whether you're an AI enthusiast, a passionate developer, or just looking to make your applications smarter, this guide will show you how to harness the full strength of GitHub Models in a few simple steps.

 

Discovering GitHub Models: Your Gateway to AI Magic

Imagine having a collection of powerful AI models at your disposal, models that can chat, generate code, and much more with just a few tweaks. That's exactly what GitHub Models offers. To get started, head over to the Marketplace on GitHub and select Models. Here you'll see many options, from OpenAI's GPT-4o to Meta's Llama and beyond. Think of this as your AI toolkit, ready to be explored and experimented with!

Once you've chosen a model, you'll land on its overview page, which is organized into a few tabs:

  • README: The go-to guide for everything you need to know about the model.
  • Evaluation: A handy comparison tool to see how this model stacks up against others.
  • Transparency: Get all the nitty-gritty details about the model’s inner workings.
  • License: Check out the usage rights and restrictions.

 

Ready to take your first leap? Click the Playground button, and the fun begins!

 

Your First AI Adventure: Playing with GitHub Models

The Playground is where the magic happens. Here you can ask questions, tweak parameters, and watch the model respond in real time. Adjust settings like max tokens and temperature to see how different configurations affect the output: a higher temperature makes responses more varied and creative, while max tokens caps how long the reply can be.
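Those same Playground settings map directly onto keyword arguments of the chat completions API you'll use later in this post. As a minimal sketch (no API call is made here, and `build_chat_request` is a hypothetical helper, not part of any SDK), this is how the knobs fit together:

```python
def build_chat_request(user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.",
                       temperature: float = 1.0,
                       max_tokens: int = 1000,
                       top_p: float = 1.0) -> dict:
    """Assemble keyword arguments for a chat completions call.

    Lower temperature -> more deterministic output;
    max_tokens caps the length of the reply.
    """
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
    }

# A more deterministic, shorter configuration than the defaults:
request = build_chat_request("Summarize GitHub Models in one sentence.",
                             temperature=0.2, max_tokens=100)
```

Experimenting in the Playground first is a cheap way to find a parameter combination you like before committing it to code.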

Now, let’s take it up a notch. Click the Get Started button, and you’ll be greeted with a user-friendly overlay. You can choose the programming language and SDK that suits your needs. Then, it’s time to generate your own personal access token. Don’t worry—it’s easier than it sounds. Simply follow these steps:

  1. Go to GitHub Settings > Developer settings > Personal access tokens.
  2. Select the fine-grained tokens option (labeled Beta at the time of writing).
  3. Log in with your GitHub credentials.
  4. Set an expiration date and name your token.
  5. Click Generate Token and copy it.

You’re now equipped with the key to the GitHub Models kingdom! Export the token to your environment, and you’re all set to start coding.

 

Bringing AI to Life: Integrating GitHub Models into Your Projects

 

Quick and Easy Integration

Want to see how easy it is to integrate a model into your project? Let’s use a simple Python example. This code will have you up and running in no time:

 

import os
from openai import OpenAI

# Read the token from the environment; never hard-code it in your source
token = os.environ["GITHUB_TOKEN"]
# GitHub Models are served through this Azure AI inference endpoint
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

# The standard OpenAI client works; only the base URL and API key change
client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model=model_name,
    temperature=1.0,   # sampling randomness: lower values are more deterministic
    max_tokens=1000,   # upper bound on the length of the reply
    top_p=1.0          # nucleus sampling threshold
)

print(response.choices[0].message.content)

 

Now, to run this file, you first need to let the system know about your GitHub token, for example by exporting it as an environment variable in the terminal before running the script.
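One possible way to do so in a bash-style shell (the token value below is a placeholder; replace it with the token you generated earlier):

```shell
# Make the token generated earlier available to your program
export GITHUB_TOKEN="<your-personal-access-token>"
```

After exporting, run the script with `python sample.py` (substituting whatever filename you saved the code under). On Windows PowerShell, you would use `$Env:GITHUB_TOKEN = "<your-personal-access-token>"` instead.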

 

Advanced Integration with Custom Tools

But why stop there? Imagine adding custom functionality to your AI. In the example below, we retrieve flight information between two cities. Here's how you can supercharge your model with a custom tool (function calling):

 

import os
import json
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

# Define a function that returns flight information between two cities (mock implementation)
def get_flight_info(origin_city: str, destination_city: str):
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({
            "airline": "Delta",
            "flight_number": "DL123",
            "flight_date": "May 7th, 2024",
            "flight_time": "10:00AM"})
    return json.dumps({"error": "No flights found between the cities"})

# Define a function tool that the model can ask to invoke in order to retrieve flight information
tool={
    "type": "function",
    "function": {
        "name": "get_flight_info",
        "description": """Returns information about the next flight between two cities.
            This includes the name of the airline, flight number and the date and time
            of the next flight""",
        "parameters": {
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The name of the city where the flight originates",
                },
                "destination_city": {
                    "type": "string", 
                    "description": "The flight destination city",
                },
            },
            "required": [
                "origin_city",
                "destination_city"
            ],
        },
    },
}

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

messages=[
    {"role": "system", "content": "You are an assistant that helps users find flight information."},
    {"role": "user", "content": "I'm interested in going to Miami. What is the next flight there from Seattle?"},
]

response = client.chat.completions.create(
    messages=messages,
    tools=[tool],
    model=model_name,
)

# We expect the model to ask for a tool call
if response.choices[0].finish_reason == "tool_calls":

    # Append the model response to the chat history
    messages.append(response.choices[0].message)

    # We expect a single tool call
    if response.choices[0].message.tool_calls and len(response.choices[0].message.tool_calls) == 1:

        tool_call = response.choices[0].message.tool_calls[0]

        # We expect the tool to be a function call
        if tool_call.type == "function":

            # Parse the function call arguments and call the function
            function_args = json.loads(tool_call.function.arguments)
            print(f"Calling function `{tool_call.function.name}` with arguments {function_args}")
            callable_func = locals()[tool_call.function.name]
            function_return = callable_func(**function_args)
            print(f"Function returned = {function_return}")

            # Append the function call result to the chat history
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": tool_call.function.name,
                    "content": function_return,
                }
            )

            # Get another response from the model
            response = client.chat.completions.create(
                messages=messages,
                tools=[tool],
                model=model_name,
            )

            print(f"Model response = {response.choices[0].message.content}")

 

This code lets your AI model not just answer questions, but actively perform tasks, like finding the next flight from Seattle to Miami. The possibilities are endless!
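The dispatch step above can also be isolated into a small, testable helper. This is a sketch, not part of the GitHub Models API: it reuses the `get_flight_info` mock from the example and swaps the `locals()` lookup for an explicit registry of allowed tools:

```python
import json

def get_flight_info(origin_city: str, destination_city: str) -> str:
    # Mock lookup, mirroring the example above
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({"airline": "Delta", "flight_number": "DL123"})
    return json.dumps({"error": "No flights found between the cities"})

# Explicit registry of callable tools: the model can only ever
# trigger functions you have deliberately listed here
TOOLS = {"get_flight_info": get_flight_info}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Parse the model's JSON arguments and invoke the matching function."""
    args = json.loads(arguments_json)
    return TOOLS[name](**args)

result = dispatch_tool_call(
    "get_flight_info",
    '{"origin_city": "Seattle", "destination_city": "Miami"}',
)
```

Using a registry dictionary instead of `locals()[tool_call.function.name]` is a safer pattern: the model's output can never invoke a function you did not explicitly register.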

 

Supercharge Your Workflow with GitHub Codespaces

Want an even smoother experience? GitHub Codespaces lets you run models in a fully-configured cloud environment. Here’s how:

  1. Go to the Playground.
  2. Click Get Started, then select Run Codespace.
  3. A virtual environment with all dependencies pre-installed will launch, so you can start coding immediately.

No more configuration headaches—just you and your code.

 

Pricing and Limitations: What You Need to Know

While GitHub Models are free to experiment with, they do come with rate limits on requests and tokens. To move beyond those limits for production workloads, you'll need an Azure account and an Azure AI token. Pricing details are available on the Azure AI portal, so you can choose a plan that fits your needs.

 

FAQs: Your Burning Questions Answered

Q: Can GitHub Models replace Hugging Face?

A: Not yet. Most of the models on GitHub are closed-source and link back to Azure AI. While GitHub Models provide a convenient way to use Azure AI, they don't currently offer open model weights like Hugging Face. However, they do make using Azure AI models incredibly simple with a GitHub Personal Token.

 

Ready to Dive In?

GitHub Models are a fantastic way to integrate AI into your applications effortlessly. From simple queries to complex integrations, the possibilities are endless. So why wait? Head over to GitHub, explore the models, and let your creativity soar!

 

Happy coding! 🚀

 

Microsoft Learn modules for further learning

Introduction to prompt engineering with GitHub Copilot - Training | Microsoft Learn

Build a Web App with Refreshable Machine Learning Models - Training | Microsoft Learn

Introduction to GitHub - Training | Microsoft Learn

Updated Sep 20, 2024
Version 1.0