
Running Phi-3-vision via ONNX on Jetson Platform

Jambo0321
Brass Contributor
Jul 19, 2024

Hi, I'm Jambo, a Microsoft Learn Student Ambassador.

This article shows how to run the quantized Phi-3-vision model in ONNX format on the Jetson platform and perform inference for image + text dialogue tasks.

Writing Environment: Jetson Orin Nano, JetPack 6.0, CUDA 12.2, Python 3.10.

What is Jetson?

The Jetson platform, introduced by NVIDIA, consists of small arm64 devices equipped with powerful GPU computing capabilities. Designed specifically for edge computing and AI applications, Jetson devices run on Linux, enabling complex computing tasks with low power consumption. This makes them ideal for developing embedded AI and machine learning projects.

 

For the other versions of the Phi-3 model, we can use llama.cpp to convert them into GGUF format to run on Jetson, and easily switch between different quantizations. Alternatively, you can conveniently use services like ollama or LlamaEdge, which are based on llama.cpp. More information can be found in the Phi-3CookBook.

 

However, for the vision version, there is currently no way to convert it into GGUF format (#7444). Additionally, resource-constrained edge devices struggle to run the original, unquantized model via transformers. Therefore, we can use ONNX Runtime to run the quantized model in ONNX format.

What is ONNX Runtime?

ONNX Runtime is a high-performance inference engine designed to accelerate and execute AI models in the ONNX (Open Neural Network Exchange) format. onnxruntime-genai is an API built specifically for LLMs (Large Language Models), providing a simple way to run models like Llama, Phi, Gemma, and Mistral.

 

At the time of writing, onnxruntime-genai does not provide a precompiled package for aarch64 + GPU, so we need to compile it ourselves.

Compiling onnxruntime-genai

Preparation

  • Upgrade CMake (build.py requires CMake 3.26 or newer, which is newer than the apt-packaged version; after installing via pip, open a new shell so the updated cmake is picked up)

 

sudo apt purge cmake
pip3 install cmake -U

 

Cloning the onnxruntime-genai Repository

 

git clone https://github.com/microsoft/onnxruntime-genai
cd onnxruntime-genai

 

At the time of writing, the latest onnxruntime-genai code cannot be compiled successfully for unknown reasons, so we need to switch to an earlier commit. Below is the most recent commit that has been tested and compiles successfully:

 

git checkout 940bc102a317e886f488ad5e120533b96a34ddcd

 

ONNX Runtime

You can compile ONNX Runtime from source yourself, but this is a very time-consuming process on the Jetson platform. Therefore, we will directly use the version compiled for Jetson by dusty-nv. Do not worry about the cu124 in the URL; it runs fine on CUDA 12.2.

 

 

# Download dusty-nv's prebuilt ONNX Runtime GPU package and extract it into ort/
wget http://jetson.webredirect.org:8000/jp6/cu124/onnxruntime-gpu-1.19.0.tar.gz
mkdir ort
tar -xvf onnxruntime-gpu-1.19.0.tar.gz -C ort

# Flatten the header layout: move the C API header up to ort/include/ and drop the nested directory
mv ort/include/onnxruntime/onnxruntime_c_api.h ort/include/
rm -rf ort/include/onnxruntime/

 

Compiling onnxruntime-genai

You should still be in the onnxruntime-genai directory at this point.

Now we need to prepare to build the Python API. You can use Python >= 3.6 for the compilation; JetPack 6.0 ships with Python 3.10 by default, but you can switch to another version for the compilation. Note that the compiled .whl can only be installed on the Python version that was used during compilation.

Note: The compilation process will require a significant amount of memory. Therefore, if your Jetson device has limited memory (like the Orin NX), do not use the --parallel parameter.

 

python3 build.py --use_cuda --cuda_home /usr/local/cuda-12.2 --skip_tests --skip_csharp [--parallel]

 

The compiled files will be located in the build/Linux/Release/dist/wheel directory, and we only need the .whl file. Note that the .whl file should be around 110 MB.

 

You can copy the .whl file to other Jetson devices with the same environment (CUDA version) for installation.

Note: The generated subdirectory may differ, but we only need the .whl file from the build directory.

Installing onnxruntime-genai

If you have multiple CUDA versions, you might need to set the CUDA_PATH environment variable to ensure it points to the same version used during compilation.

 

export CUDA_PATH=/usr/local/cuda-12.2

 

Navigate to the directory containing the .whl file (or copy the .whl file to another directory) and install it with the following command.

 

pip3 install *.whl
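
To quickly check that the wheel matches your Python version and imports cleanly, you can run a minimal sanity check (the `__version__` attribute is an assumption about this build; the import succeeding is the important part):

```python
# Quick sanity check after installing the wheel.
import onnxruntime_genai as og

# If the import succeeds, the package was built for this Python version
# and can locate its native libraries.
print("onnxruntime-genai version:", getattr(og, "__version__", "unknown"))
```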

 

Running the Phi-3-vision Model

Downloading the Model

Download the ONNX CUDA version of the Phi-3-vision model from Hugging Face.

 

pip3 install "huggingface-hub[cli]"

 

The FP16 model requires 8 GB of VRAM. If you are running on a device with more resources like the Jetson Orin, you can opt for the FP32 model.

The Int 4 model is a quantized version, requiring only 3 GB of VRAM. This is suitable for more compact devices like the Jetson Orin Nano.

 

huggingface-cli download microsoft/Phi-3-vision-128k-instruct-onnx-cuda --include cuda-fp16/* --local-dir .
# Or
huggingface-cli download microsoft/Phi-3-vision-128k-instruct-onnx-cuda --include cuda-int4-rtn-block-32/* --local-dir .
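
If you prefer to download from Python instead of the CLI, the huggingface_hub library can fetch the same subfolder (a sketch equivalent to the int4 command above):

```python
from huggingface_hub import snapshot_download

# Download only the int4 CUDA variant of the ONNX model into the current directory.
snapshot_download(
    repo_id="microsoft/Phi-3-vision-128k-instruct-onnx-cuda",
    allow_patterns="cuda-int4-rtn-block-32/*",
    local_dir=".",
)
```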

 

Running the Example Script

Download the official example script and an example image.

 

# Download example script
wget https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3v.py
# Download example image
wget https://onnxruntime.ai/images/table.png

 

Run the example script.

 

python3 phi3v.py -m cuda-int4-rtn-block-32

 

 

First, input the path to the image, for example, table.png.

 

Next, input the prompt text, for example: Convert this image to markdown format.

 

 

```markdown
| Product             | Qtr 1    | Qtr 2    | Grand Total |
|---------------------|----------|----------|-------------|
| Chocolade           | $744.60  | $162.56  | $907.16     |
| Gummibarchen        | $5,079.60| $1,249.20| $6,328.80   |
| Scottish Longbreads | $1,267.50| $1,062.50| $2,330.00   |
| Sir Rodney's Scones | $1,418.00| $756.00  | $2,174.00   |
| Tarte au sucre      | $4,728.00| $4,547.92| $9,275.92   |
| Chocolate Biscuits  | $943.89  | $349.60  | $1,293.49   |
| Total               | $14,181.59| $8,127.78| $22,309.37  |
```

The table lists various products along with their sales figures for Qtr 1, Qtr 2, and the Grand Total. The products include Chocolade, Gummibarchen, Scottish Longbreads, Sir Rodney's Scones, Tarte au sucre, and Chocolate Biscuits. The Grand Total column sums up the sales for each product across the two quarters.
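
Under the hood, the example script combines the image and your prompt text into the Phi-3-vision chat template and feeds them to the model through a multimodal processor. Roughly, it looks like the sketch below (based on the official phi3v.py of that era; treat the exact method names as illustrative and check the script you downloaded):

```python
import onnxruntime_genai as og

# Load the ONNX model folder downloaded earlier.
model = og.Model("cuda-int4-rtn-block-32")
processor = model.create_multimodal_processor()
tokenizer_stream = processor.create_stream()

# The image path and prompt text you typed in interactively.
image = og.Images.open("table.png")
user_text = "Convert this image to markdown format."

# Phi-3-vision chat template with an image placeholder.
prompt = f"<|user|>\n<|image_1|>\n{user_text}<|end|>\n<|assistant|>\n"
inputs = processor(prompt, images=image)

params = og.GeneratorParams(model)
params.set_inputs(inputs)
params.set_search_options(max_length=3072)

# Generate and stream-decode tokens one at a time.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```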

 

 

Note: The first round of dialogue during script execution might be slow, but subsequent dialogues will be faster.

We can use jtop (from the jetson-stats package) to monitor resource usage:

 

The above inference is run on Jetson Orin Nano using the Int 4 quantized model. As shown, the Python process occupies 5.4 GB of VRAM for inference, with minimal CPU load and nearly full GPU utilization during inference.
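
If you would rather log these readings than watch them interactively, jetson-stats also exposes a small Python API. A rough sketch, assuming jetson-stats is installed and its service is running (the keys in the stats dictionary vary by JetPack version):

```python
import time
from jtop import jtop  # Python API of the jetson-stats package

# Print a resource snapshot roughly once per second while the model is running.
with jtop() as jetson:
    while jetson.ok():
        print(jetson.stats)  # dict of CPU/GPU/RAM/power readings
        time.sleep(1)
```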

We can modify the example script to call time functions at key points and measure the inference speed, which is remarkably fast; a sketch is shown below.
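
For example, one way to time the token-generation loop in phi3v.py (the generator and tokenizer_stream names come from that script; adjust them to match the version you downloaded):

```python
import time

# ... inside the script, replacing the existing generation loop ...
start = time.time()
token_count = 0

while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    token_count += 1
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)

elapsed = time.time() - start
print(f"\n{token_count} tokens in {elapsed:.2f} s ({token_count / elapsed:.2f} tokens/s)")
```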

 

All of this is achieved on a device with a power consumption of just 15W.

 

Updated Jul 29, 2024
Version 3.0

18 Comments

  • Jambo0321
    Brass Contributor

    Current information can be summarized as follows:

     

    1. It is necessary to switch to the 940bc10 commit for onnxruntime-genai. The newer commit appears to have introduced some issues, rendering CUDA compilation unusable.

    git switch --detach 940bc102a317e886f488ad5e120533b96a34ddcd

    2. An updated version of the onnxruntime library is required. However, even version 1.16 does not seem to enable CUDA support for onnxruntime-genai.
    3. There is currently no onnxruntime image or precompiled library more recent than version 1.16. Due to memory constraints, compiling on Jetson devices is challenging (Orin requires single-threaded compilation, and Orin Nano is unable to compile).
    4. I am still searching for a potentially simpler method for compilation.

  • Jambo0321
    Brass Contributor

    I apologize for the issues in my article. My testing environment, which has numerous pre-installed libraries, has led to complications when trying to compile and run the code in a fresh setup. I have been creating a Dockerfile to make it easier for everyone to use, but this process is still ongoing and will require more time. However, since I have successfully run the model previously, it indicates that running ONNX models on Jetson is feasible. Thank you all for your support and patience.

  • RBrown955
    Copper Contributor

    I kept receiving so many errors following these steps that I re-flashed my Jetson to help point out the missing steps for a Jetson Orin Nano Dev Kit, also running JetPack 6. I am assuming your instructions are based on a system that had already done some of these steps at an earlier time.

    The first thing is that the user is not automatically added to the existing docker group:

    sudo usermod -aG docker $USER

    Next, CMake must be updated to 3.26 in order to execute build.py:

    sudo apt purge cmake
    pip install --upgrade cmake

    Log off and log back in, then cd back to the onnxruntime-genai directory so cmake is usable.
    I removed the --parallel flag; it's probably best to leave it out of the terminal text since it's not used in the described environment.

    python3 build.py --use_cuda --cuda_home /usr/local/cuda-12.2 --skip_tests --skip_csharp

    The compiled files will be located in the build/Linux/RelWithDebInfo/wheel directory, and we only need the .whl file, called onnxruntime_genai_cuda-0.4.0.dev0-cp310-cp310-linux_aarch64.whl in my case.
    Install seems to go fine.
    The Phi-3 model downloads successfully from Hugging Face and the example scripts download fine.
    Running

    python3 phi3v.py -m cuda-int4-rtn-block-32
    #Results In
    Loading model...
    Traceback (most recent call last):
      File "/home/travis/phi3v.py", line 66, in <module>
        run(args)
      File "/home/travis/phi3v.py", line 16, in run
        model = og.Model(args.model_path)
    onnxruntime_genai.onnxruntime_genai.OrtException: CUDA execution provider is not enabled in this build.

     I am unsure where to go from here or what step is missing. Please keep in mind this is a 100% fresh build. 

     

  • RBrown955
    Copper Contributor

    Thanks!
    Currently building without parallel (Jetson Orin Nano Dev Kit, 8 GB). Will check back in an hour or so when the build is complete. Really looking forward to running Phi-3 locally. I've been super eager to get set up since GTC.

     

  • Jambo0321
    Brass Contributor

    RBrown955 I apologize for the mistake in the instructions. I didn't verify if those commands would work in your scenario. I achieved the same result by mounting a directory, copying the files inside the container, and then exiting the container. Here is how you can do it:

    docker run -it --rm -v ./ort/lib:/home dustynv/onnxruntime:r36.2.0
    cp /usr/local/lib/libonnxruntime*.so* /home
    exit
    

    I will also update the article accordingly.

    Regarding running Docker as a non-root user, you can refer to this documentation: https://docs.docker.com/engine/install/linux-postinstall/

  • RBrown955
    Copper Contributor

    Great work! Actually setting this up now.
    I don't think there is any part of the JetPack setup that goes through running Docker as a non-root user.

     

    sudo su
    id=$(docker create dustynv/onnxruntime:r36.2.0)

     

     

     

    Also, unfortunately, when I run

    sudo docker cp $id:/usr/local/lib/libonnxruntime*.so* - > ort/lib/


    I get

    bash: ort/lib/: Is a directory