Percept - reading input video source instead of RTSP

I am trying to use a video file as the input source for the Percept, but this simple C++ code using OpenCV doesn't work on the device (inside the AzureEyeModule docker container). It is unable to open any video file (.mp4, .avi, .mkv, etc.). Are there any libraries missing that are needed to read a video file?

 

Code snippet:

cv::VideoCapture cap("/tmp/app/build/pedestrians_animals.mp4");

if (!cap.isOpened()) {
    std::cout << "ERROR! Unable to open file\n";
}
else {
    std::cout << "opened the file!!\n";
}
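For what it's worth, one way to check whether the container's OpenCV build even has a file-capable backend (FFmpeg or GStreamer) is to dump the build information and the registered video I/O backends, and to request the FFmpeg backend explicitly. This is only a diagnostic sketch using stock OpenCV calls, reusing the same file path as above:

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/videoio/registry.hpp>

int main()
{
    // Look for "FFMPEG: YES" (or GStreamer) under the "Video I/O" section of this output.
    std::cout << cv::getBuildInformation() << std::endl;

    // List the video I/O backends that are actually registered at runtime.
    for (const auto &backend : cv::videoio_registry::getBackends())
    {
        std::cout << "Backend: " << cv::videoio_registry::getBackendName(backend) << "\n";
    }

    // Explicitly request the FFmpeg backend; if this also fails, the container's
    // OpenCV most likely lacks a file demuxer rather than the file being bad.
    cv::VideoCapture cap("/tmp/app/build/pedestrians_animals.mp4", cv::CAP_FFMPEG);
    std::cout << (cap.isOpened() ? "opened with CAP_FFMPEG\n" : "CAP_FFMPEG could not open the file\n");
    return 0;
}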

 

I have also tried to update ~/src/azure-percept-advanced-development/azureeyemodule/app/model/ssd.cpp: I added the following lines to read inputvideofile, which is set to "filepath/filename.mp4". No matter what, OpenCV just doesn't want to read the input video file.

 

Code snippet:

if (!this->inputvideofile.empty())
{
    // Specify the input video file as the input to the pipeline, and start processing
    pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>(inputvideofile));
}
else
{
    // Specify the Azure Percept's camera as the input to the pipeline, and start processing
    pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::mx::Camera>());
}

return pipeline;
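For reference, here is a self-contained sketch of just the file-backed branch, assuming cv::gapi::wip::GCaptureSource is the intended source type (it is the stock G-API wrapper around cv::VideoCapture that ships with OpenCV, so it depends on the same FFmpeg/GStreamer backends as the plain VideoCapture test above). The helper name below is hypothetical, and the camera branch would stay exactly as it is in ssd.cpp:

#include <string>
#include <opencv2/gapi/gstreaming.hpp>      // cv::GStreamingCompiled
#include <opencv2/gapi/streaming/cap.hpp>   // cv::gapi::wip::GCaptureSource

// Hypothetical helper: point an already-compiled streaming pipeline at a
// video file. GCaptureSource wraps cv::VideoCapture internally, so it can
// only open files that the underlying backend is able to demux.
static void use_file_source(cv::GStreamingCompiled &pipeline, const std::string &inputvideofile)
{
    pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>(inputvideofile));
}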

 

If this doesn't work on the Percept due to OpenCV/library issues, is there a way for the LVA Edge media graph to specify a video file as the input source instead of RTSP?

Hi @amitmarathe - thanks for this question. We're looking into this and will get back with a solution or any additional questions we have for you.

 

Thanks!

@amitmarathe If using Python is an option for you, you may want to look into the "opencv-python-headless" package, which ships prebuilt binaries for the aarch64 platform. Assuming you are running on the DK directly and not in a container, you can try the following:
 
1) Install pip3 with "sudo yum install pip3"
2) Install OpenCV (headless, so we don't pull in additional dependencies like libGL): "sudo pip3 install opencv-python-headless"
3) Prepare a Python script; this one simply counts the frames of a video:
 
import cv2

# Open the video file; adjust the path to your recording
cap = cv2.VideoCapture("/path/to/your_video_file.mp4")
if not cap.isOpened():
    raise SystemExit("Could not open file")

frame_count = 0
# Read until the video is completed
while cap.isOpened():
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret:
        frame_count += 1
    else:
        break

cap.release()
print(f"Frames: {frame_count}")

4) Run it, e.g. "sudo python3 video.py"

@amitmarathe - I know that you've got an active email thread with our engineers covering these questions, but let me know if there is anything that isn't being addressed. Love the work you are doing with the dev kit! :)

I tested PR 47 (https://github.com/microsoft/azure-percept-advanced-development/pull/47) and I am now able to pass in a pre-recorded video file.