PTZ_centering_app

 


Introduction


This explanation assumes that the i-PRO camera application development environment has already been set up.
If you have not built the development environment yet, please refer to here to complete it first.

Also, in this tutorial, the SDK installation directory is described as ${SDK_DIR}.

Operation overview


PTZ_centering_app is a sample application that moves the camera's viewpoint so that a detected object is centered in the image.

The PTZ_centering_app sample app only works with PTZ cameras.
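
As a rough illustration of the centering idea (not the actual implementation in the sample), the sketch below computes the center of a detected bounding box in normalized image coordinates and its offset from the frame center; cancelling that offset is what the PTZ move has to achieve. All names here are hypothetical.

    /* Illustrative sketch only: how far is a detection from the frame center?
       Coordinates are normalized to [0.0, 1.0]; all names are hypothetical. */
    typedef struct {
        float xmin, ymin, xmax, ymax;   /* normalized bounding box */
    } DetBox;

    static void box_center_offset(const DetBox *box, float *dx, float *dy)
    {
        float cx = (box->xmin + box->xmax) * 0.5f;   /* box center X */
        float cy = (box->ymin + box->ymax) * 0.5f;   /* box center Y */
        *dx = cx - 0.5f;   /* > 0 : object is to the right of the frame center */
        *dy = cy - 0.5f;   /* > 0 : object is below the frame center */
    }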

 

External libraries required for operation


To build with C/C++, the following external library is required:

libcurl

The use of external libraries will be explained later.
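
As a rough preview, libcurl is typically used in this kind of app to send HTTP requests to the camera itself (for example, to issue a PTZ control command using the CameraUser/CameraPassword preferences described in the Appendix). The fragment below is only a minimal sketch of that pattern; the CGI path shown is a placeholder, not the actual interface used by the sample.

    /* Minimal libcurl sketch: send an authenticated HTTP GET to the camera.
       "/cgi-bin/example_ptz_control" is a placeholder path, not the real CGI. */
    #include <stdio.h>
    #include <curl/curl.h>

    static int send_camera_request(const char *user, const char *pass)
    {
        CURL *curl = curl_easy_init();
        if (curl == NULL) {
            return -1;
        }

        char userpwd[128];
        snprintf(userpwd, sizeof(userpwd), "%s:%s", user, pass);

        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://127.0.0.1/cgi-bin/example_ptz_control");  /* placeholder CGI */
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
        curl_easy_setopt(curl, CURLOPT_USERPWD, userpwd);

        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return (res == CURLE_OK) ? 0 : -1;
    }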

 

Directory path of the sample app


The C/C++ source code is stored in the following location.

${SDK_DIR}/src/adamapp/PTZ_centering_app

There is no Python source code.

 

Use of AI model conversion tool


Normally, you need to use the AI model conversion tool before building a sample app.

However, the PTZ_centering_app sample application already includes the following files, so you can check its operation without using the AI model conversion tool.

[For ambaCV2X app]
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV2X/cnn/mobilenet_priorbox_fp32.bin
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV2X/cnn/mobilenetv1_ssd_cavalry.bin
[For ambaCV5X app]
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV5X/cnn/mobilenet_priorbox_fp32.bin
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV5X/cnn/mobilenetv1_ssd_cavalry.bin

Get the AI model conversion tool from the page below and set up its environment.

AI model convert tool - Technology Partner FAQ (En) - Confluence (atlassian.net)

After building the environment, refer to the following page and convert the MobileNet SSD sample model used by this application.

AI model convert tool: Tensorflow - Technology Partner FAQ (En) - Confluence (atlassian.net)

 

How to build the sample app (C/C++)


Place the converted mobilenet_priorbox_fp32.bin and mobilenetv1_ssd_cavalry.bin files in the sample app directory as follows.

[For ambaCV2X app]
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV2X/cnn/mobilenet_priorbox_fp32.bin
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV2X/cnn/mobilenetv1_ssd_cavalry.bin
[For ambaCV5X app]
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV5X/cnn/mobilenet_priorbox_fp32.bin
${SDK_DIR}/src/adamapp/PTZ_centering_app/data_CV5X/cnn/mobilenetv1_ssd_cavalry.bin

See here for building with C/C++.

 

How to use the sample app


When the camera detects an object, the viewpoint moves so that the detected object is centered. Try it yourself.

 

Appendix


How to change preferences

This application has some preferences which a user is able to change.
To change a preference, press the "AppPrefs" button in the "ADAM OPERATION UI" HTML page.

Resolution:
Resolution used to get YUV images. Specify HD (1280x720) or FHD (1920x1080).
However, depending on the capability of the camera, it may not work with the specified value.

Frame rate:
Frame rate used to get YUV images. Specify 1 or more.
However, depending on the capability of the camera, it may not work with the specified value.

CameraUser:
Camera login user.

CameraPassword:
Camera login password.

CenteringObjectName:
A list of object names to detect and center on (see the parsing sketch after this list).

CenteringSleepTime:
Interval time between centering operations.
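
For illustration, assuming CenteringObjectName arrives in the app as a single comma-separated string (an assumption made here for the example; check the sample source for the actual format), it could be split into individual object names like this:

    /* Illustrative only: split a comma-separated object-name preference string.
       The actual preference format used by the sample may differ. */
    #include <stdio.h>
    #include <string.h>

    static void parse_object_names(const char *pref)   /* e.g. "person,car,bicycle" */
    {
        char buf[256];
        strncpy(buf, pref, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        char *saveptr = NULL;
        for (char *name = strtok_r(buf, ",", &saveptr);
             name != NULL;
             name = strtok_r(NULL, ",", &saveptr)) {
            printf("centering target: %s\n", name);
        }
    }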

 

How to change AI model

  1. Please replace
    data_CV2X/cnn/mobilenetv1_ssd_cavalry.bin
    data_CV5X/cnn/mobilenetv1_ssd_cavalry.bin
    and
    data_CV2X/cnn/mobilenet_priorbox_fp32.bin
    data_CV5X/cnn/mobilenet_priorbox_fp32.bin
    with your model.

  2. Please change the following part of main.cpp according to your model.
    #define OUTSIZE_HEIGHT <Input height of your model>
    #define OUTSIZE_WIDTH <Input width of your model>

    #define NETNAME <File name of your model>
    #define PRIORBOXFILE <File name of prior box>
    #define LAYERNAMEIN <Input layer name of your model>
    #define LAYERNAMEOUT_MBOX_LOC <Output layer name of your model that indicates bounding box locations>
    #define LAYERNAMEOUT_MBOX_CONF_FLATTEN <Output layer name of your model that indicates bounding box confidences>
    #define PROPERTY_NUMCLASSES <Number of classes including the background label>
    #define PROPERTY_MBOXLOCSIZE <Number of bounding boxes * 4> : 4 means (x,y,w,h)
    #define PROPERTY_BACKGROUND_LABEL_ID <Background label id>

    The following parameters are valid only for TensorFlow SSD. For Caffe SSD, please set these values to 0.
    #define X_SCALE <X scale value>
    #define Y_SCALE <Y scale value>
    #define WIDTH_SCALE <Width scale value>
    #define HEIGHT_SCALE <Height scale value>

  3. Please edit the "objectname" table (the label-ID-to-object-name mapping) according to your model (a hypothetical example is shown below).
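
As a concrete, purely hypothetical example, a 300x300 TensorFlow SSD model with 91 classes might be described as follows. The values and names below are placeholders to show the format only; they are not the settings shipped with the sample.

    /* Hypothetical example values only; use the values that match your own model. */
    #define OUTSIZE_HEIGHT                 300
    #define OUTSIZE_WIDTH                  300

    #define NETNAME                        "my_ssd_cavalry.bin"
    #define PRIORBOXFILE                   "my_priorbox_fp32.bin"
    #define LAYERNAMEIN                    "data"
    #define LAYERNAMEOUT_MBOX_LOC          "mbox_loc"
    #define LAYERNAMEOUT_MBOX_CONF_FLATTEN "mbox_conf_flatten"
    #define PROPERTY_NUMCLASSES            91            /* including the background label */
    #define PROPERTY_MBOXLOCSIZE           (1917 * 4)    /* number of bounding boxes * 4 (x,y,w,h) */
    #define PROPERTY_BACKGROUND_LABEL_ID   0

    /* TensorFlow SSD only; set these to 0 for Caffe SSD. */
    #define X_SCALE                        10.0f
    #define Y_SCALE                        10.0f
    #define WIDTH_SCALE                    5.0f
    #define HEIGHT_SCALE                   5.0f

    /* Example of the "objectname" table (label ID to object name) for your model. */
    static const char *objectname[] = { "background", "person", "bicycle", "car" };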

 

Port in use

This application uses port 8081 for WebSocket communication.