yuv_pose_app

 


Introduction


This explanation assumes that the i-PRO camera application development environment has already been set up.
If you have not yet built the development environment, please refer to here to complete it first.

Also, in this tutorial the SDK installation directory is written as ${SDK_DIR}.

Operation overview


yuv_pose_app is a sample application that runs a pose-estimation model on the camera image and draws the detected skeleton over it.

 

External libraries required for operation


Building the Python version requires the following external libraries:

Numpy

OpenCV

The use of external libraries will be explained later.

 

Directory path of the sample app


There is no C/C++ source code for this sample.

The Python source code is stored in the following directory.

${SDK_DIR}/src/adamapp-py/yuv_pose_app

 

Use of AI model conversion tool


Before building the sample app, you need to use the AI model conversion tool.

Obtain the AI model conversion tool from the page below and set up its environment.

AI model convert tool - Technology Partner FAQ (En) - Confluence (atlassian.net)

It may take several days from the time you make an inquiry to the time it is provided.

After setting up the environment, refer to the following page and convert the sample model.

AI model convert tool: Tensorflow - Technology Partner FAQ (En) - Confluence (atlassian.net)

The setting.conf used for conversion is stored below.

${SDK_DIR}/src/adamapp-py/yuv_pose_app/setting.conf

In the rest of this tutorial, the converted model file is referred to as "tf_pose_estimation_cavalry.bin".

 

How to build the sample app (Python)


This section describes how to build the sample as an AdamApp.
If you want to build it as Container AdamApp for Azure IoT Edge, see below.

Development tutorial (Container AdamApp for Azure IoT Edge) - Technology Partner FAQ (En) - Confluence (atlassian.net)

If you want to build it as Container AdamApp, see below.

Development tutorial (Container AdamApp) - Technology Partner FAQ (En) - Confluence

Place the converted tf_pose_estimation_cavalry.bin file in the sample app directory as follows.

[For ambaCV2X app]
${SDK_DIR}/src/adamapp-py/yuv_pose_app/data_CV2X/cnn/tf_pose_estimation_cavalry.bin
[For ambaCV5X app]
${SDK_DIR}/src/adamapp-py/yuv_pose_app/data_CV5X/cnn/tf_pose_estimation_cavalry.bin
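Before building, it can help to check that the converted .bin file is in place for both SoC variants. A minimal sketch of such a check; the helper names are hypothetical and not part of the SDK, and app_dir should point at ${SDK_DIR}/src/adamapp-py/yuv_pose_app in practice:

```python
import os

def model_paths(app_dir, model_name="tf_pose_estimation_cavalry.bin"):
    """Expected location of the converted model for each SoC variant."""
    return [os.path.join(app_dir, soc_dir, "cnn", model_name)
            for soc_dir in ("data_CV2X", "data_CV5X")]

def missing_models(app_dir):
    """Return the expected model paths that do not exist yet."""
    return [p for p in model_paths(app_dir) if not os.path.isfile(p)]
```

Running missing_models() on the app directory before the build flags any variant whose cnn folder still lacks the model file.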

 

See here for building with Python.

If the camera image is displayed, the build was successful.

 

How to use the sample app


Point the camera at a person and check that the estimated skeleton is drawn over the camera image.

 

Appendix


How to change preferences

This application has some preferences which the user can change.
To change a preference, push the "AppPrefs" button in the "ADAM OPERATION UI" HTML page.

Resolution:
Resolution of the YUV images to acquire. Specify HD (1280x720) or FHD (1920x1080).
However, depending on the capability of the camera, the specified value may not be applied.

Frame rate:
Frame rate of the YUV images to acquire. Specify 1 or more.
However, depending on the capability of the camera, the specified value may not be applied.

 

How to change AI model

The pose estimation model used is the following.
GitHub - ZheC/tf-pose-estimation: Openpose from CMU implemented using Tensorflow with Custom Architecture for fast inference.

  1. Please download the pb file.
    https://github.com/ZheC/tf-pose-estimation/blob/master/models/graph/mobilenet_thin_432x368/graph_freeze.pb

  2. Convert it with the CV tool.
    Please use the setting.conf in the folder for the conversion.
    Please refer to the following pages for how to obtain the CV tool.
    Ja: https://dev-partner.i-pro.com/space/TPFAQ/961545224
    En: https://dev-partner-en.i-pro.com/space/TPFAQEN/967804089

  3. Place the converted model file (.bin) in the
    data_CV2X/cnn
    and
    data_CV5X/cnn
    folder.