Table of contents

...


Introduction

...

This explanation assumes that the i-PRO camera application development environment has already been set up.
If you have not yet built the development environment, please refer to here to complete it first.

Also, in this tutorial, the SDK installation directory is described as ${SDK_DIR}.

Operation overview

...

yuv_pose_app is a sample application that runs on the camera and draws the skeleton estimated by the AI model over the camera image.
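As a rough illustration of the drawing step, the sketch below overlays skeleton lines onto an image buffer with NumPy. The keypoints, bone pairs, and color are invented for the demo and are not the app's actual model output.

```python
import numpy as np

# Hypothetical keypoints (x, y) as a pose-estimation model might output;
# the bone pairs below are an illustrative subset of a real skeleton.
KEYPOINTS = {"nose": (64, 20), "neck": (64, 40),
             "r_shoulder": (48, 44), "l_shoulder": (80, 44)}
BONES = [("nose", "neck"), ("neck", "r_shoulder"), ("neck", "l_shoulder")]

def draw_skeleton(img, keypoints, bones):
    """Draw each bone as a straight line of green pixels (in place)."""
    for a, b in bones:
        (x0, y0), (x1, y1) = keypoints[a], keypoints[b]
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1  # one sample per pixel step
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        img[ys, xs] = (0, 255, 0)  # green skeleton overlay
    return img

frame = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for a camera frame
draw_skeleton(frame, KEYPOINTS, BONES)
```

In the real app the frame would come from the camera's YUV image stream rather than a zero-filled buffer.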

External libraries required for operation

...

To build with Python, you need the following:

...

Info

The use of external libraries will be explained later.

Directory path of the sample app

...

There is no C/C++ source code.

The Python source code is stored in the following location:

${SDK_DIR}/src/adamapp-py/yuv_pose_app

Use of AI model conversion tool

...

Before building the sample app, you need to use the AI model conversion tool.

Get the AI model conversion tool from the link below and set up its environment.

AI model convert tool - Technology Partner FAQ (En) - Confluence (atlassian.net)

Info

It may take several days from the time you make an inquiry until the tool is provided.

After setting up the environment, refer to the following page to convert the sample model.

AI model convert tool: Tensorflow - Technology Partner FAQ (En) - Confluence (atlassian.net)

The setting.conf used for the conversion is stored in the following location:

...

In the following explanation, the converted model file is referred to as "tf_pose_estimation_cavalry.bin".

How to build the sample app (Python)

...

Info

This article describes how to build it as AdamApp.
If you want to build it as a Container AdamApp for Azure IoT Edge, see below.

Development tutorial (Container AdamApp for Azure IoT Edge) - Technology Partner FAQ (En) - Confluence (atlassian.net)

Place the converted model file tf_pose_estimation_cavalry.bin in the sample app directory as shown below.

[For ambaCV2X app]
${SDK_DIR}/src/adamapp-py/yuv_pose_app/data_CV2X/cnn/tf_pose_estimation_cavalry.bin
[For ambaCV5X app]
${SDK_DIR}/src/adamapp-py/yuv_pose_app/data_CV5X/cnn/tf_pose_estimation_cavalry.bin
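The placement above can be scripted. The snippet below is a sketch that copies the converted .bin into both platform folders; the /tmp/sdk_demo path and the placeholder file are demo assumptions so it runs standalone — point SDK_DIR at your real SDK install and use your real converted model.

```shell
# Hypothetical demo path; replace with your actual SDK install directory.
SDK_DIR=/tmp/sdk_demo
MODEL=tf_pose_estimation_cavalry.bin

# Placeholder so the demo runs standalone; use your real converted model file.
[ -f "$MODEL" ] || touch "$MODEL"

# Copy the model into both the ambaCV2X and ambaCV5X data folders.
for plat in CV2X CV5X; do
  dest="${SDK_DIR}/src/adamapp-py/yuv_pose_app/data_${plat}/cnn"
  mkdir -p "$dest"
  cp "$MODEL" "$dest/"
done
```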

...

If the camera image is displayed, the app is running successfully.

How to use the sample app

...

Check the camera image with a person in view.

...

Info

Mosaic processing is applied around the face.

Appendix

...

How to change preferences

This application has some preferences that a user can change.
To change a preference, press the "AppPrefs" button on the "ADAM OPERATION UI" HTML page.

Resolution:
The resolution used to get YUV images. Specify HD (1280x720) or FHD (1920x1080).
However, depending on the camera's capabilities, the specified value may not be applied.

Frame rate:
The frame rate used to get YUV images. Specify 1 or more.
However, depending on the camera's capabilities, the specified value may not be applied.
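The two preferences above could be validated as sketched below; the function name, defaults, and error handling are illustrative assumptions, not the app's actual preference API.

```python
# Allowed resolutions as described in the preferences section.
VALID_RESOLUTIONS = {"HD": (1280, 720), "FHD": (1920, 1080)}

def parse_prefs(resolution="HD", frame_rate="1"):
    """Validate the Resolution and Frame rate preference strings.

    Returns ((width, height), fps) or raises ValueError for bad input.
    """
    if resolution not in VALID_RESOLUTIONS:
        raise ValueError(f"Resolution must be one of {sorted(VALID_RESOLUTIONS)}")
    fps = int(frame_rate)
    if fps < 1:
        raise ValueError("Frame rate must be 1 or more")
    return VALID_RESOLUTIONS[resolution], fps

print(parse_prefs("FHD", "15"))  # ((1920, 1080), 15)
```

Note that, as the text says, even a valid value may not be applied if the camera itself cannot deliver it.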

 

How to change AI model

The pose estimation model used is the following:
https://github.com/ZheC/tf-pose-estimation

  1. Please download the pb file.
    https://github.com/ZheC/tf-pose-estimation/blob/master/models/graph/mobilenet_thin_432x368/graph_freeze.pb

  2. Use the CV tool to convert it.
    Please use the setting.conf in the folder for the conversion.
    Refer to the following for how to obtain the CV tool.
    Ja: https://dev-partner.i-pro.com/space/TPFAQ/961545224
    En: https://dev-partner-en.i-pro.com/space/TPFAQEN/967804089

  3. Place the converted model file (.bin) in the
    data_CV2X/cnn
    and
    data_CV5X/cnn
    folders.
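The steps above can be double-checked with a short script. The app directory and model file name below are placeholders created for the demo, not the real SDK layout; substitute your ${SDK_DIR}/src/adamapp-py/yuv_pose_app path and the .bin name produced by your conversion.

```python
from pathlib import Path

# Hypothetical locations for illustration only.
app_dir = Path("/tmp/yuv_pose_app_demo")
model_name = "my_model_cavalry.bin"  # assumed name of your converted model

# Recreate the expected folder layout with a placeholder file.
for plat in ("data_CV2X", "data_CV5X"):
    cnn = app_dir / plat / "cnn"
    cnn.mkdir(parents=True, exist_ok=True)
    (cnn / model_name).write_bytes(b"placeholder")

# Step 3 is satisfied when the .bin exists in both platform folders.
ok = all((app_dir / plat / "cnn" / model_name).is_file()
         for plat in ("data_CV2X", "data_CV5X"))
print(ok)  # True
```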