...
Preparation
...
Please refer here for the environment setup procedure.
...
The converted model is output to the following directories.
For ambaCV2X cameras: ${OUTPUT_DIR}/${NET_NAME}/${PARSER_OPTION}/[model name]
For ambaCV5X cameras: ${OUTPUT_DIR}/${NET_NAME}_ambaCV5X/${PARSER_OPTION}/[model name]
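For example, with the sample settings shown below (NET_NAME=mobilenet_v2, PARSER_OPTION=MIX, OUTPUT_DIR=./out), an ambaCV2X model is placed under ./out/mobilenet_v2/MIX/[model name] and an ambaCV5X model under ./out/mobilenet_v2_ambaCV5X/MIX/[model name].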
Note:
The PyTorch model must be converted to ONNX in advance (a minimal export sketch is shown below).
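The following is a minimal sketch of a PyTorch-to-ONNX export. It assumes a torchvision MobileNetV2 with a 224x224 NCHW input (matching the sample settings later on this page); swap in your own model, trained weights, input size and file names.
Code Block:
import torch
import torchvision

# Build the model to export (load your trained weights here as needed)
model = torchvision.models.mobilenet_v2()
model.eval()

# Dummy input in NCHW layout, matching IN_DATA_CHANNEL/WIDTH/HEIGHT
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",      # place the exported file under MODEL_DIR
    input_names=["input"],
    output_names=["output"],  # output tensor name in the ONNX graph
    opset_version=11,
)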
...
Code Block:
# Network Name
NET_NAME=mobilenet_v2
# SSD Model or Not (0:not SSD, 1:SSD)
IS_SSD=0
# Path to Directory for ONNX Models
MODEL_DIR=./sample/mobilenet_v2/models
# Path to Directory for DRA Images
DRA_IMAGE_DIR=../../dra_img
# Path to Directory for Output Data
OUTPUT_DIR=./out
# Quantization Mode
# FIX8  : Fixed-point 8bit
# FIX16 : Fixed-point 16bit
# MIX   : FIX8/FIX16 mixed
PARSER_OPTION=MIX
# Input Data Format (0:NHWC, 1:NCHW)
IN_DATA_FORMAT=1
# Input Data Channel
IN_DATA_CHANNEL=3
# Input Data Width
IN_DATA_WIDTH=224
# Input Data Height
IN_DATA_HEIGHT=224
# Input Data Mean Vector
IN_MEAN=103.94,116.78,123.68
# Input Data Scale
# IN_SCALE=1/Scale
IN_SCALE=58.823529411
# RGB or BGR (0:RGB, 1:BGR)
IS_BGR=1
# Output Layers Name
OUT_LAYER=mobilenetv20_output_flatten0_reshape0
# cavalry version
# if not specified -> ""
CAVALRY_VER="2.1.7"
# Unique preprocess
# if use im2bin -> NONE
# if use unique preprocess -> script path
PREPRO=NONE
PREPRO_ARG=""
# Input file data format
IN_DATA_FILEFORMAT=0,0,0,0
# Transpose indices (NONE: without transpose, 0,3,1,2: transpose (EX))
IN_DATA_TRANSPOSE=NONE
NET_NAME: The name of the network
Any name can be set.
IS_SSD: Not used.
MODEL_DIR: Path to the directory that contains the .onnx files
All .onnx files under the directory are converted.
DRA_IMAGE_DIR: Path to the directory that contains image files used to optimize quantization
Place image files used for training in this directory. The recommended number of image files is 100 to 200.
The image file format must be one supported by OpenCV (e.g. JPEG, PNG, etc.).
Any resolution is supported.
OUTPUT_DIR: Path to the directory in which the converted data is placed
PARSER_OPTION: Quantization mode
Select from FIX8/FIX16/MIX (FIX8/FIX16 mixed).
IN_DATA_CHANNEL: Number of input image channels for the target model
IN_DATA_WIDTH: Width of the input image for the target model
IN_DATA_HEIGHT: Height of the input image for the target model
IN_MEAN: Normalization parameter (mean) of the input image
When specifying numerical values, do not put spaces around the “,” separators, as shown below.
IN_MEAN=127.5,127.5,127.5
IN_SCALE: Normalization parameter (scale) of the input image
When setting a different value for each channel, separate the values with “,” only and do not put spaces around it (a worked illustration of IN_MEAN/IN_SCALE is shown after this list).
IS_BGR: Format of input image (RGB or BGR)
OUT_LAYER: The name of the output layer of the target network
If there are two or more output layers, separate them with “,”.
If the name of an input/output node contains any of the following symbols, the conversion may fail.
| ; , ‘
PRIORBOX_NODE: Node equivalent to “priorbox”
Must be set when IS_SSD=1
CAVALRY_VER: Version of cavalry to use
PREPRO: Path of the preprocessing script (Python script); a hypothetical sketch is shown after this list.
Refer to “/home/cvtool/common/prepro.py” for how to create such a script.
PREPRO_ARG: Arguments of the preprocessing script (Python script)
IN_DATA_FILEFORMAT: Input file data format
Examples: uint8 -> 0,0,0,0, float32 -> 1,2,0,7, float16 -> 1,1,0,4
If IN_DATA_FILEFORMAT is changed from “0,0,0,0”, PREPRO must be set.
IN_DATA_TRANSPOSE: Specify this when the input data needs to be transposed (see the illustration after this list).
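The snippet below is only a rough illustration of what the IN_MEAN, IN_SCALE and IN_DATA_TRANSPOSE settings describe, based on the sample values and the comment IN_SCALE=1/Scale; it is an assumption about the arithmetic, and the actual preprocessing is performed by im2bin or the PREPRO script, not by this code.
Code Block:
import numpy as np

# Sample settings from the configuration above
in_mean = np.array([103.94, 116.78, 123.68], dtype=np.float32)  # IN_MEAN
in_scale = 58.823529411                                         # IN_SCALE = 1/Scale

# Dummy NHWC image batch (1 x 224 x 224 x 3)
img_nhwc = np.random.randint(0, 256, (1, 224, 224, 3)).astype(np.float32)

# Assumed normalization: subtract the mean, then divide by IN_SCALE
normalized = (img_nhwc - in_mean) / in_scale

# IN_DATA_TRANSPOSE=0,3,1,2 reorders NHWC axes (0,1,2,3) into NCHW
img_nchw = normalized.transpose(0, 3, 1, 2)
print(img_nchw.shape)  # (1, 3, 224, 224)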
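The following is a purely hypothetical outline of what a PREPRO script might do (load an image with OpenCV, normalize it, and write a raw float32 binary). The argument handling and output format expected by the conversion tool are not specified here, so follow “/home/cvtool/common/prepro.py” for the actual interface.
Code Block:
#!/usr/bin/env python3
# Hypothetical PREPRO-style preprocessing sketch; the real interface must
# follow /home/cvtool/common/prepro.py.
import argparse

import cv2
import numpy as np


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="input image file")
    parser.add_argument("--output", required=True, help="output binary file")
    args = parser.parse_args()

    img = cv2.imread(args.input)                    # OpenCV loads images as BGR
    img = cv2.resize(img, (224, 224)).astype(np.float32)
    mean = np.array([103.94, 116.78, 123.68], dtype=np.float32)
    img = (img - mean) / 58.823529411               # same normalization as above
    img.tofile(args.output)                         # raw float32 data


if __name__ == "__main__":
    main()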
Note:
If the model contains a node that is not a constant, it cannot be converted normally. In that case, convert the model with the following command and perform quantization on the converted model.
-m : model before conversion
-o : model after conversion
Code Block:
$ graph_surgery.py onnx -m mobilenetv210.onnx -o mobilenetv210_mod.onnx -t Default
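After running graph_surgery.py, use the converted model for quantization, for example by placing mobilenetv210_mod.onnx under MODEL_DIR in place of the original file.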
Info:
CAVALRY_VER is "2.1.7" for ambaCV2X, but please set "2.2.8.2" for ambaCV5X.
Convert sample models
...
mobilenetv2
...