Preparation
Please refer here for the environment setup procedure.
Convert sample models
You can understand the flow of the conversion process by converting a sample model. If you are converting a model that you have developed, you can skip this step.
mobilenetv2
Move to the directory.
$ cd /home/cvtool/conversion/onnx/mobilenetv2
Convert the model.
$ ./onnx_conversion.sh setting.conf
If an error occurs, please check the environment settings (source setup_env.sh) described in the page below. You will need to do this every time you restart your computer.
Install AI model convert tool - Technology Partner FAQ (En) - Confluence
yolov5 / yolov8
Download the sample model.
(yolov5)
$ cd /home/cvtool/conversion/onnx/yolov5
$ ./setup_yolov5.sh

(yolov8)
$ cd /home/cvtool/conversion/onnx/yolov8
$ ./setup_yolov8.sh
The files to be downloaded are several GB in size. Please make sure you have enough free disk space before proceeding.
Convert the model.
$ ./onnx_conversion.sh setting.conf
If an error occurs, please check the environment settings (source setup_env.sh) described in the page below. You will need to do this every time you restart your computer.
Install AI model convert tool - Technology Partner FAQ (En) - Confluence
Depending on your PC environment, the conversion may require up to 8 GB of memory (RAM). If it fails, please check your memory capacity.
The converted model is output to the following directory.
For ambaCV2X camera
${OUTPUT_DIR}/${NET_NAME}/${PARSER_OPTION}/[model name]
For ambaCV5X camera
${OUTPUT_DIR}/${NET_NAME}_ambaCV5X/${PARSER_OPTION}/[model name]
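For example, with the sample mobilenetv2 configuration shown in the setting.conf section below (OUTPUT_DIR=./out, NET_NAME=mobilenet_v2, PARSER_OPTION=MIX), the converted model is placed in the following directories.
For ambaCV2X camera: ./out/mobilenet_v2/MIX/[model name]
For ambaCV5X camera: ./out/mobilenet_v2_ambaCV5X/MIX/[model name]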
Convert user AI model
A PyTorch model needs to be converted to ONNX in advance (see the sketch below).
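The following is a minimal sketch of such an export with torch.onnx.export, using torchvision's mobilenet_v2 as a stand-in for a user model; the input shape, file name, and opset version are assumptions and should be adjusted to your own model.

import torch
import torchvision

# Stand-in for a user model; replace with your own trained network and weights.
model = torchvision.models.mobilenet_v2(weights=None)
model.eval()

# Dummy input with a fixed batch size and resolution (1x3x224x224 is an assumption).
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX. The output file name and opset version are assumptions; adjust as needed.
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)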
Copy one of the directories under /home/cvtool/conversion/onnx.
$ cd /home/cvtool/conversion/onnx
$ cp -r mobilenetv2 foo
$ cd foo
Change the parameters in "setting.conf" according to the model to be converted.
Place the user AI model in the directory specified by "MODEL_DIR" in "setting.conf", for example as shown below.
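For example, assuming MODEL_DIR is left at the sample value ./sample/mobilenet_v2/models and your model file is named my_model.onnx (both hypothetical for your setup):
$ cp /path/to/my_model.onnx ./sample/mobilenet_v2/models/
Note that all .onnx files under MODEL_DIR are converted, so remove any sample .onnx files you do not need.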
Convert the model.
$ ./onnx_conversion.sh setting.conf
If an error occurs, please check the environment settings (source setup_env.sh) described in the page below. You will need to do this every time you restart your computer.
Install AI model convert tool - Technology Partner FAQ (En) - Confluence
setting.conf
From v1.20, the parameter "CAVALRY_VER", which existed in setting.conf until v1.19, has been removed.
If you use a setting.conf from v1.19 or earlier, please remove "CAVALRY_VER" from it.
# Network Name
NET_NAME=mobilenet_v2
# Path to Directory for ONNX Models
MODEL_DIR=./sample/mobilenet_v2/models
# Path to Directory for DRA Images
DRA_IMAGE_DIR=../../dra_img
# Path to Directory for Output Data
OUTPUT_DIR=./out
# Quantization Mode
#   FIX8  : Fixed-point 8bit
#   FIX16 : Fixed-point 16bit
#   MIX   : FIX8/FIX16 mixed
PARSER_OPTION=MIX
# Input Data Format (0:NHWC, 1:NCHW)
IN_DATA_FORMAT=1
# Input Data Channel
IN_DATA_CHANNEL=3
# Input Data Width
IN_DATA_WIDTH=224
# Input Data Height
IN_DATA_HEIGHT=224
# Input Data Mean Vector
IN_MEAN=103.94,116.78,123.68
# Input Data Scale
#   IN_SCALE=1/Scale
IN_SCALE=58.823529411
# RGB or BGR (0:RGB, 1:BGR)
IS_BGR=1
# Output Layers Name
OUT_LAYER=mobilenetv20_output_flatten0_reshape0
# Unique preprocess
#   if use im2bin -> NONE
#   if use unique preprocess -> script path
PREPRO=NONE
PREPRO_ARG=""
# Input file data format
IN_DATA_FILEFORMAT=0,0,0,0
# Transpose indices (NONE: without transpose, 0,3,1,2: transpose (EX))
IN_DATA_TRANSPOSE=NONE
NET_NAME: The name of the network
Any name can be set.
MODEL_DIR: Path to the directory which includes .onnx files
All .onnx files under the directory are converted.
DRA_IMAGE_DIR: Path to the directory which includes image files for optimizing quantization
Place image files from the training dataset in this directory. The recommended number of image files is 100 to 200.
The image file format should be one supported by OpenCV (e.g. JPEG, PNG, etc.).
Any resolution is supported.
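As a quick check, you can count the image files in the DRA image directory from the conversion directory (assuming the sample value DRA_IMAGE_DIR=../../dra_img):
$ ls ../../dra_img | wc -l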
OUTPUT_DIR: Path to the directory in which the converted data is placed
PARSER_OPTION: Quantization mode
Select from FIX8/FIX16/MIX (FIX8/FIX16 mixed).
IN_DATA_CHANNEL: Number of input image channels for the target model
IN_DATA_WIDTH: Width of the input image for the target model
IN_DATA_HEIGHT: Height of the input image for the target model
IN_MEAN: Normalization parameter (mean) of input image
Do not put spaces around the "," separators, as shown below.
IN_MEAN=127.5,127.5,127.5
IN_SCALE: Normalization parameter (scale) of input image
When setting a different value for each channel, separate the values only with "," and do not put spaces around the separators (see the example below).
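For example, with hypothetical per-channel values:
IN_SCALE=58.40,57.12,57.38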
IS_BGR: Format of input image (RGB or BGR)
OUT_LAYER: The name of the output layer for the target network
If two or more layers exist, separate them with "," (see the example below).
If any of the following symbols is contained in the name of an input/output node, the conversion may not succeed.
: | ; , '
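For example, for a network with three output layers (hypothetical node names):
OUT_LAYER=output0,output1,output2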
PRIORBOX_NODE: Node equivalent to “priorbox”
Needs to be set when IS_SSD=1.
PREPRO: Path to the preprocessing script (Python script)
Refer to "/home/cvtool/common/prepro.py" for how to create a script.
PREPRO_ARG: Argument of preprocessing script (python script)
IN_DATA_FILEFORMAT: Input data format
Examples : uint8->0,0,0,0, float32->1,2,0,7, float16->1,1,0,4
If IN_DATA_FILEFORMAT is set to a value other than "0,0,0,0", PREPRO must also be set.
IN_DATA_TRANSPOSE: Specify this when a transpose should be applied to the input data (see the example below)
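For example, to transpose NHWC input data to NCHW, as noted in the comment in the sample setting.conf:
IN_DATA_TRANSPOSE=0,3,1,2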
If a conversion error occurs
Modifying the model with graph_surgery.py, which is included in cvtool, may resolve the problem.
Please refer here for details about graph_surgery.py.
The model includes a node that is not supported by cvtool
Run the following command.
$ graph_surgery.py onnx -m (model name before modification) -o (model name after modification) -t CVFlow
Please refer here for the list of supported nodes.
You can check whether the model includes unsupported nodes by using onnx_print_graph_summary.py in cvtool.
$ onnx_print_graph_summary.py -p (model name)
The input of the model has a variable (dynamic) shape
Replace it with a fixed shape using the following command.
$ graph_surgery.py onnx -m (model name before modification) -o (model name after modification) -isrc "i:input|is:1,3,960,960" -t SetIOShapes
The input shape of the model is NHWC and is transposed to NCHW at the beginning of the model
Cut off the graph from the beginning of the model up to the "Transpose" node.
$ graph_surgery.py onnx -m (model name before modification) -o (model name after modification) -isrc "i:(output name of Transpose node)|is:1,3,224,224" -t CutGraph
An unsupported character ( : | ; , ' ) is included in OUT_LAYER
Rename the node using the following command.
$ graph_surgery.py onnx -m (model name before modification) -o (model name after modification) -t "RenameTensors(original_name=new_name)"
If multiple nodes are specified, please separate them with ",".
-t "RenameTensors(node::1=node1,node::2=node2)"
The model has nodes of rank > 4
Conversion may not be possible due to SoC constraints.
You can check whether the model has such nodes by using onnx_print_graph_summary.py. The following message is output when such nodes are found.
$ onnx_print_graph_summary.py -p (model name)
INFO: 08/29/2024 03:22:40.725761 onnx_print_graph_summary.py:384 [PrintGraphSummary] Unsupported tensors with rank > 4 (3):
INFO: 08/29/2024 03:22:40.725837 onnx_print_graph_summary.py:386 [PrintGraphSummary] -> 'output' [1 x 3 x 52 x 52 x 85]
INFO: 08/29/2024 03:22:40.725909 onnx_print_graph_summary.py:386 [PrintGraphSummary] -> '1026' [1 x 3 x 26 x 26 x 85]
INFO: 08/29/2024 03:22:40.725993 onnx_print_graph_summary.py:386 [PrintGraphSummary] -> '1046' [1 x 3 x 13 x 13 x 85]
Please modify the model in one of the following ways.
・Modify the nodes before exporting to ONNX.
・Cut off the part of the model from the rank > 4 node onward.
$ graph_surgery.py onnx -m (model name before modification) -o (model name after modification) -on (node name before the node of rank > 4) -t CutGraph
・Replace the node with one of rank <= 4.
$ graph_surgery.py onnx -m (model name before modification) -o (model name after modification) -t ReplaceSubgraph
※If you wish to use “ReplaceSubgraph”, please contact us.
Even if a node has rank = 4, conversion may not be possible when the number of elements in the first dimension of the node is not 1. Please take the same action as in the case of rank > 4.