Accuracy of the converted AI model

Question

The accuracy of the converted AI model is lower than before conversion. Is there a way to improve the accuracy?


Answer

Since the AI model is quantized to 8-bit or 16-bit precision during conversion, its accuracy may be lower than before conversion. (8-bit quantization represents each value with only 256 levels, whereas 16-bit uses 65,536, which is why the 16-bit settings below generally preserve accuracy better.)

Please try one of the following methods to reduce the loss of accuracy. Note that the processing time for inference will tend to increase.

  1. Set PARSER_OPTION (quantization mode) in setting.conf to FIX16 (see the sketch after this list).

  2. When PARSER_OPTION is MIX, the ratio of 8-bit to 16-bit quantization can be set.
    The conversion script (such as onnx_conversion.sh) contains code that references PARSER_OPTION.
    Please add the two lines of the MIX branch (lines 5 and 6 in the snippet below) to that code.

    if [ ${PARSER_OPTION} == "FIX8" ]; then
        PARSER_OPT="-c act-force-fx8,coeff-force-fx8"
    elif [ ${PARSER_OPTION} == "FIX16" ]; then
        PARSER_OPT="-c act-force-fx16,coeff-force-fx16"
    elif [ ${PARSER_OPTION} == "MIX" ]; then
        PARSER_OPT="-dra mode=2,coverage_th=0.90"
    fi

    “coverage_th” specifies the ratio of FIX8 to FIX16. The default value is 0.90. The closer the value is to 0, the higher the ratio of FIX8; the closer to 1, the higher the ratio of FIX16.
    Specifying a value greater than the default 0.90 will therefore increase the accuracy (see the sketch after this list).
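
As a reference, here is a minimal sketch of both adjustments. The setting.conf line assumes a shell-style KEY=VALUE syntax, so please verify it against your actual setting.conf; the coverage_th line simply takes the MIX branch value from the snippet above and raises it from the default 0.90 to 0.95 as an example.

    # Method 1: assumed setting.conf entry (verify the exact syntax in your setting.conf)
    PARSER_OPTION=FIX16

    # Method 2: in the MIX branch of the conversion script, raise coverage_th
    # above the default 0.90 (here 0.95) so that more layers are quantized to 16 bit
    PARSER_OPT="-dra mode=2,coverage_th=0.95"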

Please also check the following:

  • The pixel order (BGR/RGB) of the input to the AI model matches the order used during training.

  • Inference and post-processing complete within the expected time; otherwise the inference result may not be obtained correctly.

  • There are no differences between the PC software and the camera software that could affect accuracy (especially in the post-processing of inference results).