
Command Line Options

-m, --model (Input model)
(Required) Path to the input model.

-o, --outputproto (Output model)
(Optional) Path to the output model. Defaults to {basename}_modified.onnx.

-isrc, --inputsources (Input tensor attribute string: tensor name (i) and shape (is))
(Optional) Specifies an input tensor shape override. Use multiple -isrc entries for multiple input tensors.
Example: -isrc "i:input0|is:1,3,224,224"

-on, --outputtensors (Output tensor names, comma-separated)
(Optional) Specifies multiple output tensors.
Example: -on output0,output1

-odst, --odest (Output tensor attribute string: tensor name (o) and shape (os))
(Optional) Specifies an output tensor shape override. Use multiple -odst entries for multiple output tensors.
Example: -odst "o:output0|os:1,1000"

-t, --transforms (Transforms)
Comma-separated string denoting the order of transforms to apply to the ONNX graph. Available options include the custom transforms listed under Custom Transforms below, as well as all built-in onnx.optimizer transforms; see https://github.com/onnx/onnx/tree/master/onnx/optimizer/passes.
Example: -t Default

-l, --log_dir (Log directory)
(Optional) Directory in which to save log files.

-d, --debug (Logging verbosity level)
(Optional) Sets the logging verbosity. Integer value in the range [0, 3], where 0 is ERROR and 3 is DEBUG.
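
Taken together, a typical invocation names the input model, an output path, and the transforms to run. The following line is illustrative only; the model, output, and log-directory names are placeholders:

Usage: $ graph_surgery.py onnx -m model.onnx -o model_modified.onnx -l logs -d 2 -t Default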

Custom Transforms

Conversions are applied to the original network in the order specified. Each entry below notes whether the transform is lossless.

Default (Lossless: Yes)
Applies a combination of built-in and custom optimization passes to optimize the network for CVFlow compilation. The following transform passes are applied (in order):
  1. ConstantifyShapes
  2. FoldConstants
  3. FoldNoOps
Usage: $ graph_surgery.py onnx -t Default

BatchToChannel / Constrain3DConv (Lossless: Yes)
Folds the batch dimension into the channel dimension for the inputs of Conv/ConvTranspose operators with batch size greater than 1. Also takes care of the dimension updates for the corresponding weight/bias tensors, as well as for associated BatchNormalization operators.
Usage: $ graph_surgery.py onnx -t BatchToChannel

ClusterVPEligible (Lossless: No)
TBD

ConstantifyShapes (Lossless: Yes)
Uses Monte Carlo sampling to convert eligible Shape and ConstantOfShape operators to constants. Typically used in conjunction with the FoldConstants transform.
Usage: $ graph_surgery.py onnx -t ConstantifyShapes,FoldConstants
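
The idea behind this pair of transforms can be illustrated outside the tool: once an input's shape is static, any value derived purely from that shape is known ahead of time. The NumPy sketch below is a conceptual illustration only (shapes are examples), not the transform's implementation:

import numpy as np

# Once the input shape is static, the value a Shape node would produce is
# a compile-time constant, so nodes that depend only on it can be folded.
x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # statically-shaped input
shape = np.array(x.shape, dtype=np.int64)         # what a Shape node would emit
target = np.concatenate([shape[:1], [-1]])        # e.g. flatten to (N, C*H*W)
# 'target' is fully determined ahead of time, so the Shape/Concat chain that
# produces it could be replaced by a single constant initializer.
y = x.reshape(target)
assert y.shape == (1, 3 * 224 * 224)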

Constrain2DSoftmax (Lossless: Yes)
Reshapes the inputs of Softmax nodes with rank > 2 to the coerced 2D shape.
Usage: $ graph_surgery.py onnx -t Constrain2DSoftmax
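
The reshape is value-preserving when softmax is taken over the last axis, which the following NumPy check illustrates (a conceptual sketch only, not the transform itself; shapes are examples):

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

x = np.random.rand(2, 3, 4).astype(np.float32)
direct = softmax(x)                                  # rank-3 softmax over the last axis
via_2d = softmax(x.reshape(6, 4)).reshape(2, 3, 4)   # coerced 2D shape, then restored
assert np.allclose(direct, via_2d)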

Constrain2DMatMul (Lossless: Yes)
Optimizes the model to favor performing MatMul operations on tensors whose significant rank is no greater than 2.
Usage: $ graph_surgery.py onnx -t Constrain2DMatMul

CutGraph (Lossless: No)
Splits and returns the subgraph bounded by user-specified input and output tensors.
Usage: $ graph_surgery.py onnx -isrc "i:input0" -isrc "i:input1" -on output0,output1 -t CutGraph
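
For comparison only, the onnx package ships a standalone utility that performs a similar cut between named input and output tensors. This is not graph_surgery.py; the paths and tensor names below are placeholders:

import onnx.utils

# Extracts the subgraph bounded by the named tensors, similar in spirit
# to the CutGraph transform.
onnx.utils.extract_model(
    input_path="model.onnx",
    output_path="model_cut.onnx",
    input_names=["input0", "input1"],
    output_names=["output0", "output1"],
)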

FlattenIO (Lossless: No)
Flattens statically-shaped primary input/output tensors.
Usage: $ graph_surgery.py onnx -t FlattenIO

FoldConstants (Lossless: No)
Folds sections of the network which evaluate to constant values. Typically used in conjunction with the ConstantifyShapes transform.
Usage: $ graph_surgery.py onnx -t ConstantifyShapes,FoldConstants

FoldNoOps (Lossless: Yes)
Folds no-op operators by applying a combination of built-in and custom optimization passes. The following transform passes are applied (in order):
  1. FoldPassthroughWhere
  2. eliminate_identity (built-in)
  3. eliminate_nop_dropout (built-in)
  4. eliminate_nop_monotone_argmax (built-in)
  5. eliminate_nop_pad (built-in)
  6. eliminate_nop_transpose (built-in)
  7. extract_constant_to_initializer (built-in)
  8. eliminate_unused_initializer (built-in)
  9. eliminate_deadend (built-in)
Usage: $ graph_surgery.py onnx -t FoldNoOps
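
The built-in passes listed above are the onnx optimizer's passes (see the link under -t). As a rough standalone equivalent, they can also be run directly; this sketch assumes an onnx release that still bundles the optimizer (newer releases moved it to the separate onnxoptimizer package), and the model path is a placeholder:

import onnx
from onnx import optimizer  # moved to the standalone 'onnxoptimizer' package in newer releases

model = onnx.load("model.onnx")
passes = [
    "eliminate_identity",
    "eliminate_nop_dropout",
    "eliminate_nop_monotone_argmax",
    "eliminate_nop_pad",
    "eliminate_nop_transpose",
    "extract_constant_to_initializer",
    "eliminate_unused_initializer",
    "eliminate_deadend",
]
# Runs only the built-in passes; FoldPassthroughWhere is custom to graph_surgery.py.
optimized = optimizer.optimize(model, passes)
onnx.save(optimized, "model_folded.onnx")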

FoldPassthroughWhere (Lossless: Yes)
Folds 'Where' nodes whose condition input tensor is constant and homogeneous, since such nodes are effectively passthrough operations.
Usage: $ graph_surgery.py onnx -t FoldPassthroughWhere
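
Why such a Where node is a passthrough can be seen with a small NumPy check (a conceptual sketch only):

import numpy as np

cond = np.ones((2, 3), dtype=bool)   # constant, homogeneous (all-True) condition
x = np.random.rand(2, 3)
y = np.random.rand(2, 3)
# With an all-True condition, Where always selects x, so the node can be
# folded away and replaced by a direct connection to x (all-False selects y).
assert np.array_equal(np.where(cond, x, y), x)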

FuseConsecutiveReshapes (Lossless: Yes)
Fuses consecutive Reshape nodes into the last Reshape node of the sequence.
Usage: $ graph_surgery.py onnx -t FuseConsecutiveReshapes
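
The fusion is safe because only the final target shape matters, as this NumPy check illustrates (a conceptual sketch only; shapes are examples):

import numpy as np

x = np.arange(24).reshape(2, 3, 4)
chained = x.reshape(6, 4).reshape(2, 12)   # two consecutive reshapes
fused = x.reshape(2, 12)                   # only the last reshape of the sequence
assert np.array_equal(chained, fused)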

FuseMaxPoolMaxUnpool (Lossless: Yes)
Fuses MaxPool-MaxUnpool node pairs to ensure a direct connection of the indices tensor.
Usage: $ graph_surgery.py onnx -t FuseMaxPoolMaxUnpool

InferShapes (Lossless: Yes)
Infers (all) tensor shapes using built-in shape inference, supplemented by onnxruntime. Uses sampling to evaluate dynamic shapes.
Usage: $ graph_surgery.py onnx -t InferShapes
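
The "built-in shape inference" referred to here is the onnx package's shape_inference module. A minimal standalone sketch, without the onnxruntime supplement or sampling and with a placeholder model path, looks like this:

import onnx
from onnx import shape_inference

model = onnx.load("model.onnx")
inferred = shape_inference.infer_shapes(model)
# Print the inferred shape of each intermediate tensor; dynamic dimensions
# show up as symbolic names rather than integers.
for vi in inferred.graph.value_info:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)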

ModNodeNames (Lossless: Yes)
Assigns names to all unnamed graph nodes in the form '<op_type>_<id>'.
Usage: $ graph_surgery.py onnx -t ModNodeNames
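
At the ONNX level this amounts to filling in the optional name field of each node. A minimal sketch of the same naming scheme with the onnx Python API (placeholder paths; not the tool's implementation):

import onnx

model = onnx.load("model.onnx")
for idx, node in enumerate(model.graph.node):
    if not node.name:                        # only unnamed nodes are touched
        node.name = f"{node.op_type}_{idx}"  # e.g. 'Conv_3', 'Relu_4'
onnx.save(model, "model_named.onnx")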

NamespaceTensors (Lossless: Yes)
Prepends a dot-separated namespace to the names of all tensors except for primary inputs/outputs. If no namespace argument is given, defaults to the model producer's name.
Usage: $ graph_surgery.py onnx -t NamespaceTensors=<ns>

RemoveZeroChannel (Lossless: Yes)
Removes entire channels that have been zeroed out by structured pruning from eligible Conv/ConvT-to-Conv/ConvT operator sequences.
Usage: $ graph_surgery.py onnx -t RemoveZeroChannel

RenameTensors (Lossless: Yes)
Renames arbitrary tensors in the model.
Usage: $ graph_surgery.py onnx -t "RenameTensors(orig_name=new_name, ...)"
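
Renaming a tensor in an ONNX graph means updating every reference to that name consistently: node inputs/outputs, graph inputs/outputs, initializers, and value_info. The sketch below shows the general idea with the onnx Python API; it is not the transform's implementation, it ignores subgraphs, and the names and paths are placeholders:

import onnx

def rename_tensor(model, orig_name, new_name):
    # Update every place the tensor name can appear so the graph stays consistent.
    graph = model.graph
    for node in graph.node:
        for i, name in enumerate(node.input):
            if name == orig_name:
                node.input[i] = new_name
        for i, name in enumerate(node.output):
            if name == orig_name:
                node.output[i] = new_name
    for entries in (graph.input, graph.output, graph.value_info, graph.initializer):
        for entry in entries:
            if entry.name == orig_name:
                entry.name = new_name
    return model

model = rename_tensor(onnx.load("model.onnx"), "old_name", "new_name")
onnx.save(model, "model_renamed.onnx")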

ReplaceSubgraph (Lossless: Yes)
Replaces subgraph(s) with alternative implementations, e.g. replacing operations with rank greater than 4.
Usage: $ graph_surgery.py onnx -t ReplaceSubgraph=path/to/config.json

SetIOShapes (Lossless: Yes)
Sets the shapes of the primary input and output tensors to user-specified values; shapes can be static or dynamic.
Usage: $ graph_surgery.py onnx -isrc "i:input0|is:1,3,224,224" -odst "o:output0|os:1,1000" -t SetIOShapes
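
In the model itself, these shapes live in the type information of the graph's input and output entries. The following sketch shows what setting a static input shape looks like with the onnx Python API; it is illustrative only and assumes the first graph input is the tensor being set and that its rank matches the new shape:

import onnx

model = onnx.load("model.onnx")
inp = model.graph.input[0]                   # assumed to be 'input0'
dims = inp.type.tensor_type.shape.dim
for dim, value in zip(dims, (1, 3, 224, 224)):
    dim.dim_value = value                    # overwrites a symbolic dim_param if present
onnx.save(model, "model_shaped.onnx")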

AsymmetricAdjust (Lossless: Yes)
Adjusts the zero-points of quantized operators to either 128 if the data type is uint8 or 0 if it is int8, and annotates the quantized operators with asymmetric zero-points to be parsed with 16-bit precision in onnxparser. An input '.ini' file configures the annotations and the types of adjustment.
Usage: $ graph_surgery.py onnx -t AsymmetricAdjust=config.ini
