Add per_tensor_quantize to int8 quantize

* add per_tensor_quantize to the dnn int8 module.
* change the API flag from perTensor to perChannel, and recognize the quantize type in the ONNX importer.
* move the default value to the hpp.

A usage sketch follows the file list below.
Files changed:

* batch_norm_layer.cpp
* convolution_layer.cpp
* elementwise_layers.cpp
* eltwise_layer.cpp
* fully_connected_layer.cpp
* layers_common.hpp
* layers_common.simd.hpp
* pooling_layer.cpp
* quantization_utils.cpp
* reduce_layer.cpp
* scale_layer.cpp
* softmax_layer.cpp
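To illustrate the change, here is a minimal sketch of how per-tensor quantization could be requested through `cv::dnn::Net::quantize`, assuming the `perChannel` flag described above is exposed on that method with a default of `true`; the model path and input shape are placeholders, not part of this change.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <vector>

int main()
{
    using namespace cv;
    using namespace cv::dnn;

    // Load a float32 ONNX model; "model.onnx" is a placeholder path.
    Net net = readNetFromONNX("model.onnx");

    // A small calibration set: one representative NCHW input blob.
    std::vector<Mat> calibData;
    calibData.push_back(Mat(std::vector<int>{1, 3, 224, 224}, CV_32F, Scalar(0.5)));

    // perChannel = true  -> per-channel scales/zero-points (the default).
    // perChannel = false -> one scale/zero-point per tensor, which is the
    //                       per_tensor_quantize path this change adds.
    bool perChannel = false;
    Net qnet = net.quantize(calibData, CV_8S, CV_8S, perChannel);

    return 0;
}
```

Per-tensor quantization trades some accuracy for simpler kernels and broader backend compatibility, since every channel of a blob shares a single scale and zero point.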