Add per_tensor_quantize to int8 quantize

* add per_tensor_quantize to the dnn int8 module.
* change the API flag from perTensor to perChannel, and recognize the quantize type in the ONNX importer.
* move the default value to the hpp.
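For context, a minimal sketch of how the new flag would be used from C++, assuming the `perChannel` parameter added to `cv::dnn::Net::quantize` by this change; the model path and calibration data below are placeholders, not part of this PR:

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

int main()
{
    using namespace cv;
    using namespace cv::dnn;

    // Hypothetical FP32 model; any net loadable by readNet would do.
    Net net = readNet("model.onnx");

    // Calibration samples used to estimate activation ranges (placeholder data).
    Mat sample = Mat::zeros(224, 224, CV_8UC3);
    std::vector<Mat> calibData;
    calibData.push_back(blobFromImage(sample, 1.0 / 255.0, Size(224, 224)));

    // perChannel = false requests per-tensor quantization scales;
    // the default (true) keeps per-channel scales for the weights.
    Net int8Net = net.quantize(calibData, CV_32F, CV_32F, /*perChannel=*/false);

    return 0;
}
```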