Add per_tensor_quantize to int8 quantize

* Add per_tensor_quantize to the dnn int8 module.
* Change the API flag from perTensor to perChannel, and recognize the quantization type in the ONNX importer.
* Change the default in dnn.hpp.
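As a rough illustration, a minimal sketch of how the new flag might be used through the dnn API, assuming a build where `Net::quantize` exposes the `perChannel` parameter described above; the model file name, input size, and calibration data below are placeholders, not part of this change:

```cpp
// Sketch only: assumes Net::quantize(..., bool perChannel) is available.
#include <opencv2/dnn.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    using namespace cv;
    using namespace cv::dnn;

    // Load a floating-point model (placeholder file name).
    Net net = readNetFromONNX("model.onnx");

    // Calibration data: one or more representative input blobs
    // (a constant dummy image here, purely for illustration).
    Mat img(224, 224, CV_32FC3, Scalar::all(0.5));
    std::vector<Mat> calibData = { blobFromImage(img) };

    // perChannel = false requests per-tensor quantization (a single
    // scale/zero-point per tensor); the default, true, keeps the
    // previous per-channel behaviour.
    Net int8Net = net.quantize(calibData, CV_8S, CV_8S, /*perChannel=*/false);

    return 0;
}
```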