Commit Graph

702 Commits

Author SHA1 Message Date
HAN Liutong
e2bfe0ce76 Use "#if" instead of "#ifdef" for CV_SIMD128. 2022-07-21 03:23:57 +00:00
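
For reference, CV_SIMD128 is always defined by the universal-intrinsics header, as either 1 or 0, so the value has to be tested rather than the mere presence of the macro; a minimal C++ sketch of the pattern this commit switches to:

    #include <opencv2/core/hal/intrin.hpp>

    float sum4(const float* p)
    {
    #if CV_SIMD128                          // correct: tests the value ("#ifdef" would
        cv::v_float32x4 v = cv::v_load(p);  // also match the disabled, 0, case)
        return cv::v_reduce_sum(v);
    #else
        return p[0] + p[1] + p[2] + p[3];   // scalar fallback
    #endif
    }
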
rogday
ed69bcae2d
Merge pull request #21865 from rogday:nary_eltwise_layers
Reimplementation of Element-wise layers with broadcasting support

* init

* semi-working initial version

* add small_vector

* wip

* remove smallvec

* add nary function

* replace auto with Mat in lambda expr used in transform

* uncomment asserts

* autobuffer shape_buf & step_buf

* fix a missing bracket

* fixed a missing addLayer in parseElementWise

* solve one-dimensional broadcast

* remove pre_broadcast_transform for the case of two constants; fix missing constBlobsExtraInfo when addConstant is called

* one autobuffer for step & shape

* temporary fix for the missing original dimension information

* fix parseUnsqueeze when it gets a 1d tensor constant

* support sum/mean/min/max with only one input

* reuse old code to handle cases of two non-constant inputs

* add condition to handle div & mul of two non-constant inputs

* use || instead of or

* remove trailing spaces

* enlarge buf in binary_forward to contain other buffer

* use autobuffer in nary_forward

* generate data randomly and add more cases for perf

* add op and, or & xor

* update perf_dnn

* remove some comments

* remove legacy; add two ONNX conformance tests in filter

* move from cpu_denylist to all_denylist

* adjust parsing for inputs>=2

Co-authored-by: fengyuentau <yuantao.feng@opencv.org.cn>
2022-07-19 06:14:05 +03:00
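
The broadcasting rules the reimplemented element-wise layers follow are the usual ONNX/numpy ones: shapes are right-aligned and a dimension of 1 is stretched to match the other input. A hypothetical illustration of the shape computation (not the actual OpenCV code):

    #include <algorithm>
    #include <stdexcept>
    #include <vector>

    // Compute the broadcast output shape of two input shapes.
    std::vector<int> broadcastShape(std::vector<int> a, std::vector<int> b)
    {
        // Right-align the shorter shape by padding it with leading 1s.
        while (a.size() < b.size()) a.insert(a.begin(), 1);
        while (b.size() < a.size()) b.insert(b.begin(), 1);

        std::vector<int> out(a.size());
        for (size_t i = 0; i < a.size(); ++i)
        {
            if (a[i] == b[i] || a[i] == 1 || b[i] == 1)
                out[i] = std::max(a[i], b[i]);  // a dimension of 1 is broadcast
            else
                throw std::runtime_error("shapes are not broadcastable");
        }
        return out;
    }

For example, shapes {4, 3, 1, 5} and {3, 7, 5} broadcast to {4, 3, 7, 5}.
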
Zihao Mu
139c443770
Merge pull request #22183 from zihaomu:fastConv_ARMv7_compatible
DNN: ARMv7 compatible fastConv

* support armv7 on fastConv

* remove whitespace.
2022-07-07 13:23:08 +03:00
Zihao Mu
a80fcacd90
Merge pull request #21372 from zihaomu:dnn_quantize_per_tensor
Add per_tensor_quantize to int8 quantize

* add per_tensor_quantize to dnn int8 module.

* change the API flag from perTensor to perChannel, and recognize the quantize type in the ONNX importer.

* change the default to hpp
2022-07-05 19:14:42 +03:00
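
Per-tensor quantization, in contrast to per-channel, derives a single scale and zero point for the whole weight blob; a minimal sketch of the idea (quantizePerTensor is a hypothetical helper, not part of the dnn int8 API):

    #include <opencv2/core.hpp>

    // One scale/zero-point for the entire tensor; per-channel quantization
    // would repeat this computation for every output channel instead.
    static void quantizePerTensor(const cv::Mat& w, cv::Mat& q, float& scale, int& zeroPoint)
    {
        double minV = 0.0, maxV = 0.0;
        cv::minMaxLoc(w.reshape(1, 1), &minV, &maxV);
        scale = (float)((maxV - minV) / 255.0);
        if (scale <= 0.f) scale = 1.f;                  // guard against a constant tensor
        zeroPoint = cvRound(-128.0 - minV / scale);
        w.convertTo(q, CV_8S, 1.0 / scale, zeroPoint);  // q = w / scale + zeroPoint
    }
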
Zihao Mu
59b870a87a
Merge pull request #21910 from zihaomu:fast_conv_ARM
DNN: Accelerating convolution

* Fast Conv of ARM, X86 and universal intrinsics.

* improve code style.

* error fixed.

* improve the License

* optimize memory allocation and adjust the threshold.

* change FasterRCNN_vgg16 to 2GB memory.
2022-07-01 13:03:15 +03:00
rogday
9cd5a0a1e6
Merge pull request #21884 from rogday:cuda_cleanup
Fix CUDA compilation issues and adjust thresholds.

* Fix CUDA compilation issues and adjust thresholds.

* add conformance tests to denylist
2022-04-19 16:40:25 +00:00
zihaomu
e36948cfbc add ONNX OP sign, shrink and reciprocal 2022-04-07 15:32:12 +08:00
Alexander Alekhin
a233982931 Merge pull request #20938 from JulieBar:lstm_cuda2 2022-04-01 22:10:08 +00:00
Zihao Mu
7b582b71ba
Merge pull request #21036 from fengyuentau:timvx_backend_support
dnn: TIM-VX NPU backend support

* Add TimVX NPU backend for DNN module.

* use official branch from tim-vx repo; fix detecting viv sdk

Co-authored-by: fytao <yuantao.feng@outlook.com>
2022-03-31 21:42:11 +00:00
Smirnov Egor
abebbf04b1 Add CUDA support for LSTM.
Co-authored-by: Julia Bareeva <jbareeva@gmail.com>
2022-03-31 16:38:22 +03:00
Alexander Alekhin
5e434073d4 Merge pull request #21796 from alalek:dnn_reduce_fixup_21601 2022-03-30 22:26:28 +00:00
Alexander Alekhin
6f5cf8c15f dnn: fix ReduceLayer implementation, update OpenVINO tests 2022-03-30 20:03:41 +00:00
Alexander Alekhin
1339ebaa84 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-03-26 16:00:28 +00:00
Zihao Mu
b6b5c27cec Support for some reduce layers for onnx 2022-03-18 10:19:13 +08:00
Alexander Alekhin
685797f403 Merge pull request #21662 from alalek:dnn_split 2022-03-17 16:09:17 +00:00
rogday
93353aea70
Merge pull request #21522 from rogday:lstm
Fix LSTM support in ONNX

* fix LSTM and add peephole support

* disable old tests

* turn lambdas into functions

* more hacks for C++98

* add assertions

* slice fixes

* backport of cuda-related fixes

* address review comments
2022-03-15 09:14:05 +03:00
Alexander Alekhin
a80af177b6 dnn: split dnn.cpp code
base commit: 19926e2979
original dnn.cpp content: 19926e2979/modules/dnn/src/dnn.cpp
2022-03-08 19:22:46 +00:00
Alexander Alekhin
901e0ddfe4 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-03-05 19:46:28 +00:00
Alexander Alekhin
5cc27fd3b5 Merge pull request #21542 from rogday:split_expand 2022-02-28 22:38:24 +00:00
Egor Smirnov
375fe81311 fix slice and expand 2022-02-28 17:18:07 +03:00
Alexander Alekhin
19926e2979 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-02-11 17:32:37 +00:00
Alexander Alekhin
effce0573b dnn: drop legacy Inference Engine NN builder API 2022-02-10 11:55:24 +00:00
Alexander Alekhin
d573472a86 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-01-31 12:53:45 +00:00
Alexander Alekhin
dc35633aa4 Merge pull request #21521 from alalek:dnn_ignore_denormals 2022-01-28 15:31:44 +00:00
Alexander Alekhin
70b0274c8e dnn: apply hint to ignore denormals processing 2022-01-26 11:28:35 +00:00
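
On x86, "ignore denormals" usually means enabling the flush-to-zero and denormals-are-zero bits of MXCSR, which keeps convolution-heavy code off the very slow microcoded denormal paths; a minimal sketch of that idea (not necessarily how the hint is wired up inside the library):

    #include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE
    #include <pmmintrin.h>   // _MM_SET_DENORMALS_ZERO_MODE

    static void ignoreDenormals()
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          // results: flush denormals to zero
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  // inputs: treat denormals as zero
    }
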
Alexander Alekhin
eb7b45d26b dnn: fix API - explicit ctors, const methods 2022-01-21 12:38:51 +00:00
Alexander Alekhin
b304730225 dnn: fix API - explicit ctors, const methods 2022-01-17 21:45:29 +00:00
Alexander Alekhin
aebb65e983 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2022-01-12 13:26:10 +00:00
Alexander Alekhin
80d9f624d0 dnn: don't use aligned load without alignment checks
- weights are unaligned in the dasiamrpn sample (they come from numpy)
2022-01-12 05:11:18 +00:00
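
With the universal intrinsics, v_load() accepts any pointer while v_load_aligned() assumes the documented alignment, so weights handed over from numpy (as in the DaSiamRPN sample) have to go through the unaligned form; a small sketch:

    #include <opencv2/core/hal/intrin.hpp>

    #if CV_SIMD128
    // Read four consecutive weights that may live at an arbitrary address.
    cv::v_float32x4 readWeights(const float* wptr)
    {
        return cv::v_load(wptr);  // safe for unaligned pointers;
        // cv::v_load_aligned(wptr) would require 16-byte alignment here.
    }
    #endif
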
Alexander Alekhin
217fea9667 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-12-24 16:48:07 +00:00
Alexander Alekhin
6385511e88 dnn: add checks in pooling layer implementation
- to avoid out of buffer access
2021-12-24 00:15:30 +00:00
Alexander Alekhin
80492d663e Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-12-18 16:19:06 +00:00
Smirnov Egor
71a22e45b0 add celu, hardsigmoid, selu, thresholdedrelu layers 2021-12-18 03:19:54 +03:00
Smirnov Egor
1bd382c1d0 Add acos, acosh, asin, asinh, atan, atanh, cos, cosh, erf, hardswish, sin, sinh, softplus, softsign, tan layers 2021-12-17 18:19:40 +03:00
Smirnov Egor
fec2c7e715 fix Flatten layer 2021-12-17 16:29:56 +03:00
Alexander Alekhin
6d677bbd63 dnn(test): update ONNX conformance filters (4.x) 2021-12-16 12:09:31 +00:00
Alexander Alekhin
299f9837b7 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-12-15 16:38:56 +00:00
Smirnov Egor
e97c7e042b fix max_unpool missing attributes, add default value of keepdims in reducemean/max/sum, add support for keepdims=true in full reduction branch, add new padding type to Pad 2021-12-14 22:09:27 +03:00
Alexander Alekhin
6e50e4b9ee Merge pull request #21161 from rogday:elu_alpha_4x 2021-12-10 16:04:01 +00:00
HAN Liutong
1599f9f0c0
Merge pull request #21086 from hanliutong:rvv-dnn
Further optimize DNN for RISC-V Vector.

* Optimize DNN on RVV by using vsetvl.

* Rename vl.

* Update fastConv by using vsetvl instead of mask.

* Fix fastDepthwiseConv
2021-12-10 16:03:22 +00:00
Smirnov Egor
e608adea60 add ArgMax and ArgMin layers 2021-12-06 20:49:54 +03:00
HAN Liutong
4935b14539
Merge pull request #21012 from hanliutong:rvv_clang
Update RVV backend for using Clang.

* Update cmake file of clang.

* Modify the RVV optimization on DNN to adapt to clang.

* Modify intrin_rvv: Disable some existing types.

* Modify intrin_rvv: Reinterpret instead of load&cast.

* Modify intrin_rvv: Update load&store without cast.

* Modify intrin_rvv: Rename vfredsum to vfredosum.

* Modify intrin_rvv: Rewrite Check all/any by using vpopc.

* Modify intrin_rvv: Use reinterpret instead of c-style casting.

* Remove all macros which are not used in v_reinterpret

* Rename vpopc to vcpop according to spec.
2021-12-03 15:13:24 +00:00
Alexander Alekhin
8b4fa2605e Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2021-12-03 12:32:49 +00:00
Alexander Alekhin
dad2b9aac8 Merge pull request #21160 from rogday:elu_alpha 2021-12-02 17:13:57 +00:00
Smirnov Egor
4995aecd62 add alpha parameter to ELU 2021-11-30 14:43:18 +03:00
Smirnov Egor
0e2a3686c0 add alpha parameter to ELU layer 2021-11-30 12:20:35 +03:00
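
The added parameter generalizes ELU beyond the fixed alpha = 1 case: f(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise. A one-line sketch (not the actual layer code):

    #include <cmath>

    static inline float elu(float x, float alpha)
    {
        return x >= 0.f ? x : alpha * (std::exp(x) - 1.f);
    }
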
Alexander Alekhin
0d2857a242 Merge pull request #21152 from rogday:fix_defaults 2021-11-29 22:39:27 +00:00
Andrew Ryrie
ea7d4be3f8
Merge pull request #20658 from smbz:lstm_optimisation
* dnn: LSTM optimisation

This uses the AVX-optimised fastGEMM1T for matrix multiplications where available, instead of the standard cv::gemm.

fastGEMM1T is already used by the fully-connected layer.  This commit involves two minor modifications:
 - Use unaligned access.  I don't believe this involves any performance hit on modern CPUs (Nehalem and Bulldozer onwards) in the case where the address is actually aligned.
 - Allow for weight matrices where the number of columns is not a multiple of 8.

I have not enabled AVX-512 as I don't have an AVX-512 CPU to test on.

* Fix warning about initialisation order

* Remove C++11 syntax

* Fix build when AVX(2) is not available

In this case the CV_TRY_X macros are defined to 0, rather than being undefined.

* Minor changes as requested:

 - Don't check hardware support for AVX(2) when dispatch is disabled for these
 - Add braces

* Fix out-of-bounds access in fully connected layer

The old tail handling in fastGEMM1T implicitly rounded vecsize up to the next multiple of 8, and the fully connected layer implements padding up to the next multiple of 8 to cope with this.  The new tail handling does not round the vecsize upwards like this but it does require that the vecsize is at least 8.  To adapt to the new tail handling, the fully connected layer now rounds vecsize itself at the same time as adding the padding (which makes more sense anyway).

This also means that the fully connected layer always passes a vecsize of at least 8 to fastGEMM1T, which fixes the out-of-bounds access problems.

* Improve tail mask handling

 - Use static array for generating tail masks (as requested)
 - Apply tail mask to the weights as well as the input vectors to prevent spurious propagation of NaNs/Infs

* Revert whitespace change

* Improve readability of conditions for using AVX

* dnn(lstm): minor coding style changes, replaced left aligned load
2021-11-29 21:43:00 +00:00
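
The tail-mask handling described above can be sketched for AVX as a static table of lane masks: the last n < 8 floats of a row are loaded through a mask so nothing is read past the end, and the same mask is applied to the weights so NaNs/Infs in the padded lanes cannot propagate (hypothetical illustration, not the actual fastGEMM1T code):

    #include <immintrin.h>
    #include <stdint.h>

    // Eight all-ones words followed by eight zeros; starting the load at
    // offset 8 - n yields a mask whose first n lanes are enabled.
    static const int32_t tailMaskTable[16] = { -1, -1, -1, -1, -1, -1, -1, -1,
                                                0,  0,  0,  0,  0,  0,  0,  0 };

    // Load the last n (1..8) floats of a row, leaving the remaining lanes zero.
    static inline __m256 loadTail(const float* p, int n)
    {
        __m256i mask = _mm256_loadu_si256((const __m256i*)(tailMaskTable + 8 - n));
        return _mm256_maskload_ps(p, mask);
    }
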
Smirnov Egor
05db8784ae fix Clip, LeakyReLU, LRN, Split defaults 2021-11-29 20:20:34 +03:00
Hanxi Guo
1fcf7ba5bc
Merge pull request #20406 from MarkGHX:gsoc_2021_webnn
[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN

* Add WebNN backend for OpenCV DNN Module

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Add WebNN header files into OpenCV 3rd party files

Create webnn.hpp

update cmake

Complete README and add OpenCVDetectWebNN.cmake file

add webnn.cpp

Modify webnn.cpp

Can successfully compile the code for creating an MLContext

Update webnn.cpp

Update README.md

Update README.md

Update README.md

Update README.md

Update cmake files and

update README.md

Update OpenCVDetectWebNN.cmake and README.md

Update OpenCVDetectWebNN.cmake

Fix OpenCVDetectWebNN.cmake and update README.md

Add source webnn_cpp.cpp and library libwebnn_proc.so

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

update dnn.cpp

update op_webnn

update op_webnn

Update op_webnn.hpp

update op_webnn.cpp & hpp

Update op_webnn.hpp

Update op_webnn

update the skeleton

Update op_webnn.cpp

Update op_webnn

Update op_webnn.cpp

Update op_webnn.cpp

Update op_webnn.hpp

update op_webnn

update op_webnn

Solved the problems of released variables.

Fixed the bugs in op_webnn.cpp

Implement op_webnn

Implement Relu by WebNN API

Update dnn.cpp for better test

Update elementwise_layers.cpp

Implement ReLU6

Update elementwise_layers.cpp

Implement SoftMax using WebNN API

Implement Reshape by WebNN API

Implement PermuteLayer by WebNN API

Implement PoolingLayer using WebNN API

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Implement poolingLayer by WebNN API and add more detailed logs

Update dnn.cpp

Update dnn.cpp

Remove redundant code and add more logs for poolingLayer

Add more logs in the pooling layer implementation

Fix the indent issue and resolve the compiling issue

Fix the build problems

Fix the build issue

Fix the build issue

Update dnn.cpp

Update dnn.cpp

* Fix the build issue

* Implement BatchNorm Layer by WebNN API

* Update convolution_layer.cpp

This is a temporary file for Conv2d layer implementation

* Integrate some general functions into op_webnn.cpp&hpp

* Update const_layer.cpp

* Update convolution_layer.cpp

Still have some bugs that should be fixed.

* Update conv2d layer and fc layer

still have some problems to be fixed.

* update constLayer, conv layer, fc layer

There are still some bugs to be fixed.

* Fix the build issue

* Update concat_layer.cpp

Still have some bugs to be fixed.

* Update conv2d layer, fully connected layer and const layer

* Update convolution_layer.cpp

* Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron)

* Delete bib19450.aux

* Update dnn.cpp

* Fix Error in dnn.cpp

* Resolve duplication in conditions in convolution_layer.cpp

* Fixed the issues in the comments

* Fix building issue

* Update tutorial

* Fixed comments

* Address the comments

* Update CMakeLists.txt

* Offer more accurate perf test on native

* Add better perf tests for both native and web

* Modify perf tests for better results

* Use a more recent version of Electron

* Support latest WebNN Clamp op

* Add definition of HAVE_WEBNN macro

* Support group convolution

* Implement Scale_layer using WebNN

* Add Softmax option for native classification example

* Fix comments

* Fix comments
2021-11-23 21:15:31 +00:00