Commit 09b73b2

arrybn authored and vpisarev committed

Blobs reuse improvement (#1205)

* Reuse deep learning output blobs
* Changed order for iterating through blobs while seeking memory. Refactored a little.

1 parent 1c8809f · commit 09b73b2

File tree

10 files changed: +374 −84 lines changed

modules/dnn/include/opencv2/dnn/dnn.hpp

Lines changed: 15 additions & 0 deletions
@@ -369,6 +369,21 @@ namespace dnn //! This namespace is used for dnn module functionality.
     CV_WRAP void getMemoryConsumption(const int layerId,
                                       const MatShape& netInputShape,
                                       size_t& weights, size_t& blobs) const;
+
+    /** @brief Computes the number of bytes required to store
+      * all weights and intermediate blobs for each layer.
+      * @param netInputShapes vector of shapes for all net inputs.
+      * @param layerIds output vector to save layer IDs.
+      * @param weights output parameter to store resulting bytes for weights.
+      * @param blobs output parameter to store resulting bytes for intermediate blobs.
+      */
+    CV_WRAP void getMemoryConsumption(const std::vector<MatShape>& netInputShapes,
+                                      std::vector<int>& layerIds, std::vector<size_t>& weights,
+                                      std::vector<size_t>& blobs) const;
+    /** @overload */
+    CV_WRAP void getMemoryConsumption(const MatShape& netInputShape,
+                                      std::vector<int>& layerIds, std::vector<size_t>& weights,
+                                      std::vector<size_t>& blobs) const;
 private:

     struct Impl;

modules/dnn/misc/python/pyopencv_dnn.hpp

Lines changed: 1 addition & 0 deletions
@@ -2,6 +2,7 @@
 typedef dnn::DictValue LayerId;
 typedef std::vector<dnn::MatShape> vector_MatShape;
 typedef std::vector<std::vector<dnn::MatShape> > vector_vector_MatShape;
+typedef std::vector<size_t> vector_size_t;

 template<>
 bool pyopencv_to(PyObject *o, dnn::DictValue &dv, const char *name)

0 commit comments