Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/03/22 18:45:54 UTC

[GitHub] [incubator-mxnet] samskalicky opened a new pull request #17885: [WIP] MXNet Extensions enhancements

samskalicky opened a new pull request #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885
 
 
   ## Description ##
   Enhancements to MXNet Extensions (subgraph property, custom ops, custom graph passes). More description on the way!
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] marcoabreu commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
marcoabreu commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r408513273
 
 

 ##########
 File path: CMakeLists.txt
 ##########
 @@ -750,17 +750,33 @@ endif()
 
 # extension libraries (custom operators, custom subgraphs) are built by default
 add_library(customop_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/gemm_lib.cc)
+add_library(transposecsr_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposecsr_lib.cc)
+add_library(transposerowsp_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposerowsp_lib.cc)
 
 Review comment:
   Is there some way to automatically discover all the .cc files in the lib_custom_op folder instead of hardcoding all the paths?
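
   One possible approach, sketched against the paths quoted above (untested against this PR's actual CMakeLists.txt), is to glob the sources and generate one target per file:

   ```cmake
   # Hypothetical sketch: discover every .cc file in lib_custom_op and build one
   # SHARED library per source instead of hardcoding each add_library() call.
   file(GLOB CUSTOM_OP_SRCS "${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/*.cc")
   foreach(src IN LISTS CUSTOM_OP_SRCS)
     get_filename_component(target_name ${src} NAME_WE)  # e.g. gemm_lib
     add_library(${target_name} SHARED ${src})
     target_include_directories(${target_name} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
   endforeach()
   ```

   One trade-off to note: plain `file(GLOB ...)` is evaluated at configure time, so newly added source files are only picked up after re-running CMake, which is why CMake's own documentation discourages globbing for source lists.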


[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610773277
 
 
   Jenkins CI successfully triggered : [sanity]


[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610773250
 
 
   @mxnet-bot run ci [sanity]
   
   
   


[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610669060
 
 
   Jenkins CI successfully triggered : [sanity]
   
   


[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610669252
 
 
   Jenkins CI successfully triggered : [sanity]


[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610787817
 
 
   Jenkins CI successfully triggered : [sanity]


[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610693459
 
 
   @mxnet-bot run ci [sanity]
   
   


[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610630046
 
 
   Jenkins CI successfully triggered : [sanity]


[GitHub] [incubator-mxnet] samskalicky edited a comment on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky edited a comment on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610669060
 
 
   @mxnet-bot run ci [sanity]
   
   


[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610787780
 
 
   @mxnet-bot run ci [sanity]
   
   


[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610693482
 
 
   Jenkins CI successfully triggered : [sanity]


[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-610629994
 
 
   @mxnet-bot run ci [sanity]
   
   


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411800538



##########
File path: include/mxnet/lib_api.h
##########
@@ -507,16 +553,13 @@ class OpResource {
   void *rand_cpu_states, *rand_gpu_states;
 };
 
-/*!
- * \brief Json utility to parse serialized subgraph symbol
- */
 /*! \brief Macro to help passing serialized subgraph through attribute dict */
 #define MX_STR_SUBGRAPH_SYM_JSON "subgraph_sym_json"
-#define MX_STR_DTYPE "__dtype__"
-#define MX_STR_SHAPE "__shape__"
+#define MX_STR_DTYPE "__ext_dtype__"
+#define MX_STR_SHAPE "__ext_shape__"
 
 /* \brief get shape value from list of shapes string
- * format: [[1]] or [[1],[2]]
+ * format: [[1]] or [[1],[2,3]], returns "[1]" or "[2,3]"

Review comment:
       done

##########
File path: example/extensions/lib_custom_op/relu_lib.cu
##########
@@ -29,93 +29,93 @@
 #define NumThreadPerBlock 256 // mxnet recommended cuda thread number per block
 
 __global__ void relu_gpu_forward(float *out, float *in, int64_t N) {
-    int tid = blockIdx.x * blockDim.x + threadIdx.x;
-    if (tid < N)
-        out[tid] = in[tid] > 0 ? in[tid] : 0;
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid < N)
+    out[tid] = in[tid] > 0 ? in[tid] : 0;
 }
 
 __global__ void relu_gpu_backward(float *ingrad, float *outgrad, float *indata, int64_t N) {
-    int tid = blockIdx.x * blockDim.x + threadIdx.x;
-    if (tid < N)
-        ingrad[tid] = indata[tid] > 0 ? 1 * outgrad[tid] : 0;
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid < N)
+    ingrad[tid] = indata[tid] > 0 ? 1 * outgrad[tid] : 0;
 }
 
-MXReturnValue forwardCPU(std::map<std::string, std::string> attrs,
-                         std::vector<MXTensor> inputs,
-                         std::vector<MXTensor> outputs,
-                         OpResource res) {
-    float* in_data = inputs[0].data<float>();
-    float* out_data = outputs[0].data<float>();
-    for (int i=0; i<inputs[0].size(); i++) {
-        out_data[i] = in_data[i] > 0 ? in_data[i] : 0;
-    }
-    return MX_SUCCESS;
+MXReturnValue forwardCPU(const std::unordered_map<std::string, std::string>& attrs,
+                         std::vector<MXTensor>* inputs,
+                         std::vector<MXTensor>* outputs,
+                         const OpResource& res) {
+  float* in_data = inputs->at(0).data<float>();
+  float* out_data = outputs->at(0).data<float>();
+  for (int i=0; i<inputs->at(0).size(); i++) {
+    out_data[i] = in_data[i] > 0 ? in_data[i] : 0;
+  }
+  return MX_SUCCESS;
 }
 
-MXReturnValue backwardCPU(std::map<std::string, std::string> attrs,
-                          std::vector<MXTensor> inputs,
-                          std::vector<MXTensor> outputs,
-                          OpResource res) {
-    float* out_grad = inputs[0].data<float>();
-    float* in_data = inputs[1].data<float>();
-    float* in_grad = outputs[0].data<float>();
-    for (int i=0; i<inputs[1].size(); i++) {
-        in_grad[i] = in_data[i] > 0 ? 1 * out_grad[i] : 0;
-    }
-    return MX_SUCCESS;
+MXReturnValue backwardCPU(const std::unordered_map<std::string, std::string>& attrs,
+                          std::vector<MXTensor>* inputs,
+                          std::vector<MXTensor>* outputs,
+                          const OpResource& res) {
+  float* out_grad = inputs->at(0).data<float>();
+  float* in_data = inputs->at(1).data<float>();
+  float* in_grad = outputs->at(0).data<float>();
+  for (int i=0; i<inputs->at(1).size(); i++) {
+    in_grad[i] = in_data[i] > 0 ? 1 * out_grad[i] : 0;
+  }
+  return MX_SUCCESS;
 }
 
-MXReturnValue forwardGPU(std::map<std::string, std::string> attrs,
-                         std::vector<MXTensor> inputs,
-                         std::vector<MXTensor> outputs,
-                         OpResource res) {
-    float* in_data = inputs[0].data<float>();
-    float* out_data = outputs[0].data<float>();
+MXReturnValue forwardGPU(const std::unordered_map<std::string, std::string>& attrs,
+                         std::vector<MXTensor>* inputs,
+                         std::vector<MXTensor>* outputs,
+                         const OpResource& res) {
+  float* in_data = inputs->at(0).data<float>();
+  float* out_data = outputs->at(0).data<float>();
 
-    mx_stream_t cuda_stream = res.get_cuda_stream();
-    int64_t N = inputs[0].size();
-    int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
+  mx_stream_t cuda_stream = res.get_cuda_stream();
+  int64_t N = inputs->at(0).size();
+  int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
 
-    relu_gpu_forward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(out_data, in_data, N);
+  relu_gpu_forward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(out_data, in_data, N);
 
-    return MX_SUCCESS;
+  return MX_SUCCESS;
 }
 
-MXReturnValue backwardGPU(std::map<std::string, std::string> attrs,
-                          std::vector<MXTensor> inputs,
-                          std::vector<MXTensor> outputs,
-                          OpResource res) {
-    float* out_grad = inputs[0].data<float>();
-    float* in_data = inputs[1].data<float>();
-    float* in_grad = outputs[0].data<float>();
-
-    mx_stream_t cuda_stream = res.get_cuda_stream();
-    int64_t N = inputs[0].size();
-    int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
+MXReturnValue backwardGPU(const std::unordered_map<std::string, std::string>& attrs,
+                          std::vector<MXTensor>* inputs,
+                          std::vector<MXTensor>* outputs,
+                          const OpResource& res) {
+  float* out_grad = inputs->at(0).data<float>();
+  float* in_data = inputs->at(1).data<float>();
+  float* in_grad = outputs->at(0).data<float>();
 
-    relu_gpu_backward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(in_grad, out_grad, in_data, N);
+  mx_stream_t cuda_stream = res.get_cuda_stream();
+  int64_t N = inputs->at(0).size();
+  int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
+  relu_gpu_backward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(in_grad, out_grad, in_data, N);
 
-    return MX_SUCCESS;
+  return MX_SUCCESS;
 }
 
-MXReturnValue parseAttrs(std::map<std::string, std::string> attrs, int* num_in, int* num_out) {
-    *num_in = 1;
-    *num_out = 1;
-    return MX_SUCCESS;
+MXReturnValue parseAttrs(const std::unordered_map<std::string, std::string>& attrs,
+                         int* num_in, int* num_out) {
+  *num_in = 1;
+  *num_out = 1;
+  return MX_SUCCESS;
 }
 
-MXReturnValue inferType(std::map<std::string, std::string> attrs,
-                        std::vector<int> &intypes,
-                        std::vector<int> &outtypes) {
-    outtypes[0] = intypes[0];
-    return MX_SUCCESS;
+MXReturnValue inferType(const std::unordered_map<std::string, std::string>& attrs,
+                        const std::vector<int>& intypes,

Review comment:
       done

##########
File path: example/extensions/lib_custom_op/gemm_lib.cc
##########
@@ -136,13 +137,13 @@ MXReturnValue inferType(std::map<std::string, std::string> attrs,
     }
   }
 
-  outtypes[0] = intypes[0];
+  outtypes->at(0) = intypes[0];
   return MX_SUCCESS;
 }
 
-MXReturnValue inferShape(std::map<std::string, std::string> attrs,
-                         std::vector<std::vector<unsigned int>> &inshapes,
-                         std::vector<std::vector<unsigned int>> &outshapes) {
+MXReturnValue inferShape(const std::unordered_map<std::string, std::string>& attrs,
+                         const std::vector<std::vector<unsigned int>>& inshapes,

Review comment:
       done







[GitHub] [incubator-mxnet] ptrendx commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
ptrendx commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-617280892


   Synced offline with @samskalicky and @rondogency - they are ok with doing the last changes proposed by @rondogency in another PR targeting master specifically, so this PR is ready to merge.





[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411884892



##########
File path: CMakeLists.txt
##########
@@ -726,18 +726,39 @@ endif()
 
 # extension libraries (custom operators, custom subgraphs) are built by default
 add_library(customop_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/gemm_lib.cc)
+add_library(transposecsr_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposecsr_lib.cc)
+add_library(transposerowsp_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposerowsp_lib.cc)
 add_library(subgraph_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_subgraph/subgraph_lib.cc)
+add_library(pass_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_pass/pass_lib.cc)
 target_include_directories(customop_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposecsr_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposerowsp_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 target_include_directories(subgraph_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(pass_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 if(USE_CUDA)
   add_library(customop_gpu_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/relu_lib.cu)
   target_include_directories(customop_gpu_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 endif()
-if(MSVC)
+if(UNIX)

Review comment:
    I mean you don't need to add `-shared` even for customop_gpu_lib; just change the lines inside the MSVC block

##########
File path: include/mxnet/lib_api.h
##########
@@ -22,7 +22,11 @@
  * \file lib_api.h
  * \brief APIs to interact with libraries
  * This API specifies function prototypes to
- * register custom ops for library authors
+ * register custom ops, partitioner, and passes
+ * for library authors
+ * See example/extension/lib_custom_op/README.md

Review comment:
    maybe we can rephrase it to "APIs to write an extension library; see ... for registering custom operators, ... for custom partitioners, ... for custom graph passes"

##########
File path: include/mxnet/lib_api.h
##########
@@ -45,7 +49,7 @@
 #endif
 
 /* Make sure to update the version number everytime you make changes */
-#define MX_LIBRARY_VERSION 6
+#define MX_LIBRARY_VERSION 7

Review comment:
    I still feel it is better to make it 10
   
    also, it would be better to add to the version-checking message at c_api.cc line 339 something like "please update lib_api.h to match the version supported by the MXNet backend"

##########
File path: example/extensions/lib_custom_op/README.md
##########
@@ -22,15 +22,13 @@ C++ Custom Operator Example and Tutorial
 
 Adding new operators in MXNet requires understanding of MXNet backend operator registration and recompiling of MXNet with all its dependencies. Users can use the old Python custom operator to add new operators, but it is slow, complicated and has poor adoption rate. So our approach for adding custom operators is to enable dynamic loading of C++ custom operators compiled in external libraries at runtime.
 
-Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be re-registered in MXNet so that users can call those operators natively just like other built-in operators.
+Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be registered in MXNet so that users can call those operators natively just like other built-in operators.
 
 ## Getting Started
 
 ### Have MXNet Ready
 
-Custom Operator support was merged (#15921, #17270) and is not available in versions of MXNet prior to v1.7.0.
-To access the feature now, please install MXNet by compiling from source using master or using the previously mentioned commits, downloading one of the nightly builds, or from a release of MXNet 1.7.0+.
-For running the following example, it doesn’t matter if it is a CUDA, MKLDNN or plain MXNet build; the custom operator doesn’t interact with the execution of other native MXNet operators.
+To run the following example, the build type of MXNet doesn’t matter since the custom operator doesn’t interact with the execution of other native MXNet operators.

Review comment:
    it would be better to add a prerequisite here, e.g. "This requires GCC > 5 or CUDA > 9 to run the examples"







[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411649874



##########
File path: example/extensions/lib_custom_op/gemm_lib.cc
##########
@@ -136,13 +137,13 @@ MXReturnValue inferType(std::map<std::string, std::string> attrs,
     }
   }
 
-  outtypes[0] = intypes[0];
+  outtypes->at(0) = intypes[0];
   return MX_SUCCESS;
 }
 
-MXReturnValue inferShape(std::map<std::string, std::string> attrs,
-                         std::vector<std::vector<unsigned int>> &inshapes,
-                         std::vector<std::vector<unsigned int>> &outshapes) {
+MXReturnValue inferShape(const std::unordered_map<std::string, std::string>& attrs,
+                         const std::vector<std::vector<unsigned int>>& inshapes,

Review comment:
       This should not be a const ref - the operator should be able to infer shapes in the other direction (output -> input) as well.







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411800431



##########
File path: CMakeLists.txt
##########
@@ -726,18 +726,39 @@ endif()
 
 # extension libraries (custom operators, custom subgraphs) are built by default
 add_library(customop_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/gemm_lib.cc)
+add_library(transposecsr_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposecsr_lib.cc)
+add_library(transposerowsp_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposerowsp_lib.cc)
 add_library(subgraph_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_subgraph/subgraph_lib.cc)
+add_library(pass_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_pass/pass_lib.cc)
 target_include_directories(customop_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposecsr_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposerowsp_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 target_include_directories(subgraph_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(pass_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 if(USE_CUDA)
   add_library(customop_gpu_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/relu_lib.cu)
   target_include_directories(customop_gpu_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 endif()
-if(MSVC)
+if(UNIX)

Review comment:
       done

##########
File path: include/mxnet/lib_api.h
##########
@@ -529,7 +572,7 @@ std::string getShapeAt(const std::string& shape, unsigned index) {
 }
 
 /* \brief get dtype value from list of dtypes string
- * format: [1] or [1,2]
+ * format: [1] or [1,2], returns "1" or "2"

Review comment:
       done







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411797753



##########
File path: example/extensions/lib_custom_op/gemm_lib.cc
##########
@@ -87,20 +87,20 @@ MXReturnValue forward(std::map<std::string, std::string> attrs,
  ***** gradient outputs
  * outputs[0] = dA; outputs[1] = dB
  */
-MXReturnValue backward(std::map<std::string, std::string> attrs,
-                       std::vector<MXTensor> inputs,
-                       std::vector<MXTensor> outputs,
-                       OpResource res) {
+MXReturnValue backward(const std::unordered_map<std::string, std::string>& attrs,
+                       std::vector<MXTensor>* inputs,

Review comment:
       done







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r412352929



##########
File path: example/extensions/lib_pass/README.md
##########
@@ -0,0 +1,190 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+Custom Graph Pass Example and Tutorial
+=======================================
+
+## Introduction
+
+Adding custom graph passes in MXNet used to require deep understanding of the MXNet backend, including nnvm pass registration and other internal classes, followed by recompiling MXNet from source. This feature allows adding custom graph passes by dynamically loading external libraries at runtime.
+
+This custom graph pass feature enables users to write custom model modification strategies without compiling against all of the MXNet header files and dependencies. When a library containing custom passes is loaded dynamically, the components found in the library will be registered in MXNet so that users can use them natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+To run the following example, the build type of MXNet doesn’t matter since the custom pass doesn’t interact with the execution of other native MXNet features. Note that if you want to use your custom pass with models running on GPU, you still need an MXNet CUDA build. 
+
+### Run An Example
+
+You can start getting familiar with custom passes by running an example provided in the **example/extensions/lib_pass** directory. The `myPass` example just copies the input graph to the output. Go to the **lib_pass** directory and follow these steps:
+
+1. Run `make`. The Makefile will generate the dynamic library **libpass_lib.so** which is compiled from the `pass_lib.cc` file. This is the library you are going to load that contains everything for the custom pass.
+2. Run `python test_pass.py`. It first loads the above library, finds the components, registers them in the MXNet backend, executes the pass on the model, runs the operators like regular MXNet operators, and outputs the result. Below is the output when running the `python test_pass.py` command. Notice that it loads 2 passes: myPass and jsonPass.
+
+```
+[10:38:03] src/c_api/c_api.cc:286: Found 0 operators in library
+[10:38:03] src/c_api/c_api.cc:785: Found 0 partitioners in library
+[07:14:00] src/c_api/c_api.cc:887: Found 2 graph passes in library
+[07:14:00] src/c_api/c_api.cc:902:       Graph Pass [0] myPass
+[07:14:00] src/c_api/c_api.cc:902:       Graph Pass [1] jsonPass
+```
+
+### Basic Files For Custom Pass Library
+* **lib_pass/pass_lib.cc**: This file has a source code implementation of all required components of a custom pass; it also shows how to register them so that they can be loaded by MXNet.
+* **lib_pass/Makefile**: This file compiles the source code to a dynamic shared library, with a header file `include/mxnet/lib_api.h` from MXNet source code. Currently the custom pass is compatible with C++11 onwards.
+* **lib_pass/test_pass.py**: This file calls `mx.library.load(‘libpass_lib.so’)` to load the library containing the custom components, executes the pass on the model using the `optimize_for` API, and prints outputs of the forward passes. The outputs should be the same as the regular MXNet forward pass without running the pass.
+* **include/mxnet/lib_api.h**: This file from MXNet source code is the single header file needed to include all necessary data types and function prototypes for writing a custom library. You can either specify the include path in the `Makefile`, or copy the header file over to `example/extensions/lib_pass` folder. Note that apart from this header, the custom library is independent of MXNet source.
+## Writing Custom Pass Library
+To build your own library containing a custom pass, compose a C++ source file like `mypass_lib.cc`, include `lib_api.h` header file, and write your custom pass with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PASS` - Pass Registration Macro
+- `graphPass` - Pass Implementation
+Then compile it to the `mypass_lib.so` dynamic library using the following command:
+```bash
+g++ -shared -fPIC -std=c++11 mypass_lib.cc -o libmypass_lib.so -I ../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and execute your pass on a model:
+
+```python
+import mxnet as mx
+mx.library.load(‘libmypass_lib.so’)

Review comment:
       what am i missing? looks the same to me...




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] leezu commented on issue #17885: MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
leezu commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-617337522


   > We didn't want to rerun the whole CI. Did we fix the problem where renaming the PR reruns CI @leezu?
   
   @samskalicky when merging the commit, GitHub allows you to edit the commit message, so the commit message can be different from the PR title. Indeed, we can't edit the PR title without retriggering the CI as of now (cc @ChaiBapchya)


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] rondogency commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-616888814


   also please update the lib_api version to 10


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-616894534


   > also please lib_api update version to 10
   
   changed to version 7


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411688053



##########
File path: include/mxnet/lib_api.h
##########
@@ -507,16 +553,13 @@ class OpResource {
   void *rand_cpu_states, *rand_gpu_states;
 };
 
-/*!
- * \brief Json utility to parse serialized subgraph symbol
- */
 /*! \brief Macro to help passing serialized subgraph through attribute dict */
 #define MX_STR_SUBGRAPH_SYM_JSON "subgraph_sym_json"
-#define MX_STR_DTYPE "__dtype__"
-#define MX_STR_SHAPE "__shape__"
+#define MX_STR_DTYPE "__ext_dtype__"
+#define MX_STR_SHAPE "__ext_shape__"
 
 /* \brief get shape value from list of shapes string
- * format: [[1]] or [[1],[2]]
+ * format: [[1]] or [[1],[2,3]], returns "[1]" or "[2,3]"

Review comment:
       This docstring is confusing. It should have the index mentioned there (maybe something like:
   `getShapeAt("[[1]]", 0) returns "[1]"` and `getShapeAt("[[1],[2,3]]", 1) returns "[2,3]"`).




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411718225



##########
File path: Makefile
##########
@@ -686,21 +686,36 @@ pylint:
 	python3 -m pylint --rcfile=$(ROOTDIR)/ci/other/pylintrc --ignore-patterns=".*\.so$$,.*\.dll$$,.*\.dylib$$" python/mxnet
 
 # MXNet extension dynamically loading libraries
-EXT_LIBS = build/libcustomop_lib.so build/libsubgraph_lib.so
+EXT_LIBS = build/libcustomop_lib.so build/libtransposecsr_lib.so build/libtransposerowsp_lib.so build/libsubgraph_lib.so build/libpass_lib.so
 ifeq ($(USE_CUDA), 1)
         EXT_LIBS += build/libcustomop_gpu_lib.so
 endif
 extension_libs: $(EXT_LIBS)
 
 build/libcustomop_lib.so:
 	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/gemm_lib.cc -o /dev/null -I include/mxnet
 	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/gemm_lib.cc -o $@ -I include/mxnet
+build/libtransposecsr_lib.so:
+	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/transposecsr_lib.cc -o /dev/null -I include/mxnet
+	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/transposecsr_lib.cc -o $@ -I include/mxnet

Review comment:
       we should write explicitly that it requires gcc 7, or whatever other prerequisite, to use c++17

##########
File path: Makefile
##########
@@ -686,21 +686,36 @@ pylint:
 	python3 -m pylint --rcfile=$(ROOTDIR)/ci/other/pylintrc --ignore-patterns=".*\.so$$,.*\.dll$$,.*\.dylib$$" python/mxnet
 
 # MXNet extension dynamically loading libraries
-EXT_LIBS = build/libcustomop_lib.so build/libsubgraph_lib.so
+EXT_LIBS = build/libcustomop_lib.so build/libtransposecsr_lib.so build/libtransposerowsp_lib.so build/libsubgraph_lib.so build/libpass_lib.so
 ifeq ($(USE_CUDA), 1)
         EXT_LIBS += build/libcustomop_gpu_lib.so
 endif
 extension_libs: $(EXT_LIBS)
 
 build/libcustomop_lib.so:
 	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/gemm_lib.cc -o /dev/null -I include/mxnet
 	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/gemm_lib.cc -o $@ -I include/mxnet
+build/libtransposecsr_lib.so:
+	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/transposecsr_lib.cc -o /dev/null -I include/mxnet
+	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/transposecsr_lib.cc -o $@ -I include/mxnet

Review comment:
       plus since this will only go into 1.7 where c++17 is not included, we don't need this test at all




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411781628



##########
File path: Makefile
##########
@@ -686,21 +686,36 @@ pylint:
 	python3 -m pylint --rcfile=$(ROOTDIR)/ci/other/pylintrc --ignore-patterns=".*\.so$$,.*\.dll$$,.*\.dylib$$" python/mxnet
 
 # MXNet extension dynamically loading libraries
-EXT_LIBS = build/libcustomop_lib.so build/libsubgraph_lib.so
+EXT_LIBS = build/libcustomop_lib.so build/libtransposecsr_lib.so build/libtransposerowsp_lib.so build/libsubgraph_lib.so build/libpass_lib.so
 ifeq ($(USE_CUDA), 1)
         EXT_LIBS += build/libcustomop_gpu_lib.so
 endif
 extension_libs: $(EXT_LIBS)
 
 build/libcustomop_lib.so:
 	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/gemm_lib.cc -o /dev/null -I include/mxnet
 	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/gemm_lib.cc -o $@ -I include/mxnet
+build/libtransposecsr_lib.so:
+	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/transposecsr_lib.cc -o /dev/null -I include/mxnet
+	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/transposecsr_lib.cc -o $@ -I include/mxnet

Review comment:
       plus since this will only go into 1.7 where c++17 is not included, we don't need this test at all




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411651625



##########
File path: CMakeLists.txt
##########
@@ -726,18 +726,39 @@ endif()
 
 # extension libraries (custom operators, custom subgraphs) are built by default
 add_library(customop_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/gemm_lib.cc)
+add_library(transposecsr_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposecsr_lib.cc)
+add_library(transposerowsp_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/transposerowsp_lib.cc)
 add_library(subgraph_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_subgraph/subgraph_lib.cc)
+add_library(pass_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_pass/pass_lib.cc)
 target_include_directories(customop_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposecsr_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(transposerowsp_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 target_include_directories(subgraph_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
+target_include_directories(pass_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 if(USE_CUDA)
   add_library(customop_gpu_lib SHARED ${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/relu_lib.cu)
   target_include_directories(customop_gpu_lib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/mxnet)
 endif()
-if(MSVC)
+if(UNIX)

Review comment:
       those things can be deleted

##########
File path: Makefile
##########
@@ -686,21 +686,36 @@ pylint:
 	python3 -m pylint --rcfile=$(ROOTDIR)/ci/other/pylintrc --ignore-patterns=".*\.so$$,.*\.dll$$,.*\.dylib$$" python/mxnet
 
 # MXNet extension dynamically loading libraries
-EXT_LIBS = build/libcustomop_lib.so build/libsubgraph_lib.so
+EXT_LIBS = build/libcustomop_lib.so build/libtransposecsr_lib.so build/libtransposerowsp_lib.so build/libsubgraph_lib.so build/libpass_lib.so
 ifeq ($(USE_CUDA), 1)
         EXT_LIBS += build/libcustomop_gpu_lib.so
 endif
 extension_libs: $(EXT_LIBS)
 
 build/libcustomop_lib.so:
 	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/gemm_lib.cc -o /dev/null -I include/mxnet
 	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/gemm_lib.cc -o $@ -I include/mxnet
+build/libtransposecsr_lib.so:
+	@mkdir -p $(@D)
+	$(CXX) -shared -fPIC -std=c++11 example/extensions/lib_custom_op/transposecsr_lib.cc -o /dev/null -I include/mxnet
+	$(CXX) -shared -fPIC -std=c++17 example/extensions/lib_custom_op/transposecsr_lib.cc -o $@ -I include/mxnet

Review comment:
       we should write explicitly that it requires gcc 7, or whatever other prerequisite, to use c++17

##########
File path: example/extensions/lib_custom_op/gemm_lib.cc
##########
@@ -87,20 +87,20 @@ MXReturnValue forward(std::map<std::string, std::string> attrs,
  ***** gradient outputs
  * outputs[0] = dA; outputs[1] = dB
  */
-MXReturnValue backward(std::map<std::string, std::string> attrs,
-                       std::vector<MXTensor> inputs,
-                       std::vector<MXTensor> outputs,
-                       OpResource res) {
+MXReturnValue backward(const std::unordered_map<std::string, std::string>& attrs,
+                       std::vector<MXTensor>* inputs,

Review comment:
       don't forget to update the lib_custom_op README with this change




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411688191



##########
File path: include/mxnet/lib_api.h
##########
@@ -529,7 +572,7 @@ std::string getShapeAt(const std::string& shape, unsigned index) {
 }
 
 /* \brief get dtype value from list of dtypes string
- * format: [1] or [1,2]
+ * format: [1] or [1,2], returns "1" or "2"

Review comment:
       Same as above.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r412313571



##########
File path: example/extensions/lib_custom_op/README.md
##########
@@ -200,6 +208,9 @@ If the number of input and output tensors are fixed, you can use hard-coded numb
 * **inferType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for output tensor data types you need to assign.
 For example, if this operator has one input and one output, and data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
 
+* **inferSType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input storage types corresponding to the input tensors. The 3rd argument is the placeholder for output storage types you need to assign.
+For example, if this operator has one input and one output, and data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.

Review comment:
       data type doesn’t change -> data storage type doesn’t change

##########
File path: example/extensions/lib_pass/README.md
##########
@@ -0,0 +1,190 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+Custom Graph Pass Example and Tutorial
+=======================================
+
+## Introduction
+
+Adding custom graph passes in MXNet used to require deep understanding of the MXNet backend, including nnvm pass registration and other internal classes, followed by recompiling MXNet from source. This feature allows adding custom graph passes by dynamically loading external libraries at runtime.
+
+This custom graph pass feature enables users to write custom model modification strategies without compiling against all of the MXNet header files and dependencies. When a library containing custom passes is loaded dynamically, the components found in the library will be registered in MXNet so that users can use them natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+To run the following example, the build type of MXNet doesn’t matter since the custom pass doesn’t interact with the execution of other native MXNet features. Note that if you want to use your custom pass with models running on GPU, you still need an MXNet CUDA build. 
+
+### Run An Example
+
+You can start getting familiar with custom passes by running an example provided in the **example/extensions/lib_pass** directory. The `myPass` example just copies the input graph to the output. Go to the **lib_pass** directory and follow these steps:
+
+1. Run `make`. The Makefile will generate the dynamic library **libpass_lib.so** which is compiled from the `pass_lib.cc` file. This is the library you are going to load that contains everything for the custom pass.
+2. Run `python test_pass.py`. It first loads the above library, finds the components, registers them in the MXNet backend, executes the pass on the model, runs the operators like regular MXNet operators, and outputs the result. Below is the output when running the `python test_pass.py` command. Notice that it loads 2 passes: myPass and jsonPass.
+
+```
+[10:38:03] src/c_api/c_api.cc:286: Found 0 operators in library
+[10:38:03] src/c_api/c_api.cc:785: Found 0 partitioners in library
+[07:14:00] src/c_api/c_api.cc:887: Found 2 graph passes in library
+[07:14:00] src/c_api/c_api.cc:902:       Graph Pass [0] myPass
+[07:14:00] src/c_api/c_api.cc:902:       Graph Pass [1] jsonPass
+```
+
+### Basic Files For Custom Pass Library
+* **lib_pass/pass_lib.cc**: This file has a source code implementation of all required components of a custom pass; it also shows how to register them so that they can be loaded by MXNet.
+* **lib_pass/Makefile**: This file compiles the source code to a dynamic shared library, with a header file `include/mxnet/lib_api.h` from MXNet source code. Currently the custom pass is compatible with C++11 onwards.
+* **lib_pass/test_pass.py**: This file calls `mx.library.load(‘libpass_lib.so’)` to load the library containing the custom components, executes the pass on the model using the `optimize_for` API, and prints outputs of the forward passes. The outputs should be the same as the regular MXNet forward pass without running the pass.
+* **include/mxnet/lib_api.h**: This file from MXNet source code is the single header file needed to include all necessary data types and function prototypes for writing a custom library. You can either specify the include path in the `Makefile`, or copy the header file over to `example/extensions/lib_pass` folder. Note that apart from this header, the custom library is independent of MXNet source.
+## Writing Custom Pass Library
+To build your own library containing a custom pass, compose a C++ source file like `mypass_lib.cc`, include `lib_api.h` header file, and write your custom pass with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PASS` - Pass Registration Macro
+- `graphPass` - Pass Implementation
+Then compile it to the `mypass_lib.so` dynamic library using the following command:
+```bash
+g++ -shared -fPIC -std=c++11 mypass_lib.cc -o libmypass_lib.so -I ../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and execute your pass on a model:
+
+```python
+import mxnet as mx
+mx.library.load(‘libmypass_lib.so’)

Review comment:
       ‘libmypass_lib.so’  -> 'libmypass_lib.so'

##########
File path: example/extensions/lib_custom_op/README.md
##########
@@ -200,6 +208,9 @@ If the number of input and output tensors are fixed, you can use hard-coded numb
 * **inferType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for output tensor data types you need to assign.
 For example, if this operator has one input and one output, and data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
 
+* **inferSType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input storage types corresponding to the input tensors. The 3rd argument is the placeholder for output storage types you need to assign.
+For example, if this operator has one input and one output, and data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.

Review comment:
       a list of input storage types corresponding to the input tensors -> a list of input storage types corresponding to the input tensors (dense, row_sparse, or CSR). For details, see https://cwiki.apache.org/confluence/display/MXNET/A+Guide+to+Implementing+Sparse+Operators+in+MXNet+Backend 
   
   It would be good to include the link above in case people wonder why/if inferSType is needed. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r411650846



##########
File path: example/extensions/lib_custom_op/relu_lib.cu
##########
@@ -29,93 +29,93 @@
 #define NumThreadPerBlock 256 // mxnet recommended cuda thread number per block
 
 __global__ void relu_gpu_forward(float *out, float *in, int64_t N) {
-    int tid = blockIdx.x * blockDim.x + threadIdx.x;
-    if (tid < N)
-        out[tid] = in[tid] > 0 ? in[tid] : 0;
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid < N)
+    out[tid] = in[tid] > 0 ? in[tid] : 0;
 }
 
 __global__ void relu_gpu_backward(float *ingrad, float *outgrad, float *indata, int64_t N) {
-    int tid = blockIdx.x * blockDim.x + threadIdx.x;
-    if (tid < N)
-        ingrad[tid] = indata[tid] > 0 ? 1 * outgrad[tid] : 0;
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid < N)
+    ingrad[tid] = indata[tid] > 0 ? 1 * outgrad[tid] : 0;
 }
 
-MXReturnValue forwardCPU(std::map<std::string, std::string> attrs,
-                         std::vector<MXTensor> inputs,
-                         std::vector<MXTensor> outputs,
-                         OpResource res) {
-    float* in_data = inputs[0].data<float>();
-    float* out_data = outputs[0].data<float>();
-    for (int i=0; i<inputs[0].size(); i++) {
-        out_data[i] = in_data[i] > 0 ? in_data[i] : 0;
-    }
-    return MX_SUCCESS;
+MXReturnValue forwardCPU(const std::unordered_map<std::string, std::string>& attrs,
+                         std::vector<MXTensor>* inputs,
+                         std::vector<MXTensor>* outputs,
+                         const OpResource& res) {
+  float* in_data = inputs->at(0).data<float>();
+  float* out_data = outputs->at(0).data<float>();
+  for (int i=0; i<inputs->at(0).size(); i++) {
+    out_data[i] = in_data[i] > 0 ? in_data[i] : 0;
+  }
+  return MX_SUCCESS;
 }
 
-MXReturnValue backwardCPU(std::map<std::string, std::string> attrs,
-                          std::vector<MXTensor> inputs,
-                          std::vector<MXTensor> outputs,
-                          OpResource res) {
-    float* out_grad = inputs[0].data<float>();
-    float* in_data = inputs[1].data<float>();
-    float* in_grad = outputs[0].data<float>();
-    for (int i=0; i<inputs[1].size(); i++) {
-        in_grad[i] = in_data[i] > 0 ? 1 * out_grad[i] : 0;
-    }
-    return MX_SUCCESS;
+MXReturnValue backwardCPU(const std::unordered_map<std::string, std::string>& attrs,
+                          std::vector<MXTensor>* inputs,
+                          std::vector<MXTensor>* outputs,
+                          const OpResource& res) {
+  float* out_grad = inputs->at(0).data<float>();
+  float* in_data = inputs->at(1).data<float>();
+  float* in_grad = outputs->at(0).data<float>();
+  for (int i=0; i<inputs->at(1).size(); i++) {
+    in_grad[i] = in_data[i] > 0 ? 1 * out_grad[i] : 0;
+  }
+  return MX_SUCCESS;
 }
 
-MXReturnValue forwardGPU(std::map<std::string, std::string> attrs,
-                         std::vector<MXTensor> inputs,
-                         std::vector<MXTensor> outputs,
-                         OpResource res) {
-    float* in_data = inputs[0].data<float>();
-    float* out_data = outputs[0].data<float>();
+MXReturnValue forwardGPU(const std::unordered_map<std::string, std::string>& attrs,
+                         std::vector<MXTensor>* inputs,
+                         std::vector<MXTensor>* outputs,
+                         const OpResource& res) {
+  float* in_data = inputs->at(0).data<float>();
+  float* out_data = outputs->at(0).data<float>();
 
-    mx_stream_t cuda_stream = res.get_cuda_stream();
-    int64_t N = inputs[0].size();
-    int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
+  mx_stream_t cuda_stream = res.get_cuda_stream();
+  int64_t N = inputs->at(0).size();
+  int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
 
-    relu_gpu_forward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(out_data, in_data, N);
+  relu_gpu_forward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(out_data, in_data, N);
 
-    return MX_SUCCESS;
+  return MX_SUCCESS;
 }
 
-MXReturnValue backwardGPU(std::map<std::string, std::string> attrs,
-                          std::vector<MXTensor> inputs,
-                          std::vector<MXTensor> outputs,
-                          OpResource res) {
-    float* out_grad = inputs[0].data<float>();
-    float* in_data = inputs[1].data<float>();
-    float* in_grad = outputs[0].data<float>();
-
-    mx_stream_t cuda_stream = res.get_cuda_stream();
-    int64_t N = inputs[0].size();
-    int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
+MXReturnValue backwardGPU(const std::unordered_map<std::string, std::string>& attrs,
+                          std::vector<MXTensor>* inputs,
+                          std::vector<MXTensor>* outputs,
+                          const OpResource& res) {
+  float* out_grad = inputs->at(0).data<float>();
+  float* in_data = inputs->at(1).data<float>();
+  float* in_grad = outputs->at(0).data<float>();
 
-    relu_gpu_backward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(in_grad, out_grad, in_data, N);
+  mx_stream_t cuda_stream = res.get_cuda_stream();
+  int64_t N = inputs->at(0).size();
+  int num_block = (N + NumThreadPerBlock - 1) / NumThreadPerBlock;
+  relu_gpu_backward<<<num_block,NumThreadPerBlock,0,cuda_stream>>>(in_grad, out_grad, in_data, N);
 
-    return MX_SUCCESS;
+  return MX_SUCCESS;
 }
 
-MXReturnValue parseAttrs(std::map<std::string, std::string> attrs, int* num_in, int* num_out) {
-    *num_in = 1;
-    *num_out = 1;
-    return MX_SUCCESS;
+MXReturnValue parseAttrs(const std::unordered_map<std::string, std::string>& attrs,
+                         int* num_in, int* num_out) {
+  *num_in = 1;
+  *num_out = 1;
+  return MX_SUCCESS;
 }
 
-MXReturnValue inferType(std::map<std::string, std::string> attrs,
-                        std::vector<int> &intypes,
-                        std::vector<int> &outtypes) {
-    outtypes[0] = intypes[0];
-    return MX_SUCCESS;
+MXReturnValue inferType(const std::unordered_map<std::string, std::string>& attrs,
+                        const std::vector<int>& intypes,

Review comment:
       Same as in inferShape.
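[Editor's note] The diff above launches `relu_gpu_forward` and `relu_gpu_backward` kernels whose bodies are not shown in this hunk. Below is a hedged CPU reference sketch of what those kernels compute, based on standard ReLU semantics, plus the ceiling-division grid sizing used in the launch; the helper names (`relu_cpu_forward`, `relu_cpu_backward`, `num_blocks`) are illustrative and not part of the library API.

```cpp
#include <algorithm>
#include <cstdint>

// CPU reference for the forward kernel: ReLU clamps negative inputs to zero.
void relu_cpu_forward(float* out, const float* in, int64_t N) {
  for (int64_t i = 0; i < N; ++i)
    out[i] = std::max(in[i], 0.0f);
}

// CPU reference for the backward kernel: the gradient passes through
// only where the forward input was positive.
void relu_cpu_backward(float* in_grad, const float* out_grad,
                       const float* in_data, int64_t N) {
  for (int64_t i = 0; i < N; ++i)
    in_grad[i] = (in_data[i] > 0.0f) ? out_grad[i] : 0.0f;
}

// Grid sizing matching the launches in the diff:
// ceiling division of N elements by the thread-block size.
int num_blocks(int64_t N, int NumThreadPerBlock) {
  return static_cast<int>((N + NumThreadPerBlock - 1) / NumThreadPerBlock);
}
```

In the GPU version each loop iteration maps to one CUDA thread, which is why the launch computes `(N + NumThreadPerBlock - 1) / NumThreadPerBlock` blocks: it rounds up so every element gets a thread.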




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] leezu commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
leezu commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-617296259


   @ptrendx let's not merge commits named "[WIP]" to master. You can edit the name prior to merge





[GitHub] [incubator-mxnet] samskalicky commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-617306223


   > @ptrendx let's not merge commits named "[WIP]" to master. You can edit the name prior to merge
   
   We didn't want to rerun the whole CI. Did we fix the problem where renaming the PR reruns CI, @leezu?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#discussion_r412352205



##########
File path: example/extensions/lib_custom_op/README.md
##########
@@ -200,6 +208,9 @@ If the number of input and output tensors are fixed, you can use hard-coded numb
 * **inferType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for output tensor data types you need to assign.
 For example, if this operator has one input and one output, and the data type doesn't change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
 
+* **inferSType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input storage types corresponding to the input tensors. The 3rd argument is the placeholder for output storage types you need to assign.
+For example, if this operator has one input and one output, and the storage type doesn't change, then you can copy the input storage type into the output placeholder to populate it.

Review comment:
       We need a whole overview for Sparse. Maybe you can help us add another section to the README about that.
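[Editor's note] To make the `inferSType` description above concrete, here is a minimal sketch of what such a function might look like, modeled on the `inferType` signature shown in the earlier diff. The enum values and exact parameter names (`instypes`, `outstypes`) are assumptions for illustration, not the library's actual definitions in `lib_api.h`.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Assumed return codes mirroring the MXReturnValue usage in the diff.
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };

// One input, one output: the output storage type simply mirrors the input's.
MXReturnValue inferSType(const std::unordered_map<std::string, std::string>& attrs,
                         const std::vector<int>& instypes,
                         std::vector<int>* outstypes) {
  (void)attrs;  // no attributes needed for this pass-through case
  outstypes->at(0) = instypes.at(0);
  return MX_SUCCESS;
}
```

For an operator that always produces dense output regardless of input storage, the body would instead assign a fixed dense storage-type constant to `outstypes->at(0)`.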







[GitHub] [incubator-mxnet] ptrendx commented on issue #17885: [WIP] MXNet Extensions enhancements

Posted by GitBox <gi...@apache.org>.
ptrendx commented on issue #17885:
URL: https://github.com/apache/incubator-mxnet/pull/17885#issuecomment-617304972


   Oops, you are right, sorry, I missed that.

