Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/07 19:59:37 UTC

[GitHub] [incubator-mxnet] rondogency opened a new pull request #17241: Add CustomOp tutorial doc

rondogency opened a new pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241
 
 
   ## Description ##
   Add a brief tutorial doc on CustomOp, with examples.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369887945
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN; the custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running the examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load; it contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and outputs the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has the source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compiles the source code into a dynamic shared library, using the header file **include/mxnet/lib_api.h** from the MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
 
 Review comment:
   talked with aaron offline, and it only works well when everything is shown on the website, and it requires knowledge about sphinx plugin, so we gonna shelve it now


[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369888289
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN; the custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running the examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load; it contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and outputs the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has the source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compiles the source code into a dynamic shared library, using the header file **include/mxnet/lib_api.h** from the MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies the number of input and output tensors for the custom operator; it is also where a custom operator can validate the attributes (i.e. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string, std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how the custom operator infers output data types from input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how the custom operator infers output tensor shapes from input shapes.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of the forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+There are also some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of the backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs as mutable, which is useful when aux parameters are used for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes 3 arguments. The 1st argument is an input: the attributes passed all the way from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers. Otherwise you can use the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes 3 arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for the output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and the data type doesn’t change, you can do `outtypes[0] = intypes[0]` to populate the data type.
+
+* **inferShape**: This function is similar to the inferType function, except it populates the output data shapes. You need to figure out the shape of each output tensor for this computation.
 
 Review comment:
   agree with aaron that we can mention a brief real world example like "dropping channels of a image" to show user how to find the output tensor shape


[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369889710
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN; the custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running the examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load; it contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and outputs the result.
+
 
 Review comment:
   we can add a small section at the beginning of "Writing Operator" section, and list out some essential building blocks that goes to .cc file, and some blocks that can be swapped out, to give a user an overview of whole procedure


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365006013
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
 
 Review comment:
   maybe change to pre-requisites? or make "Have MXNet Ready" a subsection?


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366112812
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN; the custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running the examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load; it contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and outputs the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has the source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compiles the source code into a dynamic shared library, using the header file **include/mxnet/lib_api.h** from the MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies the number of input and output tensors for the custom operator; it is also where a custom operator can validate the attributes (i.e. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string, std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how the custom operator infers output data types from input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how the custom operator infers output tensor shapes from input shapes.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of the forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+There are also some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of the backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs as mutable, which is useful when aux parameters are used for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes 3 arguments. The 1st argument is an input: the attributes passed all the way from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers. Otherwise you can use the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes 3 arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for the output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and the data type doesn’t change, you can do `outtypes[0] = intypes[0]` to populate the data type.
+
+* **inferShape**: This function is similar to the inferType function, except it populates the output data shapes. You need to figure out the shape of each output tensor for this computation.
+
+* **forward**: This function executes the main forward computation. It takes 4 arguments. The 1st argument is the attributes. The 2nd argument is the input MXTensors, which store all the data and info of the input ndarrays. The 3rd argument is the output MXTensors. The 4th argument is an OpResource object for memory allocation and other utilities. Additionally, you can use the dltensor structure stored in MXTensor as a more standardized data structure for computing.
+
+* **backward**: This function performs the backward gradient computation. It is similar to the forward function, except that you need to work out the gradient formula for the backward pass.
+
+* **mutateInputs**: This function is for marking mutable inputs. It takes 2 arguments. The 1st argument is the attributes. The 2nd argument is a list of the indices of the mutable inputs among all input tensors. It is useful when some inputs are auxiliary model parameters that might be altered during the forward/backward computation. Remember that the indices in input_indices must not exceed the number of inputs.
+
+### Stateful Custom Operator:
+
+A stateful operator is useful when a forward/backward call needs some data or ‘state’ from previous forward/backward calls. Normally we create a class whose instance variables store the state used for computing or caching.
+
+Most of the building blocks for making a stateful custom operator are the same as for a regular custom operator, except that it registers **createOpState** instead of the forward function for the computation.
+
+* [createOpState](./gemm_lib.cc#L204) - Create stateful operator instance:
+    * This function takes 2 arguments. 1st argument is attributes. 2nd argument is a placeholder for CustomStatefulOp object. You must [define a class that inherits CustomStatefulOp](./gemm_lib.cc#L178) and override the forward function (optionally the backward function), then you need to create an instance of your class and assign it to the placeholder. In this way all the forward/backward calls will use the same methods in that instance, and the instance is able to keep the state of the operator.
 
 Review comment:
   ```suggestion
       * This function takes two arguments. The 1st argument is attributes. The 2nd argument is a placeholder for `CustomStatefulOp` object. You must [define a class that inherits CustomStatefulOp](./gemm_lib.cc#L178) and override the forward function (optionally the backward function). Then you need to create an instance of your class and assign it to the placeholder. In this way, all of the forward/backward calls will use the same methods in that instance, and the instance is able to keep the state of the operator.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366109160
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
 
 Review comment:
   Introduction? What are we going to accomplish in this example?


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r370392145
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,147 @@
+CustomOp Example and Tutorial
+=============================
+
+## Introduction
+
+Adding new operators in MXNet requires an understanding of MXNet backend operator registration and recompiling MXNet with all its dependencies. Users can use the old Python custom operator to add new operators, but it is slow, complicated, and has a poor adoption rate. So our approach for adding custom operators is to enable dynamic loading, at runtime, of C++ custom operators compiled in external libraries.
+
+Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be re-registered in MXNet so that users can call those operators natively just like other built-in operators.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN; the custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples provided in the **example/extensions/lib_custom_op** directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to the `lib_custom_op` directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, print "Found x operators", then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has a source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compiles the source code into a dynamic shared library, using the header file `include/mxnet/lib_api.h` from the MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both NDArray and Symbol APIs, and prints outputs of the forward and backward passes. The outputs should be the same as the regular MXNet `gemm` operator.
+
+## Writing Custom Operator Library:
+
+For building a library containing your own custom operator, compose a C++ source file like `myop_lib.cc`, include `lib_api.h` header file, and write your custom operator implementation with those essential functions:
 
 Review comment:
   ```suggestion
   For building a library containing your own custom operator, compose a C++ source file like `myop_lib.cc`, include `lib_api.h` header file, and write your custom operator implementation with these essential functions:
   ```


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363930978
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN; the custom operator doesn’t interfere with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operators by running the examples we provide in the *example/extensions/lib_custom_op* directory. There are two examples, a simple 2D gemm operator and a subgraph operator, along with a Makefile.
+
+Let’s start with the gemm operator. Go to that directory and follow these steps:
+
+1. Run *make gemm_lib*. The Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load; it contains everything for the custom gemm operator.
+2. Run *python test_gemm.py*. It’ll first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and prints the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has the source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compiles the source code into a dynamic shared library, using the header file include/mxnet/lib_api.h from the MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load the custom operator, invokes the operator using both the ndarray and symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attribute Parser: This function specifies the number of input and output tensors for the custom operator; it is also where a custom operator can validate the attributes (i.e. options) specified by the user.
+
+* inferType - Type Inference: This function specifies how the custom operator infers output data types using input data types.
+
+* inferShape - Shape Inference: This function specifies how the custom operator infers output tensor shapes using input shapes.
+
+* forward - Forward function: This function specifies the computation of the forward pass of the operator.
+
+* REGISTER_OP(my_op_name) Macro: This macro registers the custom operator with all MXNet APIs under its name, and you need to call setters to bind the above functions to the registered operator.
+
+Also there are some optional functions you can specify:
+
+* backward - Backward Gradient function: This function specifies the computation of the backward pass of the operator.
+
+* mutateInputs - Mutate Input Mark: This function allows you to mark some inputs to be mutable inputs, useful when using aux parameters for BatchNorm-like operators.
+
+Let’s take a closer look at those registry functions:
+
+* parseAttrs: This function takes 3 parameters. 1st parameter is an input, which is the attributes passed all the way from Python code. When user calls mx.nd.my_op_name(s,t,keyword=1), the keyword is passed to the attributes as an entry of the map. 2nd & 3rd parameters are outputs, and you need to assign num_in/num_out values to those placeholders.  If the number of input and output tensors are fixed, you can use hard-coded numbers. Otherwise you can get the keyword value to determine the num_in and num_out.
 
 Review comment:
   parameters ==> arguments
   assign ==> set
   keyword value ==> user-specified attributes
   num_in and num_out ==> number of inputs and outputs
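To make the reviewed contract concrete, here is a hedged pure-Python sketch of what a parseAttrs implementation does with its arguments. The real function is C++ (in gemm_lib.cc); modeling the 2nd and 3rd output arguments as return values, and the `"num_inputs"` attribute name, are illustrative assumptions only:

```python
# Illustrative sketch only: models the parseAttrs contract in Python.
# The 1st argument is the attribute map passed down from Python code; the
# 2nd and 3rd arguments (num_in/num_out pointers in C++) are modeled here
# as return values.
def parse_attrs(attrs):
    # A gemm-like operator has a fixed signature of 2 inputs and 1 output,
    # so the counts can be hard-coded. A variadic operator would instead
    # read a user-specified attribute such as "num_inputs".
    num_in = int(attrs.get("num_inputs", 2))
    num_out = 1
    return num_in, num_out
```

For example, a call like `mx.nd.my_op_name(s, t, keyword=1)` would arrive here with `attrs = {"keyword": "1"}`, since attribute values are passed as strings.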

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369833473
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
 
 Review comment:
   yes


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366111150
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+Also there are some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs to be mutable inputs, useful when using aux parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes 3 arguments. 1st argument is an input, which is the attributes passed all the way from Python code. When user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. 2nd & 3rd arguments are outputs, and you need to set number of inputs and outputs values to those placeholders.  If the number of input and output tensors are fixed, you can use hard-coded numbers. Otherwise you can get the user-specified attributes to determine the number of inputs and outputs.
 
 Review comment:
   ```suggestion
   * **parseAttrs**: This function takes three arguments. The 1st argument is an input, which is the attributes passed all the way from Python code. When you call `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. The 2nd & 3rd arguments are outputs, and you need to set number of inputs and outputs values to those placeholders.  If the number of input and output tensors are fixed, you can use hard-coded numbers. Otherwise, you can get the user-specified attributes to determine the number of inputs and outputs.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110494
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
 
 Review comment:
   ```suggestion
       * This function specifies how the custom operator infers output tensor shape using input shape.
   ```
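For intuition about the shape rule this reviewed line describes, here is a hedged pure-Python sketch of what a gemm-like inferShape computes: (n, k) times (k, m) gives (n, m). The function name and error handling are illustrative assumptions, not the actual gemm_lib.cc code:

```python
# Illustrative sketch: derive the output shape of a 2D gemm from the two
# input shapes, as an inferShape implementation would.
def infer_shape_gemm(inshapes):
    (n, k1), (k2, m) = inshapes          # expects exactly two 2D inputs
    if k1 != k2:
        raise ValueError("gemm: inner dimensions must match")
    return [(n, m)]                      # shapes for the single output tensor
```

The C++ version writes the result into the `outshapes` output argument instead of returning it.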


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366108930
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
 
 Review comment:
   Colons aren't needed in the titles.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369906264
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
 
 Review comment:
   Here's an example that could work for you:
   https://build-me-the-docs-please.readthedocs.io/en/latest/Using_Sphinx/ShowingCodeExamplesInSphinx.html#literalinclude-directive
   


[GitHub] [incubator-mxnet] samskalicky commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-579927006
 
 
   @mxnet-label-bot update [pr-awaiting-merge]


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366109842
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
 
 Review comment:
   ```suggestion
   2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, print "Found x operators", then invoke the operator like a regular MXNet operator and output the result.
   ```


[GitHub] [incubator-mxnet] rondogency commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-572773130
 
 
   @samskalicky @wkcn resolved all the comments!


[GitHub] [incubator-mxnet] rondogency commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-577918479
 
 
   @aaronmarkham thanks for the approval! I resolved your comments since I have to add license header anyway.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r370392906
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,147 @@
+CustomOp Example and Tutorial
+=============================
+
+## Introduction
+
+Adding new operators in MXNet requires an understanding of MXNet backend operator registration and recompiling MXNet with all its dependencies. Users can use the old Python custom operator to add new operators, but it is slow, complicated, and has a poor adoption rate. So our approach for adding custom operators is to enable dynamic loading, at runtime, of C++ custom operators compiled in external libraries.
+
+Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be re-registered in MXNet so that users can call those operators natively just like other built-in operators.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples provided in the **example/extensions/lib_custom_op** directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to `lib_custom_op` directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, print "Found x operators", then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has a source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file `include/mxnet/lib_api.h` from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both NDArray and Symbol APIs, and prints outputs of the forward and backward passes. The outputs should be the same as the regular MXNet `gemm` operator.
+
+## Writing Custom Operator Library:
+
+To build a library containing your own custom operator, compose a C++ source file like `myop_lib.cc`, include the `lib_api.h` header file, and write your custom operator implementation with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_OP` - Operator Registration Macro
+- `parseAttrs` - Attribute Parser
+- `inferType` - Type Inference
+- `inferShape` - Shape Inference
+- `forward` - Forward Computation (can be replaced with `createOpState`, see below for details)
+
+Then compile it to a `libmyop_lib.so` dynamic library using the following command:
+
 
 Review comment:
   Surround this with
   code block `bash`


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363929814
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are two examples, a simple 2D gemm operator and a subgraph operator, plus a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. Run *make gemm_lib*. The Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load, and it contains everything for the custom gemm operator.
+2. Run *python test_gemm.py*. It’ll first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and prints the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
 
 Review comment:
   Attributes ==> Attribute
   
   Also mention that this is where a custom operator can validate the attributes (ie. options) specified by the user
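As a hedged illustration of the validation the reviewer is asking for, the sketch below is pure Python with an invented option name `alpha`; in reality the check would live inside the C++ parseAttrs:

```python
# Illustrative sketch: parseAttrs doubles as a validation hook, rejecting
# bad user-specified options before any computation runs.
def parse_attrs_validated(attrs):
    if "alpha" in attrs:               # "alpha" is an invented example option
        try:
            float(attrs["alpha"])
        except ValueError:
            return None                # signal failure back to the backend
    return (2, 1)                      # success: 2 inputs, 1 output
```

The real C++ function reports failure by returning an MXReturnValue error code rather than `None`.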


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r370393532
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,147 @@
+CustomOp Example and Tutorial
+=============================
+
+## Introduction
+
+Adding new operators in MXNet requires an understanding of MXNet backend operator registration and recompiling MXNet with all its dependencies. Users can use the old Python custom operator to add new operators, but it is slow, complicated, and has a poor adoption rate. So our approach for adding custom operators is to enable dynamic loading, at runtime, of C++ custom operators compiled in external libraries.
+
+Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be re-registered in MXNet so that users can call those operators natively just like other built-in operators.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples provided in the **example/extensions/lib_custom_op** directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to `lib_custom_op` directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, print "Found x operators", then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has a source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file `include/mxnet/lib_api.h` from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both NDArray and Symbol APIs, and prints outputs of the forward and backward passes. The outputs should be the same as the regular MXNet `gemm` operator.
+
+## Writing Custom Operator Library:
+
+For building a library containing your own custom operator, compose a C++ source file like `myop_lib.cc`, include `lib_api.h` header file, and write your custom operator implementation with those essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_OP` - Operator Registration Macro
+- `parseAttrs` - Attribute Parser
+- `inferType` - Type Inference
+- `inferShape` - Shape Inference
+- `forward` - Forward Computation (can be replaced with `createOpState`; see below for details)
+
+Then compile it into the dynamic library `libmyop_lib.so` using the following command:
+
+    g++ -shared -fPIC -std=c++11 myop_lib.cc -o libmyop_lib.so -I ../../../include/mxnet
+
+Finally, you can write a Python script to load the library and run your custom operator:
 
 Review comment:
   ```suggestion
   Finally, you can write a Python script to load the library and run your custom operator:
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366111402
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+Also there are some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs to be mutable inputs, useful when using aux parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes three arguments. The 1st argument is an input: the attributes passed all the way from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers. Otherwise, you can use the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes 3 arguments. 1st argument is the attributes (same as above). 2nd argument is the a list of input data types corresponding to the input tensors. 3rd argument is the placeholder for output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
 
 Review comment:
   ```suggestion
  * **inferType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for output tensor data types you need to assign. For example, if this operator has one input and one output, and data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366112461
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+Also there are some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs to be mutable inputs, useful when using aux parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes three arguments. The 1st argument is an input: the attributes passed all the way from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers. Otherwise, you can use the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for the output tensor data types you need to assign. For example, if this operator has one input and one output, and the data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
+
+* **inferShape**: This function is similar to the inferType function, except it is used for populating the output data shapes. You need to figure out the shape of each output tensor for this computation.
+
+* **forward**: This function executes the main forward computation. It takes four arguments. The 1st argument is the attributes. The 2nd argument is the input MXTensors, which store all the data and info of the input ndarrays. The 3rd argument is the output MXTensors. The 4th argument is the OpResource object, used for memory allocation and other utilities. Additionally, you can use the dltensor structure stored in MXTensor as a more standardized data structure for computing.
+
+* **backward**: This function performs the backward gradient computation. It is structured like the forward function, but you need to work out the backward gradient formula for your operator yourself.
+
+* **mutateInputs**: This function marks mutable inputs. It takes two arguments. The 1st argument is the attributes. The 2nd argument is a list of the indices of the mutable inputs among all input tensors. It is useful when some inputs are auxiliary model parameters that might be altered during the forward/backward computation. Remember that the index numbers in input_indices should not exceed the number of inputs.
+
+### Stateful Custom Operator:
+
+Stateful operator is useful when a forward/backward call needs some data or ‘state’ from previous forward/backward calls. Normally we create a class and make instance variables store the states used for computing or caching.
 
 Review comment:
   ```suggestion
   A stateful custom operator is useful when a forward/backward call needs some data or ‘state’ from previous forward/backward calls. Normally we create a class, and make instance variables store the states used for computing or caching.
   ```


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363930328
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are 2 examples: a simple 2D gemm operator, a subgraph operator, and a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. run *make gemm_lib*, the Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run *python test_gemm.py*, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, and print "Found x operators"; then invoke the operator like a regular MXNet operator and print the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
+
+* inferType - Type Inference: This function specifies how custom operator infers output data types using input data types
+
+* inferShape - Shape Inference: This function specifies how custom operator infers output tensor shape using input shape
+
+* forward - Forward function: This function specifies the computation of forward pass of the operator
+
+* REGISTER_OP(my_op_name) Macro: This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+Also there are some operational functions you can specify:
+
+* backward - Backward Gradient function: This function specifies the computation of backward pass of the operator
+
+* mutateInputs - Mutate Input Mark: This function allows you to mark some inputs to be mutate inputs, useful when using aux parameters for BatchNorm-like operators
 
 Review comment:
   Mutate Input Mark ==> Specify Mutable Inputs


[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
wkcn commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r364501201
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are 2 examples: a simple 2D gemm operator, a subgraph operator, and a Makefile.
 
 Review comment:
   Some users may not know what is gemm, so we can provide the full name: gemm(Generalized Matrix Multiplication).


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110748
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
 
 Review comment:
   ```suggestion
       * This macro registers the custom operator to all of the MXNet APIs by its name. You need to call setters to bind the above functions to the registered operator.
   ```
   Is the last sentence clear enough? I'm not really sure what you mean.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366109555
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
 
 Review comment:
   ```suggestion
   You can start getting familiar with custom operators by running some examples provided in the **example/extensions/lib_custom_op** directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to `lib_custom_op` directory and follow these steps:
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366112551
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under the given name; you need to call the setters to bind the functions above to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+There are also some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of the backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs to be mutable inputs. It is useful when using aux parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at these registered functions:
+
+* **parseAttrs**: This function takes three arguments. The 1st argument is an input: the attributes passed all the way from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword argument is passed to the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers; otherwise you can use the user-specified attributes to determine the number of inputs and outputs.
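
As a sketch of what this can look like for a gemm-like operator with a fixed two-input, one-output signature (the `MXReturnValue` enum below is a stand-in for the one `lib_api.h` provides, and the `alpha` attribute is a hypothetical option, shown only to make the example self-contained):

```cpp
#include <cstdlib>
#include <map>
#include <string>

// Stand-in for the MXReturnValue type that lib_api.h provides; defined here
// only so this sketch is self-contained.
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };

// A gemm-like operator always takes two input matrices and produces one
// output, so the counts are hard-coded. The attrs map holds the keyword
// arguments the user passed from Python.
MXReturnValue parseAttrs(std::map<std::string, std::string> attrs,
                         int* num_in, int* num_out) {
  // Validate a hypothetical "alpha" option: it must parse as a number.
  if (attrs.count("alpha") != 0) {
    char* end = nullptr;
    std::strtod(attrs["alpha"].c_str(), &end);
    if (end == attrs["alpha"].c_str()) return MX_FAIL;  // not numeric
  }
  *num_in = 2;   // matrices A and B
  *num_out = 1;  // the product A x B
  return MX_SUCCESS;
}
```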
+
+* **inferType**: This function takes three arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for the output tensor data types you need to assign. For example, if this operator has one input and one output and the data type doesn’t change, you can do `outtypes[0] = intypes[0]` to populate the data type.
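
A minimal inferType sketch for a gemm-like operator might look like the following (again with a stand-in `MXReturnValue`; the real type comes from including `lib_api.h`):

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-in for the MXReturnValue type from lib_api.h (illustration only).
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };

// The output of a gemm keeps the data type of its inputs, so we require the
// two input types to match and propagate that type to the single output.
MXReturnValue inferType(std::map<std::string, std::string> attrs,
                        std::vector<int>& intypes,
                        std::vector<int>& outtypes) {
  if (intypes.size() != 2 || intypes[0] != intypes[1])
    return MX_FAIL;          // mixed-type gemm is not supported here
  outtypes[0] = intypes[0];  // output dtype follows the inputs
  return MX_SUCCESS;
}
```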
+
+* **inferShape**: This function is similar to the inferType function, except it populates the output data shapes. You need to figure out the shape of each output tensor for this computation.
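
For the gemm case, where an (m x k) matrix times a (k x n) matrix yields an (m x n) matrix, a sketch could be (stand-in `MXReturnValue` as before):

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-in for the MXReturnValue type from lib_api.h (illustration only).
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };

// For a (m x k) * (k x n) matrix product the output shape is (m x n).
MXReturnValue inferShape(std::map<std::string, std::string> attrs,
                         std::vector<std::vector<unsigned int>>& inshapes,
                         std::vector<std::vector<unsigned int>>& outshapes) {
  if (inshapes.size() != 2 ||
      inshapes[0].size() != 2 || inshapes[1].size() != 2 ||
      inshapes[0][1] != inshapes[1][0])
    return MX_FAIL;  // both inputs must be 2-D and inner dims must agree
  outshapes[0] = {inshapes[0][0], inshapes[1][1]};  // (m, n)
  return MX_SUCCESS;
}
```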
+
+* **forward**: This function executes the main forward computation. It takes four arguments. The 1st argument is the attributes. The 2nd argument is the input MXTensors, which store all the data and info of the input ndarrays. The 3rd argument is the output MXTensors. The 4th argument is the OpResource object, used for memory allocation and other utilities. Additionally, you can use the DLTensor structure stored in each MXTensor as a more standardized data structure for computing.
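
A naive sketch of a gemm forward pass over raw float data is shown below. The `MXTensor` and `OpResource` structs here are greatly simplified stand-ins for the real ones in `lib_api.h` (the real `MXTensor` also carries dtype info and a DLTensor view, and the real `OpResource` offers memory-allocation helpers):

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-ins for lib_api.h types, defined only to keep this sketch
// self-contained; they are much simpler than the real ones.
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };
struct MXTensor {
  float* data;                      // raw data pointer
  std::vector<unsigned int> shape;  // tensor dimensions
};
struct OpResource {};

// A naive O(m*n*k) gemm over the raw float data of the input tensors.
MXReturnValue forward(std::map<std::string, std::string> attrs,
                      std::vector<MXTensor> inputs,
                      std::vector<MXTensor> outputs,
                      OpResource res) {
  unsigned int m = inputs[0].shape[0], k = inputs[0].shape[1],
               n = inputs[1].shape[1];
  for (unsigned int i = 0; i < m; i++)
    for (unsigned int j = 0; j < n; j++) {
      float sum = 0.0f;
      for (unsigned int x = 0; x < k; x++)
        sum += inputs[0].data[i * k + x] * inputs[1].data[x * n + j];
      outputs[0].data[i * n + j] = sum;  // C[i][j] = sum_x A[i][x]*B[x][j]
    }
  return MX_SUCCESS;
}
```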
+
+* **backward**: This function performs the backward gradient computation. It is similar to the forward function, and you need to work out the gradient formula for your operator’s backward pass.
+
+* **mutateInputs**: This function is for marking mutable inputs. It takes two arguments. The 1st argument is the attributes. The 2nd argument is a list of input indices that are mutable among all input tensors. It is useful when some inputs are auxiliary model parameters and might be altered during forward/backward computation. Remember, the index number of `input_indices` should not exceed the number of inputs.
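
For example, a BatchNorm-like operator whose 3rd and 4th inputs are running-mean/variance aux states could mark them mutable like this (stand-in `MXReturnValue`; the particular indices are hypothetical):

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-in for the MXReturnValue type from lib_api.h (illustration only).
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };

// Suppose a BatchNorm-like operator whose 3rd and 4th inputs (indices 2, 3)
// are running-mean/variance aux states updated in place during forward.
MXReturnValue mutateInputs(std::map<std::string, std::string> attrs,
                           std::vector<int>& input_indices) {
  input_indices.push_back(2);  // running mean
  input_indices.push_back(3);  // running variance
  return MX_SUCCESS;
}
```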
+
+### Stateful Custom Operator:
+
+A stateful operator is useful when a forward/backward call needs some data or ‘state’ from previous forward/backward calls. Normally we create a class and let instance variables store the state used for computing or caching.
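
The idea can be sketched with a plain C++ class holding state across calls (the real stateful base class lives in `lib_api.h` and its forward/backward methods take tensor arguments; this counter is only an illustration):

```cpp
// Minimal illustration of a state-holding operator class: each call to
// Forward() can consult and update state kept across calls, which is the
// core idea behind a stateful custom operator.
class MyStatefulOp {
 public:
  explicit MyStatefulOp(int initial) : count(initial) {}
  // Returns how many forward passes have run, including this one.
  int Forward() { return ++count; }
 private:
  int count;  // state preserved between forward calls
};
```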
+
+Most of the building blocks for making a stateful custom operator are the same as for a regular custom operator, except that it registers `createOpState` instead of a forward function for the computation.
 
 Review comment:
   ```suggestion
   Most of the building blocks for making a stateful custom operator is the same as regular custom operator, except it’ll register `createOpState` instead of a forward function for the computation.
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363928024
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
 
 Review comment:
   intervene ==> interact


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366113614
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
 
 Review comment:
   I think this is missing a transition. How do I go from running this basic example to consuming the following info for my own op? Maybe even a simple example of customization for a particular use case would help.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110539
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
 
 Review comment:
   ```suggestion
       * This function specifies the computation of the forward pass of the operator.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366112292
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+* **backward**: This function is doing the backward gradient computation. It will be similar to forward function. And you need to  figure out the formula of backward.
+
+* **mutateInputs**: This function is for marking mutable inputs. It takes 2 arguments. 1st argument is the attributes. 2nd argument is a list of input indices that are mutable among all input tensors. It is useful when some inputs are auxiliary model parameters and might be altered during forward/backward computation. Remember the index number of input_indices should not exceed the number of inputs.
 
 Review comment:
   ```suggestion
   * **mutateInputs**: This function is for marking mutable inputs. It takes two arguments. The 1st argument is the attributes. The 2nd argument is a list of input indices that are mutable among all input tensors. It is useful when some inputs are auxiliary model parameters and might be altered during forward/backward computation. Remember, the index number of `input_indices` should not exceed the number of inputs.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110882
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs to be mutable inputs, useful when using aux parameters for BatchNorm-like operators.
 
 Review comment:
   ```suggestion
       * This function allows you to mark some inputs to be mutable inputs. It is useful when using aux parameters for BatchNorm-like operators.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r370393630
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,147 @@
+CustomOp Example and Tutorial
+=============================
+
+## Introduction
+
+Adding new operators in MXNet requires understanding MXNet backend operator registration and recompiling MXNet with all its dependencies. Users can write new operators with the old Python CustomOp mechanism, but it is slow and complicated, and has seen poor adoption. Our approach for adding custom operators is therefore to enable dynamic loading, at runtime, of C++ custom operators compiled into external libraries.
+
+Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be re-registered in MXNet so that users can call those operators natively just like other built-in operators.
+
+## Writing Custom Operator Library:
+
+To build a library containing your own custom operator, compose a C++ source file like `myop_lib.cc`, include the `lib_api.h` header file, and write your custom operator implementation with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_OP` - Operator Registration Macro
+- `parseAttrs` - Attribute Parser
+- `inferType` - Type Inference
+- `inferShape` - Shape Inference
+- `forward` - Forward Computation (can be replaced with `createOpState`; see below for details)
+
+Then compile it into the dynamic library `libmyop_lib.so` using the following command:
+
+    g++ -shared -fPIC -std=c++11 myop_lib.cc -o libmyop_lib.so -I ../../../include/mxnet
+
+Finally, you can write a Python script to load the library and run your custom operator:
+
 
 Review comment:
   Surround with code block `python`


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110828
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+Also there are some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of backward pass of the operator.
 
 Review comment:
   ```suggestion
       * This function specifies the computation of the backward pass of the operator.
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
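The required callbacks quoted in the README excerpt above (parseAttrs, inferType, inferShape, forward) can be sketched end to end. The following is a Python model of the control flow only; the real signatures are the C++ ones from `include/mxnet/lib_api.h`, and the bodies here are illustrative stand-ins, not the PR's gemm_lib.cc code:

```python
# Python stand-ins for the C++ callbacks described in the README excerpt.
# Control flow mirrors the tutorial: parse attrs -> infer type -> infer
# shape -> forward. Bodies are illustrative, not the actual gemm_lib.cc.

def parse_attrs(attrs):
    # gemm always takes 2 inputs (A, B) and produces 1 output (C)
    return 2, 1

def infer_type(attrs, intypes):
    # output dtype follows the first input, i.e. outtypes[0] = intypes[0]
    return [intypes[0]]

def infer_shape(attrs, inshapes):
    # (n, m) x (m, k) -> (n, k); validate the inner dimensions
    (n, m), (m2, k) = inshapes
    assert m == m2, "inner dimensions must match"
    return [(n, k)]

def forward(attrs, inputs):
    # naive matrix multiply over nested lists, writing into the output shape
    a, b = inputs
    n, k = infer_shape(attrs, [(len(a), len(a[0])), (len(b), len(b[0]))])[0]
    return [[sum(a[i][j] * b[j][c] for j in range(len(b)))
             for c in range(k)] for i in range(n)]
```

In the real library these four functions are bound to the operator with `REGISTER_OP(my_op_name).setForward(...).setParseAttrs(...).setInferType(...).setInferShape(...)` as shown in the quoted README.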

[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369830264
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
 
 Review comment:
   can you provide me an example of how to do it?


[GitHub] [incubator-mxnet] rondogency commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-573842521
 
 
   @eric-haibin-lin Thanks for the review! I will add it to a TODO list with upcoming features and consolidating all docs about writing operators (to give new users a smooth experience)


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110345
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
 
 Review comment:
   Should look into the Sphinx plugin that facilitates this, so you don't use a line number that's gonna move.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366112087
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string,
+                std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how custom operator infers output tensor shape using input shape.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+Also there are some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs to be mutable inputs, useful when using aux parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes 3 arguments. 1st argument is an input, which is the attributes passed all the way from Python code. When the user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. 2nd & 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers. Otherwise you can use the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes 3 arguments. 1st argument is the attributes (same as above). 2nd argument is a list of input data types corresponding to the input tensors. 3rd argument is the placeholder for output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and the data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
+
+* **inferShape**: This function is similar to the inferType function, except it is used for populating the output data shapes. You need to figure out the shape of each output tensor for this computation.
+
+* **forward**: This function executes the main forward computation. It also takes 4 arguments. 1st argument is the attributes. 2nd argument is the input MXTensors, which store all data and info of the input ndarrays. 3rd argument is the output MXTensors. 4th argument is the OpResource object for memory allocation and other utilities. Additionally, you can use the dltensor structure stored in MXTensor as a more standardized data structure for computing.
+
+* **backward**: This function is doing the backward gradient computation. It will be similar to forward function. And you need to  figure out the formula of backward.
 
 Review comment:
   ```suggestion
   * **backward**: This function is doing the backward gradient computation. It will be similar to the forward function. And you need to figure out the formula of the backward gradient computation.
   ```
   How?

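The reviewer's "How?" about the backward formula has a standard answer for gemm: if C = A·B and out_grad is dL/dC, then dL/dA = out_grad·Bᵀ and dL/dB = Aᵀ·out_grad. A dependency-free Python sketch of that math (illustrative, not the gemm_lib.cc implementation):

```python
# Backward gradient math for gemm: given C = A @ B and out_grad = dL/dC,
#   dA = out_grad @ B^T   (shape n x m)
#   dB = A^T @ out_grad   (shape m x k)
# Plain-Python helpers keep the sketch self-contained (no NumPy).

def matmul(x, y):
    return [[sum(x[i][j] * y[j][c] for j in range(len(y)))
             for c in range(len(y[0]))] for i in range(len(x))]

def transpose(x):
    return [list(row) for row in zip(*x)]

def gemm_backward(out_grad, a, b):
    da = matmul(out_grad, transpose(b))   # dL/dA
    db = matmul(transpose(a), out_grad)   # dL/dB
    return da, db
```

In the C++ backward callback, `inputs` would carry out_grad plus the original A and B, and the two gradients are written into `outputs`.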

[GitHub] [incubator-mxnet] rondogency commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-572807610
 
 
   @eric-haibin-lin @aaronmarkham can you also take a quick look at the doc, thanks!


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366109930
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
 
 Review comment:
   ```suggestion
   * **lib_custom_op/gemm_lib.cc**: This file has a source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r370392731
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,147 @@
+CustomOp Example and Tutorial
+=============================
+
+## Introduction
+
+Adding new operators in MXNet requires understanding MXNet backend operator registration and recompiling MXNet with all its dependencies. Users can use the old Python custom operator to add new operators, but it is slow, complicated, and has a poor adoption rate. So our approach for adding custom operators is to enable dynamic loading, at runtime, of C++ custom operators compiled in external libraries.
+
+Custom operators (CustomOp) enable users to write new operators without compiling against all of MXNet header files and dependencies. When a library containing custom operators is loaded dynamically, the operators found in the library will be re-registered in MXNet so that users can call those operators natively just like other built-in operators.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples provided in the **example/extensions/lib_custom_op** directory. Start with a common linear algebra operator like `gemm` (Generalized Matrix Multiplication). Go to `lib_custom_op` directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It’ll first load the above .so library, find the operators, register them in the MXNet backend, print "Found x operators", then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has a source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file `include/mxnet/lib_api.h` from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both NDArray and Symbol APIs, and prints outputs of the forward and backward passes. The outputs should be the same as the regular MXNet `gemm` operator.
+
+## Writing Custom Operator Library:
+
+For building a library containing your own custom operator, compose a C++ source file like `myop_lib.cc`, include `lib_api.h` header file, and write your custom operator implementation with those essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_OP` - Operator Registration Macro
+- `parseAttrs` - Attribute Parser
+- `inferType` - Type Inference
+- `inferShape` - Shape Inference
+- `forward` - Forward Computation (can be replaced with `createOpState`, see below for details)
+
+Then compile it to `libmyop_lib.so` dynamic library using the following command
 
 Review comment:
   ```suggestion
   Then compile it to `libmyop_lib.so` dynamic library using the following command:
   ```

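The build command itself is not quoted in the excerpt above. In the example's Makefile it is typically a single g++ invocation along these lines (the include path is an assumption about the checkout layout, and `myop_lib.cc` is the hypothetical source name from the excerpt):

```shell
# Illustrative build command, run from example/extensions/lib_custom_op;
# adjust the -I path to wherever include/mxnet/lib_api.h lives in your checkout.
g++ -shared -fPIC -std=c++11 myop_lib.cc -o libmyop_lib.so -I ../../../include/mxnet
```

`-shared -fPIC` produces the position-independent dynamic library that `mx.library.load` expects, and `-std=c++11` matches the README's note that the custom operator API is compatible with C++11 onwards.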

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365007081
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+## Basic Files For GEMM Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * `MXReturnValue parseAttrs(std::map<std::string, std::string> attrs, int* num_in, int* num_out)`
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
 
 Review comment:
   can we make these file/line parts links to the actual code?


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363932439
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are two examples, a simple 2D gemm operator and a subgraph operator, plus a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. run *make gemm_lib*, the Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run *python test_gemm.py*, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, and print "Found x operators"; then invoke the operator like a regular MXNet operator and print the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
+
+* inferType - Type Inference: This function specifies how custom operator infers output data types using input data types
+
+* inferShape - Shape Inference: This function specifies how custom operator infers output tensor shape using input shape
+
+* forward - Forward function: This function specifies the computation of forward pass of the operator
+
+* REGISTER_OP(my_op_name) Macro: This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+Also there are some operational functions you can specify:
+
+* backward - Backward Gradient function: This function specifies the computation of backward pass of the operator
+
+* mutateInputs - Mark Mutable Inputs: This function allows you to mark some inputs as mutable, useful when using aux parameters for BatchNorm-like operators
+
+Let’s take a closer look at those registry functions:
+
+* parseAttrs: This function takes 3 parameters. 1st parameter is an input, which is the attributes passed all the way from Python code. When user calls mx.nd.my_op_name(s,t,keyword=1), the keyword is passed to the attributes as an entry of the map. 2nd & 3rd parameters are outputs, and you need to assign num_in/num_out values to those placeholders.  If the number of input and output tensors are fixed, you can use hard-coded numbers. Otherwise you can get the keyword value to determine the num_in and num_out.
+
+* inferType: This function takes 3 parameters. 1st parameter is the attributes. 2nd parameter is the a list of input data type enum corresponding to the data types of input tensors. 3rd parameter is the placeholder for output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and data type doesn’t change, then you can do outtypes[0] = intypes[0]; to populate the data type.
+
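A minimal sketch for the common one-input/one-output case (again with a stand-in `MXReturnValue`; in the real API the type lists hold MXNet dtype enum values):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };  // stand-in for lib_api.h

// One-input/one-output op whose output dtype simply matches its input dtype.
MXReturnValue inferType(std::map<std::string, std::string> attrs,
                        std::vector<int>& intypes,
                        std::vector<int>& outtypes) {
  outtypes[0] = intypes[0];  // propagate the input type unchanged
  return MX_SUCCESS;
}
```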
+* inferShape: This function is similar to the inferType function, except it is used for populating the output data shapes. You need to figure out the shape of each output tensor for this computation.
+
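For a gemm-like operator taking a (m,k) and a (k,n) matrix, a sketch of inferShape might look like this (stand-in return type; the shape-vector signature follows the one shown later in this thread):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };  // stand-in for lib_api.h

// Shape inference for a gemm-like op: (m,k) x (k,n) -> (m,n).
MXReturnValue inferShape(std::map<std::string, std::string> attrs,
                         std::vector<std::vector<unsigned int>>& inshapes,
                         std::vector<std::vector<unsigned int>>& outshapes) {
  unsigned int m = inshapes[0][0], k = inshapes[0][1];
  unsigned int kk = inshapes[1][0], n = inshapes[1][1];
  if (k != kk) return MX_FAIL;  // inner dimensions must agree
  outshapes[0] = {m, n};
  return MX_SUCCESS;
}
```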
+* forward: This function performs the main forward computation. It also takes 3 parameters. The 1st parameter is the attributes. The 2nd parameter is a list of input MXTensors, which store all the data and info of the input ndarrays. The 3rd parameter is the list of output MXTensors. You need to do the forward computation given the input tensors and data types, and write the result back through the output tensors’ data pointers. Additionally, you can use the DLTensor structure stored in each MXTensor as a more standardized data structure for computing.
+
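Here is a self-contained sketch of a gemm-like forward pass. `MXTensor` below is a minimal stand-in holding just a data pointer and a shape; the real MXTensor in lib_api.h carries more metadata, and the real forward also receives an OpResource argument, omitted here:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };  // stand-in for lib_api.h

// Minimal stand-in for MXTensor: just a data pointer and a shape.
struct MXTensor {
  float* data;
  std::vector<unsigned int> shape;
};

// Forward pass of a gemm-like op: outputs[0] = inputs[0] x inputs[1].
// The vectors are copied, but each MXTensor holds a pointer, so writing
// through outputs[0].data fills the caller's output buffer.
MXReturnValue forward(std::map<std::string, std::string> attrs,
                      std::vector<MXTensor> inputs,
                      std::vector<MXTensor> outputs) {
  const MXTensor& A = inputs[0];
  const MXTensor& B = inputs[1];
  unsigned int m = A.shape[0], k = A.shape[1], n = B.shape[1];
  for (unsigned int i = 0; i < m; ++i)
    for (unsigned int j = 0; j < n; ++j) {
      float acc = 0.0f;
      for (unsigned int p = 0; p < k; ++p)
        acc += A.data[i * k + p] * B.data[p * n + j];
      outputs[0].data[i * n + j] = acc;  // write back to the output buffer
    }
  return MX_SUCCESS;
}
```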
+* backward: This function performs the backward gradient computation. It is structured like the forward function, and you need to work out the gradient formula for your operator.
+
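For intuition, here is a backward sketch for a hypothetical elementwise square operator y = x*x, whose gradient is dx = 2*x*dy. The types are simplified stand-ins, and the {dy, x} input ordering is a convention assumed only for this sketch:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };  // stand-in for lib_api.h

struct MXTensor {        // minimal stand-in: data pointer + element count
  float* data;
  unsigned int size;
};

// Backward of y = x*x. Inputs are {dy, x}, outputs are {dx}:
// dx[i] = 2 * x[i] * dy[i].
MXReturnValue backward(std::map<std::string, std::string> attrs,
                       std::vector<MXTensor> inputs,
                       std::vector<MXTensor> outputs) {
  const MXTensor& dy = inputs[0];
  const MXTensor& x  = inputs[1];
  for (unsigned int i = 0; i < x.size; ++i)
    outputs[0].data[i] = 2.0f * x.data[i] * dy.data[i];
  return MX_SUCCESS;
}
```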
+* mutateInputs: This function marks mutable inputs. It takes 2 parameters. The 1st parameter is the attributes. The 2nd parameter is a list of indices for inputs that are mutable among all input tensors. It is useful when some inputs are auxiliary model parameters that might be altered during the forward/backward computation. Remember that the index values in input_indices should not exceed the number of inputs.
+
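A sketch for a hypothetical BatchNorm-like operator whose 4th and 5th inputs are running statistics updated in place (stand-in return type; the input layout is an assumption for illustration):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };  // stand-in for lib_api.h

// Hypothetical op with inputs {data, gamma, beta, moving_mean, moving_var}:
// the two running statistics (indices 3 and 4) are updated in place,
// so mark them as mutable.
MXReturnValue mutateInputs(std::map<std::string, std::string> attrs,
                           std::vector<int>& input_indices) {
  input_indices.push_back(3);  // moving_mean
  input_indices.push_back(4);  // moving_var
  return MX_SUCCESS;
}
```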
+## Stateful Custom Operator:
+
+A stateful operator is useful when a forward/backward call needs some data or ‘state’ from a previous forward/backward call. Idiomatically, we create a class whose instance variables store the state used for computing or caching.
+
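The idiom can be sketched as follows; every type name here is a simplified stand-in for the real definitions in include/mxnet/lib_api.h, and the createOpState signature is an assumption modeled on the description below:

```cpp
#include <cassert>
#include <map>
#include <string>

enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };  // stand-in for lib_api.h

// Stand-in for the CustomStatefulOp base class.
class CustomStatefulOp {
 public:
  virtual ~CustomStatefulOp() {}
};

// Hypothetical stateful op: the instance variable persists across calls.
class MyStatefulOp : public CustomStatefulOp {
 public:
  int forward_calls = 0;               // the 'state' kept between calls
  void Forward() { ++forward_calls; }  // real computation would go here
};

// createOpState sketch: create the instance and hand it back through the
// placeholder, so every later forward/backward call reuses the same object.
MXReturnValue createOpState(std::map<std::string, std::string> attrs,
                            CustomStatefulOp** op_inst) {
  *op_inst = new MyStatefulOp();
  return MX_SUCCESS;
}
```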
+Most of the building blocks for making a stateful custom operator are the same as for a regular custom operator, except that it registers *createOpState* instead of forward for the computation.
+
+* createOpState: This function takes 2 parameters. The 1st parameter is the attributes. The 2nd parameter is a placeholder for the CustomStatefulOp object. You must define a class that inherits CustomStatefulOp and overrides the forward function. Then you need to create an instance of that class and assign it to the placeholder; this way, all forward/backward calls will use the same methods of that instance, and the instance is able to keep the state.
 
 Review comment:
   override the forward function (optionally also the backward function). 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363928803
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN. The custom operator doesn’t interfere with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the *example/extensions/lib_custom_op* directory. It contains two examples, a simple 2D gemm operator and a subgraph operator, along with a Makefile.
 
 Review comment:
   lets not mention the subgraph operator here. It will be removed from the custom Op example in the subgraph property PR #17034 and moved to the subgraph property example. It doesnt make sense to have this subgraph op example here since they cannot be used in isolation. Subgraph ops have to be inserted by subgraph properties. 


[GitHub] [incubator-mxnet] rondogency commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-577883990
 
 
   @mxnet-label-bot add [pr-awaiting-review]


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363932111
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+Let’s start with the gemm operator. Go to that directory and follow these steps:
+
+1. Run *make gemm_lib*. The Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load; it contains everything for the custom gemm operator.
+2. Run *python test_gemm.py*. It will first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and prints the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has the source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compiles the source code into a dynamic shared library, using the header file include/mxnet/lib_api.h from the MXNet source code. Currently the custom operator mechanism is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load('libgemm_lib.so') to load the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* mutateInputs: This function is for marking mutate inputs. It takes 2 parameters. 1st parameter is the attributes. 2nd parameter is a list of  indices of mutate inputs among all input tensors. It is useful when some inputs are auxiliary model parameters and might be altered during forward/backward computation. Remember the index number of input_indices should not exceed the number of inputs.
 
 Review comment:
   mutate ==> mutable
   list of  indices of mutate inputs ==> list of indices for inputs that are mutable


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366109718
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN. The custom operator doesn’t interfere with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow these steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
 
 Review comment:
   ```suggestion
   1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
   ```


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r369829524
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
 
 Review comment:
   Do you mean make the === longer to match the length of the title?


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-573384290
 
 
   There are multiple docs on writing operators. Shall we consolidate them?
   - python op: https://github.com/apache/incubator-mxnet/tree/3ece00b1acf3e5ca2bbf46d6eaf36ae900cd7666/example/numpy-ops 
   - cpp op guide: https://mxnet.apache.org/api/faq/add_op_in_backend
   - old cpp op & python op guide: https://mxnet.apache.org/api/faq/new_op (it mentions mshadow, which is supposed to be deprecated...?)
   - this tutorial 
   


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366111871
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet, either by compiling it from source code or by downloading a nightly build. It doesn’t matter whether the build comes with CUDA or MKLDNN. The custom operator doesn’t interfere with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operators by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with the gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow these steps:
+
+1. Run `make gemm_lib`. The Makefile will generate a dynamic library **libgemm_lib.so** compiled from `gemm_lib.cc`. This is the library you are going to load that contains everything for the custom gemm operator.
+2. Run `python test_gemm.py`. It will first load the above .so library, find the operators, register them in the MXNet backend, and print "Found x operators"; then it invokes the operator like a regular MXNet operator and outputs the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has the source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compiles the source code into a dynamic shared library, using the header file **include/mxnet/lib_api.h** from the MXNet source code. Currently the custom operator mechanism is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load('libgemm_lib.so')` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies the number of input and output tensors for the custom operator; it is also where a custom operator can validate the attributes (i.e. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string, std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how the custom operator infers output data types from input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how the custom operator infers output tensor shapes from input shapes.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of the forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under its name; you then call setters to bind the functions above to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+There are also some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of the backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs as mutable, which is useful when using auxiliary parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes 3 arguments. The 1st argument is an input: the attributes passed down from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword argument is passed into the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs on those placeholders. If the numbers of input and output tensors are fixed, you can use hard-coded values. Otherwise you can read the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes 3 arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for the output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and the data type doesn’t change, then you can do `outtypes[0] = intypes[0]` to populate the data type.
+
+* **inferShape**: This function is similar to the inferType function, except it is used for populating the output data shapes. You need to figure out the shape of each output tensor for this computation.
+
+* **forward**: This function executes the main forward computation. It also takes 4 arguments. 1st argument is the attributes. 2nd argument is the input MXTensors which stores all data and info of input ndarrays. 3rd argument is the output MXTensors. 4th argument is OpResource object for memory allocation and other utilities. Additionally you can use dltensor tensor structure stored in MXTensor as a more standardized data structure for computing.
 
 Review comment:
   ```suggestion
   * **forward**: This function executes the main forward computation. It takes four arguments. The 1st argument is the attributes. The 2nd argument is the input `MXTensors` which stores all data and info of input ndarrays. The 3rd argument is the output `MXTensors`. The 4th argument is the `OpResource` object for memory allocation and other utilities. Additionally, you can use a `dltensor` tensor structure stored in the `MXTensor` as a more standardized data structure for computing.
   ```


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366108808
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
 
 Review comment:
   Sometimes one of the transpilers will complain that this is too short. Recommend making it longer to match the title.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110448
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how custom operator infers output data types using input data types.
 
 Review comment:
   ```suggestion
       * This function specifies how the custom operator infers output data types using input data types.
   ```


[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
wkcn commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r364504562
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
 
 Review comment:
   It will be better to provide the arguments list of the function, and a link to the code.


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366111466
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both the NDArray and Symbol APIs, and prints the outputs of the forward and backward passes. The outputs should be the same as those of the regular MXNet `gemm` operator.
+
+## Writing Custom Operators:
+
+### Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * This function specifies the number of input and output tensors for the custom operator; it is also where a custom operator can validate the attributes (i.e. options) specified by the user.
+
+            MXReturnValue parseAttrs(
+                std::map<std::string, std::string> attrs,
+                int* num_in,
+                int* num_out)
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
+    * This function specifies how the custom operator infers output data types from the input data types.
+
+            MXReturnValue inferType(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &intypes,
+                std::vector<int> &outtypes)
+
+* [inferShape](./gemm_lib.cc#L143) - Shape Inference:
+    * This function specifies how the custom operator infers output tensor shapes from the input shapes.
+
+            MXReturnValue inferShape(
+                std::map<std::string, std::string> attrs,
+                std::vector<std::vector<unsigned int>> &inshapes,
+                std::vector<std::vector<unsigned int>> &outshapes)
+
+* [forward](./gemm_lib.cc#L56) - Forward function:
+    * This function specifies the computation of the forward pass of the operator.
+
+            MXReturnValue forward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [REGISTER_OP(my_op_name) Macro](./gemm_lib.cc#L169):
+    * This macro registers the custom operator with all MXNet APIs under its name; you then call setters to bind the above functions to the registered operator.
+
+            REGISTER_OP(my_op_name)
+            .setForward(forward)
+            .setParseAttrs(parseAttrs)
+            .setInferType(inferType)
+            .setInferShape(inferShape);
+
+There are also some optional functions you can specify:
+
+* [backward](./gemm_lib.cc#L90) - Backward Gradient function:
+    * This function specifies the computation of the backward pass of the operator.
+
+            MXReturnValue backward(
+                std::map<std::string, std::string> attrs,
+                std::vector<MXTensor> inputs,
+                std::vector<MXTensor> outputs,
+                OpResource res)
+
+* [mutateInputs](./gemm_lib.cc#L214) - Specify mutable input:
+    * This function allows you to mark some inputs as mutable, which is useful when using auxiliary (aux) parameters for BatchNorm-like operators.
+
+            MXReturnValue mutateInputs(
+                std::map<std::string, std::string> attrs,
+                std::vector<int> &input_indices)
+
+Let’s take a closer look at those registry functions:
+
+* **parseAttrs**: This function takes 3 arguments. The 1st argument is an input: the attributes passed all the way from the Python code. When a user calls `mx.nd.my_op_name(s,t,keyword=1)`, the keyword is passed to the attributes as an entry of the map. The 2nd and 3rd arguments are outputs, and you need to set the number of inputs and outputs in those placeholders. If the number of input and output tensors is fixed, you can use hard-coded numbers. Otherwise you can use the user-specified attributes to determine the number of inputs and outputs.
+
+* **inferType**: This function takes 3 arguments. The 1st argument is the attributes (same as above). The 2nd argument is a list of input data types corresponding to the input tensors. The 3rd argument is the placeholder for the output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and the data type doesn’t change, you can do `outtypes[0] = intypes[0]` to populate the data type.
+
+* **inferShape**: This function is similar to inferType function, except it is used for populating the output data shapes. You need to figure out the shapes of each output tensors for this computation.
 
 Review comment:
   ```suggestion
   * **inferShape**: This function is similar to the `inferType` function, except it is used for populating the output data shapes. You need to figure out the shapes of each output tensors for this computation.
   ```
   Maybe mention how?
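On the reviewer's point about mentioning how: for the gemm example, shape inference amounts to checking that the inner dimensions of the two input matrices agree and emitting the outer dimensions. A self-contained sketch, using plain STL containers in place of the real `lib_api.h` signatures (the function name here is illustrative, not from the example):

```cpp
#include <vector>

// Illustrative shape inference for a 2D gemm: (n,k) x (k,m) -> (n,m).
// Returns false when the inputs are not two compatible 2D matrices.
bool inferGemmShape(const std::vector<std::vector<unsigned int>>& inshapes,
                    std::vector<std::vector<unsigned int>>& outshapes) {
  if (inshapes.size() != 2 ||
      inshapes[0].size() != 2 || inshapes[1].size() != 2)
    return false;                   // expect exactly two 2D inputs
  unsigned int n = inshapes[0][0], k = inshapes[0][1];
  unsigned int m = inshapes[1][1];
  if (inshapes[1][0] != k)
    return false;                   // inner dimensions must match
  outshapes = { {n, m} };           // one output of shape (n, m)
  return true;
}
```

The same pattern generalizes to other operators: derive each output shape from the input shapes and report a failure when they are incompatible.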


[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r366110115
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,118 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+### Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+### Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+### Basic Files For Gemm Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
 
 Review comment:
   ```suggestion
   * **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invokes the operator using both NDArray and Symbol APIs, and prints outputs of the forward and backward passes. The outputs should be the same as the regular MXNet `gemm` operator.
   ```


[GitHub] [incubator-mxnet] aaronmarkham merged pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
aaronmarkham merged pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241
 
 
   


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363931725
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are 2 examples: a simple 2D gemm operator, a subgraph operator, and a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. run *make gemm_lib*, the Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run *python test_gemm.py*, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, and print "Found x operators"; then invoke the operator like a regular MXNet operator and print the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
+
+* inferType - Type Inference: This function specifies how custom operator infers output data types using input data types
+
+* inferShape - Shape Inference: This function specifies how custom operator infers output tensor shape using input shape
+
+* forward - Forward function: This function specifies the computation of forward pass of the operator
+
+* REGISTER_OP(my_op_name) Macro: This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+Also there are some operational functions you can specify:
+
+* backward - Backward Gradient function: This function specifies the computation of backward pass of the operator
+
+* mutateInputs - Mutate Input Mark: This function allows you to mark some inputs to be mutate inputs, useful when using aux parameters for BatchNorm-like operators
+
+Let’s take a closer look at those registry functions:
+
+* parseAttrs: This function takes 3 parameters. 1st parameter is an input, which is the attributes passed all the way from Python code. When user calls mx.nd.my_op_name(s,t,keyword=1), the keyword is passed to the attributes as an entry of the map. 2nd & 3rd parameters are outputs, and you need to assign num_in/num_out values to those placeholders.  If the number of input and output tensors are fixed, you can use hard-coded numbers. Otherwise you can get the keyword value to determine the num_in and num_out.
+
+* inferType: This function takes 3 parameters. 1st parameter is the attributes. 2nd parameter is the a list of input data type enum corresponding to the data types of input tensors. 3rd parameter is the placeholder for output tensor data types you need to assign. For example, if this operator has 1 input and 1 output and data type doesn’t change, then you can do outtypes[0] = intypes[0]; to populate the data type.
+
+* inferShape: This function is similar to inferType function, except it is used for populating the output data shapes. You need to figure out the shapes of each output tensors for this computation.
+
+* forward: This function is doing the main forward computation. It also takes 3 parameters. 1st parameter is the attributes. 2nd parameter is the a list of input MXTensors which stores all data and info of input ndarrays. 3rd parameter is the output MXTensors. You need to do the forward computing given the input tensors and data types, and write the result back to the output tensor data pointer. Additionally you can use dltensor tensor structor stored in MXTensor as a more standardized data structure for computing.
 
 Review comment:
    is doing ==> executes
   3 parameters ==> 4 arguments 
   (4th is OpResource)
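To make the "4 arguments" point concrete without the MXNet types: the heart of the forward pass is a triple loop over row-major buffers, reading from the input tensors' data pointers and writing into the output's. A stand-alone sketch (raw pointers stand in for `MXTensor` data; this is not the actual example code):

```cpp
// Simplified gemm forward kernel over row-major float buffers:
// out (n x m) = a (n x k) * b (k x m).
void gemmForward(const float* a, const float* b, float* out,
                 unsigned int n, unsigned int k, unsigned int m) {
  for (unsigned int i = 0; i < n; ++i) {
    for (unsigned int j = 0; j < m; ++j) {
      float acc = 0.0f;
      for (unsigned int p = 0; p < k; ++p)
        acc += a[i * k + p] * b[p * m + j];
      out[i * m + j] = acc;   // write the result into the output buffer
    }
  }
}
```

In the real operator the fourth argument (`OpResource`) carries runtime resources; it is omitted from this sketch.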


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365006466
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+## Basic Files For GEMM Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * `MXReturnValue parseAttrs(std::map<std::string, std::string> attrs, int* num_in, int* num_out)`
 
 Review comment:
   for these function declarations, can we put each argument on a new line for clarity?


[GitHub] [incubator-mxnet] rondogency commented on issue #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-571747412
 
 
   @samskalicky @mseth10 @wkcn please take a look at the doc


[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
wkcn commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r364504562
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are 2 examples: a simple 2D gemm operator, a subgraph operator, and a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. run *make gemm_lib*, the Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run *python test_gemm.py*, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, and print "Found x operators"; then invoke the operator like a regular MXNet operator and print the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
 
 Review comment:
   It would be better to provide the argument list of the function, and a link to the code.
   For example:
   * forward  function
   'MXReturnValue forward(std::vector<MXTensor> inputs, ......) '
   
   This function specifies the computation of forward pass
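Following that suggestion, here is what such an argument list looks like in practice, as a self-contained parseAttrs-style sketch (STL types only; the `alpha` attribute and the function name are purely illustrative, not part of the example):

```cpp
#include <map>
#include <string>

// Illustrative attribute parser for a gemm-like op: two inputs, one output.
// Also validates user options, rejecting anything but a hypothetical "alpha".
bool parseGemmAttrs(const std::map<std::string, std::string>& attrs,
                    int* num_in, int* num_out) {
  *num_in = 2;     // two input matrices
  *num_out = 1;    // one output matrix
  for (const auto& kv : attrs) {
    if (kv.first != "alpha")
      return false;  // unknown option: fail attribute validation
  }
  return true;
}
```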


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363930093
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are 2 examples: a simple 2D gemm operator, a subgraph operator, and a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. run *make gemm_lib*, the Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run *python test_gemm.py*, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, and print "Found x operators"; then invoke the operator like a regular MXNet operator and print the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* parseAttrs - Attributes Parser: This function specifies number of input and output tensors for the custom operator. 
+
+* inferType - Type Inference: This function specifies how custom operator infers output data types using input data types
+
+* inferShape - Shape Inference: This function specifies how custom operator infers output tensor shape using input shape
+
+* forward - Forward function: This function specifies the computation of forward pass of the operator
+
+* REGISTER_OP(my_op_name) Macro: This macro registers custom operator to all MXNet APIs by its name, and you need to call setters to bind the above functions to the registered operator.
+
+Also there are some operational functions you can specify:
 
 Review comment:
   operational ==> optional


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r363929331
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,69 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t intervene with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the *example/extensions/lib_custom_op* directory. There are 2 examples: a simple 2D gemm operator, a subgraph operator, and a Makefile.
+
+Let’s start with gemm operator. Go to that directory and follow the steps:
+
+1. run *make gemm_lib*, the Makefile will generate a dynamic library libgemm_lib.so compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run *python test_gemm.py*, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, and print "Found x operators"; then invoke the operator like a regular MXNet operator and print the result.
+
+## Basic Files For GEMM Library:
+
+* lib_custom_op/gemm_lib.cc: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* lib_custom_op/Makefile: Compile source code to a dynamic shared library, with a header file include/mxnet/lib_api.h from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* lib_custom_op/test_gemm.py: This file calls mx.library.load(‘libgemm_lib.so’) to load custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
 
 Review comment:
   to load custom operator ==> to load the library containing the custom operator


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365007081
 
 

 ##########
 File path: example/extensions/lib_custom_op/README.md
 ##########
 @@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+====
+
+## Getting Started
+
+## Have MXNet Ready:
+
+First you should install MXNet either from compiling from source code or download from nightly build. It doesn’t matter if the build comes with CUDA or MKLDNN. The custom operator doesn’t interact with the execution of other native MXNet operators.
+
+## Run An Example:
+
+You can start getting familiar with custom operator by running some examples we provide in the **example/extensions/lib_custom_op** directory. Let’s start with gemm (Generalized Matrix Multiplication) operator, a common linear algebra operator. Go to that directory and follow the steps:
+
+1. run `make gemm_lib`, the Makefile will generate a dynamic library **libgemm_lib.so** compiled from gemm_lib.cc. This is the library you are going to load that contains everything of the custom gemm operator.
+2. run `python test_gemm.py`, and it’ll first load the above .so library, find operators,  register them in the MXNet backend, print "Found x operators"; then invoke the operator like a regular MXNet operator and output the result.
+
+## Basic Files For GEMM Library:
+
+* **lib_custom_op/gemm_lib.cc**: This file has source code implementation of all required components of a custom operator, as well as the registration of the custom operator.
+
+* **lib_custom_op/Makefile**: Compile source code to a dynamic shared library, with a header file **include/mxnet/lib_api.h** from MXNet source code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_custom_op/test_gemm.py**: This file calls `mx.library.load(‘libgemm_lib.so’)` to load the library containing the custom operator, invoke the operator using both ndarray and symbol API, and print outputs of forward and backward pass. The outputs should be the same as the regular MXNet gemm operator.
+
+## Writing Custom Operators:
+
+## Regular Custom Operator:
+
+There are several basic building blocks for making a (stateless) custom operator:
+
+* [parseAttrs](./gemm_lib.cc#L118) - Attribute Parser:
+    * `MXReturnValue parseAttrs(std::map<std::string, std::string> attrs, int* num_in, int* num_out)`
+    * This function specifies number of input and output tensors for the custom operator; also this is where a custom operator can validate the attributes (ie. options) specified by the user.
+
+
+* [inferType](./gemm_lib.cc#L124) - Type Inference:
 
 Review comment:
   can we make these file/line parts links to the actual code?
