Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/11/16 20:59:49 UTC

[GitHub] [incubator-mxnet] Kh4L opened a new pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Kh4L opened a new pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543


   ## Description ##
   This PR separates the partitioning backend from `hybridize`.
   `hybridize` now only sets the CachedOp args and activates hybridization.
   `optimize_for` is responsible for setting the backend and backend options, and for running the partitioning with that backend.
   
   If users wish to use any partitioning backend, they have to use `optimize_for`.
   
   It also changes the default value of `optimize_for`'s `clear` arg, making backend chaining the default behavior.
   
   The PR also makes the CachedOp kwargs explicit and documented: `static_alloc`, `static_shape`, `inline_limit`, `forward_bulk_size`, `backward_bulk_size`.
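
   For instance, the CachedOp flags can now be set directly (a sketch based on the new `hybridize` signature; the bulk-size values here are illustrative):
   ```
   blk.hybridize(static_alloc=True, static_shape=True, inline_limit=2,
                 forward_bulk_size=15, backward_bulk_size=15)
   ```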
   
   ## Examples ##
   
   ### How optimize_for changed
   #### Before
   ```
   blk.optimize_for(x, backend="someBackend", backend_opts={'dedup_subgraph':True})
   ```
   #### After
   ```
   blk.optimize_for(x, backend="someBackend", dedup_subgraph=True)
   ```
   ### How hybridize changed
   #### Before
   ```
   blk.hybridize(backend="someBackend", static_alloc=True)
   blk(x)
   ```
   #### After 
   `hybridize` can no longer be used to set the backend; we now have to use `optimize_for`, which calls `hybridize` internally.
   ```
   blk.optimize_for(x, backend="someBackend", static_alloc=True)
   ```
   
   ### How chaining backends changed
   #### Before
   ```
   blk.optimize_for(x, backend="firstBackend", static_alloc=True)
   blk.optimize_for(x, backend="secondBackend", clear=False, dedup_subgraph=True)
   ```
   #### After
   `clear` now defaults to `False`, so we simply chain the calls:
   ```
   blk.optimize_for(x, backend="firstBackend", static_alloc=True)
   blk.optimize_for(x, backend="secondBackend", dedup_subgraph=True)
   ```
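
   The chained, partitioned block can then be exported directly, without running a forward pass (mirroring the updated lib_subgraph README):
   ```
   blk.export('partitioned')
   ```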
   
   cc @samskalicky, who helped design the API offline and reviewed the related `1.x` PR #19386,
        and @mseth10, who helped by reviewing the `1.x` PR.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mseth10 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524810384



##########
File path: example/extensions/lib_subgraph/README.md
##########
@@ -102,27 +102,27 @@ Partitioning APIs in MXNet are available in both Symbol and Gluon APIs. For the
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs. The backend options can be passed as kwargs.
 
 For the Gluon API, `hybridize` can be called on HybridBlocks to partition the internal CachedOp Symbol.
 
 ```python
-block.hybridize(backend=None, backend_opts=None, clear=True, **kwargs)
+block.hybridize(backend=None, clear=True)
 ```
 
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which backend that will partition the model. The `backend_opts` are other user-specified options (as a Python dictionary of strings mapped to strings) that will be passed to the backend partitioning APIs. The `clear` argument defaults to `True` and clears any previous optimizations done on the block. If you want to chain optimizations together, set `clear` to `False`. The actual partitioning takes place during the forward pass. If you want to use `hybridize` to chain multiple optimizations, be sure to execute a forward pass after each call to `hybridize`. 
+The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `clear` argument defaults to `True` and clears any previous optimizations done on the block. If you want to chain optimizations together, set `clear` to `False`. The actual partitioning takes place during the forward pass.

Review comment:
       remove "If you want to chain optimizations together, set `clear` to `False`."







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525580502



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1175,7 +1175,14 @@ def _call_cached_op(self, *args):
             out = [out]
         return _regroup(out, self._out_format)
 
-    def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, partition_if_dynamic=False, **kwargs):
+    def optimize_for(self, x, *args, backend=None, clear=False,

Review comment:
       Building on @waytrue17's [comment](https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525553211): is there any point to calling `optimize_for` without a backend, or should `backend` always be required (i.e. we shouldn't default it to `None`)? What do you think, @mseth10?
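
       For illustration, making it required would mean a keyword-only parameter with no default (a hypothetical sketch, not necessarily the merged change):
       ```python
       # hypothetical: `backend` becomes keyword-only and mandatory
       def optimize_for(self, x, *args, backend, clear=False,
                        partition_if_dynamic=False, **kwargs):
           ...
       ```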







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524856994



##########
File path: example/extensions/lib_subgraph/README.md
##########
@@ -102,27 +102,27 @@ Partitioning APIs in MXNet are available in both Symbol and Gluon APIs. For the
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs. The backend options can be passed as kwargs.
 
 For the Gluon API, `hybridize` can be called on HybridBlocks to partition the internal CachedOp Symbol.
 
 ```python
-block.hybridize(backend=None, backend_opts=None, clear=True, **kwargs)
+block.hybridize(backend=None, clear=True)

Review comment:
       we should remove all mentions of hybridize here







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524869683



##########
File path: example/extensions/lib_subgraph/README.md
##########
@@ -102,35 +96,35 @@ Partitioning APIs in MXNet are available in both Symbol and Gluon APIs. For the
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs. The backend options can be passed as kwargs.
 
-For the Gluon API, `hybridize` can be called on HybridBlocks to partition the internal CachedOp Symbol.
+When the `optimize_for` API is called on a HybridBlock it partitions immediately. This lets users export the partitioned model without running a complete forward pass. Chaining multiple optimizations is as simple as calling `optimize_for` multiple times.
 
 ```python
-block.hybridize(backend=None, backend_opts=None, clear=True, **kwargs)
+block.optimize_for(x, backend='myPart')
+block.optimize_for(x, backend='myOtherPart')
+block.export('partitioned')
 ```
 
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which backend that will partition the model. The `backend_opts` are other user-specified options (as a Python dictionary of strings mapped to strings) that will be passed to the backend partitioning APIs. The `clear` argument defaults to `True` and clears any previous optimizations done on the block. If you want to chain optimizations together, set `clear` to `False`. The actual partitioning takes place during the forward pass. If you want to use `hybridize` to chain multiple optimizations, be sure to execute a forward pass after each call to `hybridize`. 
-
-If you just want to partition the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
+For the Gluon API, hybridization is needed, so calling `optimize_for` on a non-hybridized block will hybridize it.
+If the users need to pass some hybridization parameters, they can either call `hybridize` explicitedly, or directly pass the arguments to `optimize_for`.
 
+This:
 ```python
-block.optimize_for(x, backend=None, backend_opts=None, clear=True, **kwargs)
+block.hybridize(static_shape=True, static_alloc=False)
+block.optimize_for(x, backend='myPart')
 ```
-
-When the `optimize_for` API is called on a HybridBlock it partitions immediately. This lets users export the partitioned model without running a complete forward pass. Chaining multiple optimizations is as simple as calling `optimize_for` multiple times, no need to execute a forward pass (as opposed to `hybridize`).
-
+is equivalent to:
 ```python
-block.optimize_for(x, backend='myPart')
-block.optimize_for(x, backend='myOtherPart', clear=False)
-block.export('partitioned')
+block.optimize_for(x, backend='myPart', static_shape=True, static_alloc=False)
 ```
 
-But you can also use `optimize_for` in place of `hybridize` and run inference immediately after too.
+It's important to note that `hybridize` clars the CachedOp and any previous optimization.

Review comment:
       clars --> clears







[GitHub] [incubator-mxnet] Kh4L commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
Kh4L commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525608578



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1205,19 +1212,32 @@ def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, pa
             The name of backend, as registered in `SubgraphBackendRegistry`, default None
         backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
             Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        clear : bool, default False
+            clears any previous optimizations
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.

Review comment:
       Good catch! Thanks







[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
waytrue17 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525600985



##########
File path: example/extensions/lib_pass/README.md
##########
@@ -83,17 +84,7 @@ APIs in MXNet are available in both Symbol and Gluon APIs. For the Symbol API, `
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
-
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
-
-If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs (in the `kwargs`).

Review comment:
       Makes sense, thanks







[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524856020



##########
File path: example/extensions/lib_pass/README.md
##########
@@ -85,13 +86,7 @@ sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 
 The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
 
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
+The `hybridize` function prepares the HybridBlock to be converted into a backend symbol.

Review comment:
       do we need to mention hybridize at all anymore?







[GitHub] [incubator-mxnet] Kh4L commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
Kh4L commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525609652



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1175,7 +1175,14 @@ def _call_cached_op(self, *args):
             out = [out]
         return _regroup(out, self._out_format)
 
-    def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, partition_if_dynamic=False, **kwargs):
+    def optimize_for(self, x, *args, backend=None, clear=False,

Review comment:
       That's a good point, we should make it always required







[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mseth10 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524818128



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1271,33 +1297,44 @@ def hybridize(self, active=True, backend=None, backend_opts=None, clear=True, pa
             The name of backend, as registered in `SubgraphBackendRegistry`, default None
         backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
             Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        clear : bool, default True
+            clears any previous optimizations
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
         """
 
-        self._backend = backend
-        if backend_opts is not None:
-            assert isinstance(backend_opts, dict), \
-            "HybridBlock hybridize requires backend_opts to be a dictionary."
-            self._backend_opts = backend_opts
-
         self._active = active
         self._partition_if_dynamic = partition_if_dynamic
-        self._flags = list(kwargs.items())
+        self._flags = [("static_alloc", static_alloc), ("static_shape", static_shape),
+                       ("inline_limit", inline_limit)]
+        if forward_bulk_size is not None:
+            self._flags.append(("forward_bulk_size", forward_bulk_size))
+        if backward_bulk_size is not None:
+            self._flags.append(("backward_bulk_size", backward_bulk_size))
         if clear:

Review comment:
       We always want to clear the CachedOp for hybridize now. We can remove `clear` from hybridize's argument list.
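
       A sketch of `hybridize` without `clear`, matching the signature quoted in a later diff in this thread:
       ```python
       def hybridize(self, active=True, partition_if_dynamic=False,
                     static_alloc=False, static_shape=False, inline_limit=2,
                     forward_bulk_size=None, backward_bulk_size=None):
           ...
       ```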







[GitHub] [incubator-mxnet] mseth10 merged pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mseth10 merged pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543


   





[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524855961



##########
File path: example/extensions/lib_pass/README.md
##########
@@ -85,13 +86,7 @@ sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 
 The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
 
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
+The `hybridize` function prepares the HybridBlock to be converted into a backend symbol.
 
 If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.

Review comment:
       Let's remove this sentence; it isn't needed anymore.







[GitHub] [incubator-mxnet] Kh4L commented on pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
Kh4L commented on pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#issuecomment-729916953


   @mxnet-bot run ci [unix-cpu]





[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524869494



##########
File path: example/extensions/lib_subgraph/README.md
##########
@@ -102,35 +96,35 @@ Partitioning APIs in MXNet are available in both Symbol and Gluon APIs. For the
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to partition the model for. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before partitioning, and passed to the backend to use during compilation. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend partitioning APIs. The backend options can be passed as kwargs.
 
-For the Gluon API, `hybridize` can be called on HybridBlocks to partition the internal CachedOp Symbol.
+When the `optimize_for` API is called on a HybridBlock it partitions immediately. This lets users export the partitioned model without running a complete forward pass. Chaining multiple optimizations is as simple as calling `optimize_for` multiple times.
 
 ```python
-block.hybridize(backend=None, backend_opts=None, clear=True, **kwargs)
+block.optimize_for(x, backend='myPart')
+block.optimize_for(x, backend='myOtherPart')
+block.export('partitioned')
 ```
 
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which backend that will partition the model. The `backend_opts` are other user-specified options (as a Python dictionary of strings mapped to strings) that will be passed to the backend partitioning APIs. The `clear` argument defaults to `True` and clears any previous optimizations done on the block. If you want to chain optimizations together, set `clear` to `False`. The actual partitioning takes place during the forward pass. If you want to use `hybridize` to chain multiple optimizations, be sure to execute a forward pass after each call to `hybridize`. 
-
-If you just want to partition the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
+For the Gluon API, hybridization is needed, so calling `optimize_for` on a non-hybridized block will hybridize it.
+If the users need to pass some hybridization parameters, they can either call `hybridize` explicitedly, or directly pass the arguments to `optimize_for`.

Review comment:
       explicitedly --> explicitly







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#issuecomment-729917006


   Jenkins CI successfully triggered : [unix-cpu]





[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
waytrue17 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525539102



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1259,45 +1282,55 @@ def register_child(self, block, name=None):
             self._active = False
         self._clear_cached_op()
 
-    def hybridize(self, active=True, backend=None, backend_opts=None, clear=True, partition_if_dynamic=False, **kwargs):
+    def hybridize(self, active=True,
+                  partition_if_dynamic=False,
+                  static_alloc=False,
+                  static_shape=False,
+                  inline_limit=2,
+                  forward_bulk_size=None,
+                  backward_bulk_size=None):
         """Activates or deactivates :py:class:`HybridBlock` s recursively. Has no effect on
         non-hybrid children.
 
         Parameters
         ----------
         active : bool, default True
             Whether to turn hybrid on or off.
-        backend : str
-            The name of backend, as registered in `SubgraphBackendRegistry`, default None
-        backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
-            Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.

Review comment:
       Same here

##########
File path: example/extensions/lib_pass/README.md
##########
@@ -83,17 +84,7 @@ APIs in MXNet are available in both Symbol and Gluon APIs. For the Symbol API, `
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
-
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
-
-If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs (in the `kwargs`).

Review comment:
       Does `optimize_for` take at least 2 arguments, `x` and `backend`?

##########
File path: python/mxnet/gluon/block.py
##########
@@ -1205,19 +1212,32 @@ def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, pa
             The name of backend, as registered in `SubgraphBackendRegistry`, default None
         backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
             Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        clear : bool, default False
+            clears any previous optimizations
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.

Review comment:
       Should this be "during backward pass"?







[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mseth10 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525575877



##########
File path: example/extensions/lib_pass/README.md
##########
@@ -83,17 +84,7 @@ APIs in MXNet are available in both Symbol and Gluon APIs. For the Symbol API, `
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
-
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
-
-If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs (in the `kwargs`).

Review comment:
       There are two different APIs. The symbol optimize_for API needs only backend. The block optimize_for API needs only x. Here we are referring to the first one.
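
       For reference, a side-by-side sketch of the two signatures as quoted in the diffs above:
       ```python
       # Symbol API: `backend` is the only required argument
       sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
       # Gluon HybridBlock API: the example input `x` is the only required argument
       block.optimize_for(x, *args, backend=None, clear=False, **kwargs)
       ```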







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#issuecomment-728326212


   Hey @Kh4L , Thanks for submitting the PR 
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [website, windows-gpu, unix-cpu, clang, windows-cpu, sanity, centos-cpu, unix-gpu, edge, centos-gpu, miscellaneous]
   *** 
   _Note_: 
    Only following 3 categories can trigger CI :PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   





[GitHub] [incubator-mxnet] Kh4L commented on pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
Kh4L commented on pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#issuecomment-728670132


   @samskalicky I updated the documentation to reflect the changes





[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
mseth10 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524816765



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1205,19 +1212,32 @@ def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, pa
             The name of backend, as registered in `SubgraphBackendRegistry`, default None
         backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
             Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        clear : bool, default False
+            clears any previous optimizations
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        **kwargs: The backend options, optional
+            Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
         """
+        self._backend = backend
+        if len(kwargs) > 0:
+             self._backend_opts = kwargs

Review comment:
       Let's clear `self._backend` and `self._backend_opts` at the end of this function, so that if users call hybridize after calling optimize_for, it won't run `sym.optimize_for` again for the same backend.
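
       A minimal sketch of the suggested reset (hypothetical placement, at the end of `optimize_for` once partitioning has run):
       ```python
       # drop the backend state so a later hybridize() call does not
       # re-run sym.optimize_for for the same backend
       self._backend = None
       self._backend_opts = {}
       ```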







[GitHub] [incubator-mxnet] Kh4L commented on a change in pull request #19543: Separate backend from hybridize and refactor optimize_for kwargs

Posted by GitBox <gi...@apache.org>.
Kh4L commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524863751



##########
File path: example/extensions/lib_pass/README.md
##########
@@ -85,13 +86,7 @@ sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 
 The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
 
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
+The `hybridize` function prepares the HybridBlock to be converted into a backend symbol.

Review comment:
       You are right, not anymore



