Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/03 13:19:51 UTC

[GitHub] [tvm] t-vi opened a new pull request #7023: Save PyTorch frontend state in object

t-vi opened a new pull request #7023:
URL: https://github.com/apache/tvm/pull/7023


   While the functional approach is pretty neat, we ended up having global state (the default dtype), and there will soon be more (caching of inferred types, see #6900). To avoid having to pass the state around, this moves the op conversion into a class whose instances hold the state.
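   As a rough illustration of the pattern, a minimal sketch (the class name `PyTorchOpConverter`, the constructor arguments, and the method bodies are assumptions for illustration, not necessarily what this PR ends up with):

   ```python
   class PyTorchOpConverter:
       """Sketch: op converters become methods on an object that carries the state."""

       def __init__(self, prelude, default_dtype="float32"):
           # state that was previously global or passed around explicitly
           self.prelude = prelude
           self.default_dtype = default_dtype
           self.create_convert_map()

       def arange(self, inputs, input_types):
           # converters that need the state read it from self instead of a global
           dtype = input_types[0] or self.default_dtype
           ...  # build and return the corresponding Relay expression here

       def create_convert_map(self):
           # TorchScript op name -> bound method
           self.convert_map = {
               "aten::arange": self.arange,
               # ... the remaining aten::/prim:: entries
           }
   ```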
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   Thanks for working on this. I have three questions:
   
   1. What is the distinction between op converter methods with the `@staticmethod` annotation and the ones without it (the ones that take `self` as an argument)?
   2. Can we remove the `functools.partial` stuff? That is, rather than having `def _unary(name, inputs, input_types):` etc., can we have something like `def _unary(name): return lambda inputs, input_types: ...`?
   3. Do you intend to move functions such as `convert_operators`, `convert_block`, etc. into the class later? These are the functions that currently require passing around "global" variables such as `outputs`, `convert_map`, `prelude`, and `default_dtype`.
   
   cc @siju-samuel We will be doing a big refactoring of the PyTorch frontend.





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738631353


   Ah I see, thanks.





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738811512


   Thank you for reviewing & merging, @masahi!





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738362461


   Also, now that we are encapsulating each converter inside a class, I think it is OK to remove the leading underscore `_` from each converter method, if you prefer (replace `def _` -> `def `, `self._` -> `self.`). I'm fine either way.





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738607046


   Thank you for looking at this.
   
   1. I used static methods to avoid having an unused `self`. I'm not terribly attached to it if you like the consistency of everything being a regular method.
   2. This is again a consistency thing. Right now all methods apply the operation; moving some back to returning an `_impl` has us doing two different things (though I'm not opposed). One alternative I'd consider is to actually register a method for every op we handle (e.g. `convert_op_aten_mm`) and build the conversion dictionary on the fly (see the sketch below). For the time being I'll move the things taking extra arguments back to the `_impl` scheme as you suggest.
   3. Yes, I thought I'd do the conversion in pieces. If you prefer, I can move these now, too.
   4. Good point about `_`, in particular since a leading `_` on class attributes has special meaning in Python...
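   A rough sketch of that `getattr`-based alternative from point 2 (purely hypothetical; the `convert_op_<namespace>_<op>` naming convention and the `lookup` helper are made up for illustration):

   ```python
   class PyTorchOpConverter:
       # one method per op, named after the TorchScript op it handles
       def convert_op_aten_mm(self, inputs, input_types):
           ...

       def convert_op_aten_relu(self, inputs, input_types):
           ...

       def lookup(self, op_name):
           # "aten::mm" -> convert_op_aten_mm, discovered via getattr on the fly
           return getattr(self, "convert_op_" + op_name.replace("::", "_"), None)
   ```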
   





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738654692


   > Also, I wondered if you're attached to exempting the quantized ops in `report_missing_conversion` and then adding them to the `convert_map` later in the `from_pytorch` function. If one moves the addition to `convert_map` up, one can drop the special casing in `report_missing_conversion`.
   
   Yeah, we can definitely register the quantized op converters early. I don't think there is a deep reason they are registered late in the current implementation.





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738800569


   Thanks @t-vi 





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738647194


   So what I haven't done yet is move all the utility functions that use the state (`default_dtype` and `prelude`) into the class. We could add that here or defer it.
   Also, I wondered if you're attached to exempting the quantized ops in `report_missing_conversion` and then adding them to the `convert_map` later in the `from_pytorch` function. If one moves the addition to `convert_map` up, one can drop the special casing in `report_missing_conversion`. (Also, I'm not sure whether one wouldn't just add the quantized ops from the start, but I did not study the design enough here.)
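   In code, the idea would be roughly the following (a hypothetical sketch; the converter class name and constructor are assumptions, `prelude`, `default_dtype`, and `op_names` are meant as in `from_pytorch`, and the quantized converters may still need the model's quantization parameters to be prepared first):

   ```python
   # Register the quantized converters up front ...
   converter = PyTorchOpConverter(prelude, default_dtype)  # class name/signature assumed
   converter.update_convert_map(qnn_torch.convert_map)
   # ... so the missing-op check no longer needs to special-case quantized ops.
   converter.report_missing_conversion(op_names)
   ```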





[GitHub] [tvm] masahi merged pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #7023:
URL: https://github.com/apache/tvm/pull/7023


   





[GitHub] [tvm] t-vi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738630593


   So the object holds some state, such as the `default_dtype`, in order not to have to pass it around to the methods. The methods that use this state have to be non-static; the others, which don't use the state, can be static.
   But I have now made all conversion methods non-static; maybe that helps, too.
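   For illustration, the distinction looks roughly like this (a simplified, hypothetical sketch with invented bodies; as said above, the conversion methods end up being non-static anyway):

   ```python
   class PyTorchOpConverter:
       def full_like(self, inputs, input_types):
           # hypothetical body: reads instance state, so it cannot be a staticmethod
           dtype = input_types[0] or self.default_dtype
           ...

       @staticmethod
       def linspace(inputs, input_types):
           # hypothetical body: depends only on its arguments, so it can be static
           ...
   ```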





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738346443


   Right now CI is having an issue; please retrigger after https://github.com/apache/tvm/pull/7025 is merged.





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738630593


   So the object holds some state, such as the `default_dtype`, in order not to have to pass it around to the methods. The methods that use this state have to be non-static; the others, which don't use the state, can be static.





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738629079


   Maybe I'm missing something; I'm just wondering why `full_like` at https://github.com/apache/tvm/blob/464396706ec075fbada0f04629211e1ae7276234/python/tvm/relay/frontend/pytorch.py#L610 is non-static while `linspace` just below is static. By "all" I just meant all op converters; I don't see why we cannot make some of the op converters static.





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   OK, I have two questions:
   
   1. What is the distinction between op converter methods with the `@staticmethod` annotation and the ones without it (the ones that take `self` as an argument)?
   2. Can we remove the `functools.partial` stuff? Rather than having `def _unary(name, inputs, input_types):` etc., would something like `def _unary(name): return lambda inputs, input_types: ...` be possible?
   
   cc @siju-samuel We will be doing a big refactoring of the PyTorch frontend.





[GitHub] [tvm] masahi commented on a change in pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#discussion_r535915492



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -2377,18 +2023,428 @@ def _impl(inputs, input_types):
         counts = _op.zeros(_op.reshape(dim, [1]), out_dtype)
         return _op.scatter_add(counts, data, updates, axis=0)
 
-    return _impl
-
-
-def _scatter_add():
-    def _impl(inputs, input_types):
+    @staticmethod
+    def scatter_add(inputs, input_types):
         data = inputs[0]
         axis = inputs[1]
         index = inputs[2]
         src = inputs[3]
         return _op.scatter_add(data, index, src, axis=axis)
 
-    return _impl
+    # Operator mappings
+    def create_convert_map(self):
+        self.convert_map = {
+            "aten::pixel_shuffle": self.pixel_shuffle,
+            "aten::device": self.none,
+            "prim::device": self.none,
+            "aten::sub": self.make_elemwise("subtract"),
+            "aten::sub_": self.make_elemwise("subtract"),
+            "aten::max": self.max,
+            "aten::min": self.min,
+            "aten::mul": self.make_elemwise("multiply"),
+            "aten::mul_": self.make_elemwise("multiply"),
+            "aten::pow": self.make_elemwise("power"),
+            "aten::arange": self.arange,
+            "aten::meshgrid": self.meshgrid,
+            "aten::div": self.make_elemwise("divide"),
+            "aten::div_": self.make_elemwise("divide"),
+            "aten::floor_divide": self.make_elemwise("floor_divide"),
+            "aten::true_divide": self.make_elemwise("divide"),
+            "aten::addcdiv": self.addcdiv,
+            "aten::addcmul": self.addcmul,
+            "aten::ones": self.ones,
+            "aten::ones_like": self.ones_like,
+            "aten::zeros": self.zeros,
+            "aten::zeros_like": self.zeros_like,
+            "aten::full": self.full,
+            "aten::full_like": self.full_like,
+            "aten::linspace": self.linspace,
+            "aten::reciprocal": self.reciprocal,
+            "aten::repeat": self.repeat,
+            "aten::repeat_interleave": self.repeat_interleave,
+            "aten::to": self.to,
+            "aten::squeeze": self.squeeze,
+            "aten::unsqueeze": self.unsqueeze,
+            "aten::cat": self.concatenate,
+            "aten::slice": self.slice,
+            "aten::split": self.split,
+            "aten::split_with_sizes": self.split_with_sizes,
+            "aten::select": self.select,
+            "aten::take": self.take,
+            "aten::where": self.where,
+            "aten::topk": self.topk,
+            "aten::relu": self.relu,
+            "aten::relu_": self.relu,
+            "aten::prelu": self.prelu,
+            "aten::leaky_relu": self.leaky_relu,
+            "aten::leaky_relu_": self.leaky_relu,
+            "aten::elu": self.elu,
+            "aten::elu_": self.elu,
+            "aten::celu": self.celu,
+            "aten::gelu": self.gelu,
+            "aten::selu": self.selu,
+            "aten::log_sigmoid": self.log_sigmoid,
+            "aten::adaptive_avg_pool2d": self.adaptive_avg_pool_2d,
+            "aten::adaptive_max_pool2d": self.adaptive_max_pool_2d,
+            "aten::max_pool2d": self.maxpool_2d,
+            "aten::max_pool2d_with_indices": self.maxpool_2d_with_indices,
+            "aten::max_pool1d": self.maxpool_1d,
+            "aten::max_pool3d": self.maxpool_3d,
+            "aten::hardtanh": self.hardtanh,
+            "aten::hardtanh_": self.hardtanh,
+            "aten::_convolution": self.convolution,
+            "aten::softmax": self.softmax,
+            "aten::threshold": self.threshold,
+            "aten::threshold_": self.threshold,
+            "aten::contiguous": self.contiguous,
+            "aten::batch_norm": self.batch_norm,
+            "aten::instance_norm": self.instance_norm,
+            "aten::layer_norm": self.layer_norm,
+            "aten::group_norm": self.group_norm,
+            "aten::transpose": self.transpose,
+            "aten::transpose_": self.transpose,
+            "aten::t": self.transpose,
+            "aten::flatten": self.flatten,
+            "aten::addmm": self.addmm,
+            "aten::size": self.size,
+            "aten::view": self.view,
+            "aten::reshape": self.reshape,
+            "aten::clone": self.clone,
+            "aten::log_softmax": self.log_softmax,
+            "aten::sigmoid": self.sigmoid,
+            "aten::softplus": self.softplus,
+            "aten::avg_pool2d": self.avg_pool2d,
+            "aten::avg_pool3d": self.avg_pool3d,
+            "aten::dropout": self.dropout,
+            "aten::dropout_": self.dropout,
+            "aten::feature_dropout": self.dropout,
+            "aten::alpha_dropout": self.dropout,
+            "aten::mean": self.mean,
+            "aten::chunk": self.chunk,
+            "aten::matmul": self.matmul,
+            "aten::bmm": self.matmul,
+            "aten::expand": self.expand,
+            "aten::Int": self.int,
+            "prim::NumToTensor": self.numtotensor,
+            "prim::ImplicitTensorToNum": self.tensortonum,
+            "aten::ScalarImplicit": self.tensortonum,
+            "aten::constant_pad_nd": self.make_pad("constant"),
+            "aten::reflection_pad1d": self.make_pad("reflect"),
+            "aten::reflection_pad2d": self.make_pad("reflect"),
+            "aten::replication_pad1d": self.make_pad("edge"),
+            "aten::replication_pad2d": self.make_pad("edge"),
+            "aten::replication_pad3d": self.make_pad("edge"),
+            "aten::permute": self.transpose,
+            "aten::sum": self.make_reduce("sum"),
+            "aten::prod": self.make_reduce("prod"),
+            "aten::argmin": self.make_reduce("argmin"),
+            "aten::argmax": self.make_reduce("argmax"),
+            "aten::norm": self.norm,
+            "aten::frobenius_norm": self.frobenius_norm,
+            "aten::std": self.std,
+            "aten::var": self.variance,
+            "aten::abs": self.make_unary("abs"),
+            "aten::neg": self.make_unary("negative"),
+            "aten::cos": self.make_unary("cos"),
+            "aten::cosh": self.make_unary("cosh"),
+            "aten::sin": self.make_unary("sin"),
+            "aten::sinh": self.make_unary("sinh"),
+            "aten::tan": self.make_unary("tan"),
+            "aten::tanh": self.make_unary("tanh"),
+            "aten::acos": self.make_unary("acos"),
+            "aten::asin": self.make_unary("asin"),
+            "aten::atan": self.make_unary("atan"),
+            "aten::log": self.make_unary("log"),
+            "aten::log2": self.make_unary("log2"),
+            "aten::log10": self.make_unary("log10"),
+            "aten::log1p": self.log1p,
+            "aten::exp": self.make_unary("exp"),
+            "aten::erf": self.make_unary("erf"),
+            "aten::trunc": self.make_unary("trunc"),
+            "aten::sign": self.make_unary("sign"),
+            "aten::sqrt": self.make_unary("sqrt"),
+            "aten::rsqrt": self.make_unary("rsqrt"),
+            "aten::ceil": self.make_unary("ceil"),
+            "aten::floor": self.make_unary("floor"),
+            "aten::round": self.make_unary("round"),
+            "aten::isfinite": self.make_unary("isfinite"),
+            "aten::isinf": self.make_unary("isinf"),
+            "aten::isnan": self.make_unary("isnan"),
+            "aten::clamp": self.clamp,
+            "aten::clamp_": self.clamp,
+            "aten::detach": self.identity,
+            "aten::upsample_bilinear2d": self.make_upsample("bilinear"),
+            "aten::upsample_nearest2d": self.make_upsample("nearest_neighbor"),
+            "aten::upsample_trilinear3d": self.make_upsample3d("trilinear"),
+            "aten::upsample_nearest3d": self.make_upsample3d("nearest_neighbor"),
+            "aten::expand_as": self.expand_as,
+            "aten::lt": self.make_elemwise("less"),
+            "aten::gt": self.make_elemwise("greater"),
+            "aten::le": self.make_elemwise("less_equal"),
+            "aten::ge": self.make_elemwise("greater_equal"),
+            "aten::ne": self.make_elemwise("not_equal"),
+            "aten::eq": self.make_elemwise("equal"),
+            "aten::logical_not": self.logical_not,
+            "aten::logical_xor": self.logical_xor,
+            "aten::bitwise_not": self.bitwise_not,
+            "aten::bitwise_xor": self.bitwise_xor,
+            "aten::Bool": self.Bool,
+            "aten::Float": self.Float,
+            "aten::adaptive_avg_pool3d": self.adaptive_avg_pool_3d,
+            "aten::adaptive_max_pool3d": self.adaptive_max_pool_3d,
+            "aten::rsub": self.rsub,
+            "aten::embedding": self.embedding,
+            "aten::one_hot": self.one_hot,
+            "aten::mm": self.matmul,
+            "aten::add": self.add,
+            "aten::add_": self.add,
+            "aten::stack": self.stack,
+            "aten::__getitem__": self.list_getitem,
+            "aten::len": self.list_len,
+            "aten::type_as": self.type_as,
+            "aten::gather": self.gather,
+            "aten::index_select": self.select,
+            "aten::index": self.index,
+            "torchvision::nms": self.nms,
+            "aten::logsumexp": self.logsumexp,
+            "torchvision::roi_align": self.roi_align,
+            "aten::unbind": self.unbind,
+            "aten::__and__": self.logical_and,
+            "aten::_shape_as_tensor": self.shape_as_tensor,
+            "aten::nonzero": self.nonzero,
+            "aten::nonzero_numpy": self.nonzero_numpy,
+            "aten::scatter": self.scatter,
+            "aten::scalar_tensor": self.scalar_tensor,
+            "aten::__interpolate": self.interpolate,
+            "aten::IntImplicit": self.identity,
+            "aten::tensor": self.identity,  # used for example in tensor(1.0)
+            "aten::numel": self.numel,
+            "aten::empty": self.empty,
+            "aten::bincount": self.bincount,
+            "aten::scatter_add": self.scatter_add,
+            "aten::__not__": self.logical_not,
+        }
+
+    def update_convert_map(self, custom_map):
+        self.convert_map.update(custom_map)
+
+    def report_missing_conversion(self, op_names):
+        """ Check if all ops in an input graph are supported by TVM """
+        known_ops = [
+            "prim::Constant",
+            "prim::GetAttr",
+            "prim::ListConstruct",
+            "prim::ListUnpack",
+            "prim::TupleConstruct",
+            "prim::TupleUnpack",
+            "prim::RaiseException",
+            "prim::If",
+            "prim::Loop",
+        ]
+        known_ops += list(self.convert_map.keys())
+        known_ops += list(qnn_torch.convert_map.keys())
+
+        missing = [op_name for op_name in op_names if op_name not in known_ops]
+
+        if missing:
+            msg = "The following operators are not implemented: {}".format(missing)
+            raise NotImplementedError(msg)
+
+    def convert_block(self, block, outputs):

Review comment:
       we can also store `outputs` in the class, if you prefer







[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738629079


   Maybe I'm missing something; I'm just wondering why, for example, `full_like` at https://github.com/apache/tvm/blob/464396706ec075fbada0f04629211e1ae7276234/python/tvm/relay/frontend/pytorch.py#L610 is non-static while `linspace` just below is static. By "all" I just meant all op converters; I don't see why we cannot make some of the op converters static.





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738619723


   1. To avoid confusion when we add a new converter, I think we should make everything either a staticmethod or a regular method. Since this class is supposed to be a singleton, staticmethod makes sense to me.
   2. Yes, the arguments of each op converter are supposed to be `inputs, input_types`. If you add another arg like `name, inputs, input_types`, I'd say it is already not consistent. So I prefer lifting the `name` arg into a wrapper function that returns a new impl function (see the sketch below); I think being able to remove all the `functools.partial(...)` calls is a big plus.
   3. OK, we can do that later.
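   For reference, the two styles side by side (a minimal sketch with elided bodies; `make_unary` mirrors the wrapper naming used in this PR):

   ```python
   import functools

   # Style 1: the converter takes an extra `name` argument,
   # so registering it in convert_map needs functools.partial.
   def _unary(name, inputs, input_types):
       ...

   convert_map = {"aten::sqrt": functools.partial(_unary, "sqrt")}

   # Style 2: a wrapper takes `name` and returns an impl with the uniform
   # (inputs, input_types) signature, so functools.partial is not needed.
   def make_unary(name):
       def impl(inputs, input_types):
           ...
       return impl

   convert_map = {"aten::sqrt": make_unary("sqrt")}
   ```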





[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738629079


   Maybe I'm missing something; I'm just wondering why, for example, `full_like` at https://github.com/apache/tvm/blob/464396706ec075fbada0f04629211e1ae7276234/python/tvm/relay/frontend/pytorch.py#L610 is non-static while `linspace` just below is static. By "all" I just meant all op converters (not all methods); I don't see why we cannot make some of the op converters static.





[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738629079


   Maybe I'm missing something; I'm just wondering why, for example, `full_like` at https://github.com/apache/tvm/blob/464396706ec075fbada0f04629211e1ae7276234/python/tvm/relay/frontend/pytorch.py#L610 is non-static while `linspace` just below is static. By "all" I just meant all op converters (not all methods); I don't see why we cannot make some of the op converters static.





[GitHub] [tvm] masahi commented on a change in pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#discussion_r535915492



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
[The diff hunk quoted here is identical to the one in the earlier review comment on this file (create_convert_map, update_convert_map, report_missing_conversion, convert_block); elided to avoid repeating it.]

Review comment:
       we can also store `outputs` in the class, if you prefer.
   
   UPDATE: Not sure if that would make the recursive conversion we do for `If` and `Loop` difficult.







[GitHub] [tvm] t-vi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738625073


   For 1., I can make all methods non-static; I wouldn't know how to reasonably make all methods static.
   Part of this is that I don't agree with the singleton view.
   With the convert map moved into the state, and later the graph, it's not a singleton anymore.
   





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738671472


   What do you think: add more now, or do it later? If the current state is good for you, I'd have a mild tendency to stop here and do a separate PR for further things (including exploiting the new stateful converter for reworking the speedup patch, which I'm looking forward to a lot).





[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   Thanks for working on this. I have two questions:
   
   1. What is the distinction between op converter methods with the `@staticmethod` annotation and the ones without it (the ones that take `self` as an argument)?
   2. Can we remove the `functools.partial` stuff? That is, rather than having `def _unary(name, inputs, input_types):` etc., can we have something like `def _unary(name): return lambda inputs, input_types: ...`?
   
   cc @siju-samuel We will be doing a big refactoring of the PyTorch frontend.





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-737990439


   I've tried to keep this "somewhat minimal", so I didn't move around all the helper functions.
   Personally, I think it'd be neat to dispatch the ops via `getattr` and have a method for each of them, but I realize that this is 1) a style thing and 2) somewhat orthogonal to this change, so I'm leaving it out.
   And I'm happy about shaving off a few lines; not all of them are blank...
   
   @masahi
   





[GitHub] [tvm] masahi commented on a change in pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#discussion_r535916137



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
[Same diff hunk as quoted in the review comments above; elided here.]

Review comment:
       Sorry, that would make the recursive conversion difficult, so this is good as it is.







[GitHub] [tvm] t-vi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738607046


   Thank you for looking at this.
   
   1. I used static methods to avoid having an unused `self`. I'm not terribly attached to it if you like the consistency of everything being a regular method.
   2. This is again a consistency thing. Right now all methods apply the operation; moving some back to returning an `_impl` has us doing two different things (though I'm not opposed). One alternative I'd consider is to actually register a method for every op we handle (e.g. `convert_op_aten_mm`) and build the conversion dictionary on the fly. For the time being I'll move the things taking extra arguments back to the `_impl` scheme as you suggest.
   3. Yes, I thought I'd do the conversion in pieces. If you prefer, I can move these now, too. I will tentatively add them to this PR.
   4. Good point about `_`, in particular since a leading `_` on class attributes has special meaning in Python...
   





[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738619723


   1. To avoid confusion when we add a new converter, I think we should make everything either a staticmethod or a regular method. Since this class is supposed to be a singleton, staticmethod makes sense to me.
   2. Yes, the arguments of each op converter are supposed to be `inputs, input_types`. If you add another arg like `name, inputs, input_types`, I'd say it is already not consistent anyway. So I prefer lifting the `name` arg into a wrapper function that returns a new impl function; I think being able to remove all the `functools.partial(...)` calls is a big plus.
   3. OK, we can do that later.





[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   OK, I have two questions:
   
   1. What is the distinction between op converter methods with the `@staticmethod` annotation and the ones without it (the ones that take `self` as an argument)?
   2. Can we remove the `functools.partial` stuff? That is, rather than having `def _unary(name, inputs, input_types):` etc., can we have something like `def _unary(name): return lambda inputs, input_types: ...`?
   
   cc @siju-samuel We will be doing a big refactoring of the PyTorch frontend.





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738625073


   For 1., I can make all methods non-static; I wouldn't know how to reasonably make all methods static.
   





[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738685243


   Yes, I'll merge this as it is after CI is cleared. We can do the other stuff later.





[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738781159


   At last all the CI is happy. Sorry this took so long.

