Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/03/22 21:35:51 UTC

[GitHub] [tvm] AndrewZhaoLuo opened a new pull request #7722: [Topi, Relay] Add cumprod

AndrewZhaoLuo opened a new pull request #7722:
URL: https://github.com/apache/tvm/pull/7722


   Thanks for contributing to TVM! Please refer to the contribution guidelines at https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-mentioning them in the pull request thread.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] mbrookhart commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600849643



##########
File path: python/tvm/relay/op/_transform.py
##########
@@ -19,16 +19,17 @@
 # pylint: disable=too-many-local-variables, too-many-arguments, no-else-return
 
 from __future__ import absolute_import
+
 import tvm
-from tvm import te
-from tvm.te.hybrid import script
+from tvm import te, topi
 from tvm.runtime import convert
-from tvm import topi
+from tvm.te.hybrid import script

Review comment:
       It's used elsewhere in this file; for some reason the autoformatter moved it around in the import list.







[GitHub] [tvm] mbrookhart commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600842607



##########
File path: python/tvm/topi/cuda/scan.py
##########
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
         The result has the same size as data, and the same shape as data if axis is not None.
         If axis is None, the result is a 1-d array.
     """
-    if axis is None:
-        axis = 0
-        data = reshape(data, (prod(data.shape),))
-    axis = get_const_int(axis)
-    if exclusive is not None and exclusive != 0:
-        return exclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
-    return inclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
+    return scanop(
+        data=data,
+        binop=tvm.tir.generic.add,
+        identity_value=0,
+        axis=axis,
+        dtype=dtype,
+        exclusive=exclusive,
+    )
+
+
+def cumprod(
+    data: tvm.te.Tensor,
+    axis: Optional[int] = None,
+    dtype: Optional[int] = None,
+    exclusive: Optional[bool] = None,
+):
+    """Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
+
+    Parameters
+    ----------
+    data : tvm.te.Tensor
+        The input data to the operator.
+
+    axis : int, optional
+        Axis along which the cumulative product is computed. The default (None) is to compute
+        the cumproduct over the flattened array.
+
+    dtype : string, optional
+        Type of the returned array and of the accumulator in which the elements are multiplied.
+        If dtype is not specified, it defaults to the dtype of data.
+
+    exclusive : bool, optional
+        If True, will return exclusive product in which the first element is not
+        included. In other terms, if True, the j-th output element would be
+        the product of the first (j-1) elements. Otherwise, it would be the product of
+        the first j elements.

Review comment:
       Hehehe, I tagged the wrong line. I know why the CUDA scan needs to be inclusive or exclusive (it's needed for other ops), but I'm not sure why cumprod needs the option?
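
   As a concrete illustration of the distinction being discussed (a minimal NumPy sketch, not code from this PR), an exclusive cumulative product is just the inclusive result shifted right by one and seeded with the multiplicative identity:

       # Minimal sketch (not from the PR): inclusive vs. exclusive cumprod.
       # NumPy itself only provides the inclusive form.
       import numpy as np

       data = np.array([2, 3, 4, 5])

       inclusive = np.cumprod(data)                        # [  2   6  24 120]
       # Exclusive: j-th output is the product of the first (j - 1) inputs,
       # i.e. shift right by one and seed with the identity value 1.
       exclusive = np.concatenate(([1], inclusive[:-1]))   # [ 1  2  6 24]

       print(inclusive, exclusive)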







[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600838442



##########
File path: python/tvm/topi/cuda/scan.py
##########
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
         The result has the same size as data, and the same shape as data if axis is not None.
         If axis is None, the result is a 1-d array.
     """
-    if axis is None:
-        axis = 0
-        data = reshape(data, (prod(data.shape),))
-    axis = get_const_int(axis)
-    if exclusive is not None and exclusive != 0:
-        return exclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
-    return inclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
+    return scanop(
+        data=data,
+        binop=tvm.tir.generic.add,
+        identity_value=0,
+        axis=axis,
+        dtype=dtype,
+        exclusive=exclusive,
+    )
+
+
+def cumprod(
+    data: tvm.te.Tensor,
+    axis: Optional[int] = None,
+    dtype: Optional[int] = None,
+    exclusive: Optional[bool] = None,
+):
+    """Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
+
+    Parameters
+    ----------
+    data : tvm.te.Tensor
+        The input data to the operator.
+
+    axis : int, optional
+        Axis along which the cumulative product is computed. The default (None) is to compute
+        the cumproduct over the flattened array.
+
+    dtype : string, optional
+        Type of the returned array and of the accumulator in which the elements are multiplied.
+        If dtype is not specified, it defaults to the dtype of data.
+
+    exclusive : bool, optional
+        If True, will return exclusive product in which the first element is not
+        included. In other terms, if True, the j-th output element would be
+        the product of the first (j-1) elements. Otherwise, it would be the product of
+        the first j elements.

Review comment:
       I'll defer to @masahi on this. I do agree that it adds a lot of complexity for not much gain.
   
   For what it's worth, Thrust has both inclusive and exclusive scan operations, which may have motivated the current interface.
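
   The body of the new cumprod wrapper falls outside the quoted hunk, but judging from the cumsum wrapper shown above it presumably reduces to a call to the shared scanop helper with a multiplicative binop and an identity value of 1 (a sketch under that assumption, not the verbatim PR code):

       # Sketch only: assumes cumprod mirrors the cumsum wrapper shown in the
       # hunk above, and that scanop is the shared helper introduced in this PR.
       import tvm
       from tvm import te

       def cumprod(data: te.Tensor, axis=None, dtype=None, exclusive=None):
           return scanop(
               data=data,
               binop=tvm.tir.generic.multiply,  # multiply instead of add
               identity_value=1,                # multiplicative identity
               axis=axis,
               dtype=dtype,
               exclusive=exclusive,
           )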







[GitHub] [tvm] masahi commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600847541



##########
File path: python/tvm/topi/cuda/scan.py
##########
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
         The result has the same size as data, and the same shape as data if axis is not None.
         If axis is None, the result is a 1-d array.
     """
-    if axis is None:
-        axis = 0
-        data = reshape(data, (prod(data.shape),))
-    axis = get_const_int(axis)
-    if exclusive is not None and exclusive != 0:
-        return exclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
-    return inclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
+    return scanop(
+        data=data,
+        binop=tvm.tir.generic.add,
+        identity_value=0,
+        axis=axis,
+        dtype=dtype,
+        exclusive=exclusive,
+    )
+
+
+def cumprod(
+    data: tvm.te.Tensor,
+    axis: Optional[int] = None,
+    dtype: Optional[int] = None,
+    exclusive: Optional[bool] = None,
+):
+    """Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
+
+    Parameters
+    ----------
+    data : tvm.te.Tensor
+        The input data to the operator.
+
+    axis : int, optional
+        Axis along which the cumulative product is computed. The default (None) is to compute
+        the cumproduct over the flattened array.
+
+    dtype : string, optional
+        Type of the returned array and of the accumulator in which the elements are multiplied.
+        If dtype is not specified, it defaults to the dtype of data.
+
+    exclusive : bool, optional
+        If True, will return exclusive product in which the first element is not
+        included. In other terms, if True, the j-th output element would be
+        the product of the first (j-1) elements. Otherwise, it would be the product of
+        the first j elements.

Review comment:
       Yes, ONNX cumsum has an `exclusive` option. There is a good use case for exclusive cumsum but probably not for cumprod. We can always do inclusive scan for cumprod.
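
   For what it's worth, a typical use case for an exclusive cumsum (an illustrative example, not one named in this thread) is turning variable-length segment sizes into start offsets:

       # Illustration (not from the PR): the exclusive cumsum of segment
       # lengths gives the start offset of each segment.
       import numpy as np

       lengths = np.array([3, 1, 4, 2])
       ends = np.cumsum(lengths)                   # inclusive: [ 3  4  8 10]
       starts = np.concatenate(([0], ends[:-1]))   # exclusive: [0 3 4 8]
       print(starts)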







[GitHub] [tvm] masahi commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600848449



##########
File path: python/tvm/relay/op/_transform.py
##########
@@ -19,16 +19,17 @@
 # pylint: disable=too-many-local-variables, too-many-arguments, no-else-return
 
 from __future__ import absolute_import
+
 import tvm
-from tvm import te
-from tvm.te.hybrid import script
+from tvm import te, topi
 from tvm.runtime import convert
-from tvm import topi
+from tvm.te.hybrid import script

Review comment:
       unused import?







[GitHub] [tvm] masahi commented on pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-806353561


   thanks @AndrewZhaoLuo @mbrookhart @jwfromm 





[GitHub] [tvm] masahi commented on pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-805359158


   You've got a merge issue; you'll need to rebase.





[GitHub] [tvm] AndrewZhaoLuo commented on pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-806073342


   @masahi PTAL. I've implemented all your suggested changes.





[GitHub] [tvm] mbrookhart commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600830424



##########
File path: python/tvm/topi/cuda/scan.py
##########
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
         The result has the same size as data, and the same shape as data if axis is not None.
         If axis is None, the result is a 1-d array.
     """
-    if axis is None:
-        axis = 0
-        data = reshape(data, (prod(data.shape),))
-    axis = get_const_int(axis)
-    if exclusive is not None and exclusive != 0:
-        return exclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
-    return inclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
+    return scanop(
+        data=data,
+        binop=tvm.tir.generic.add,
+        identity_value=0,
+        axis=axis,
+        dtype=dtype,
+        exclusive=exclusive,
+    )
+
+
+def cumprod(
+    data: tvm.te.Tensor,
+    axis: Optional[int] = None,
+    dtype: Optional[int] = None,
+    exclusive: Optional[bool] = None,
+):
+    """Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
+
+    Parameters
+    ----------
+    data : tvm.te.Tensor
+        The input data to the operator.
+
+    axis : int, optional
+        Axis along which the cumulative product is computed. The default (None) is to compute
+        the cumproduct over the flattened array.
+
+    dtype : string, optional
+        Type of the returned array and of the accumulator in which the elements are multiplied.
+        If dtype is not specified, it defaults to the dtype of data.
+
+    exclusive : bool, optional
+        If True, will return exclusive product in which the first element is not
+        included. In other terms, if True, the j-th output element would be
+        the product of the first (j-1) elements. Otherwise, it would be the product of
+        the first j elements.

Review comment:
       I don't think numpy has an exclusive option? Do we need this? https://numpy.org/doc/stable/reference/generated/numpy.cumprod.html







[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600860418



##########
File path: python/tvm/topi/cuda/scan.py
##########
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
         The result has the same size as data, and the same shape as data if axis is not None.
         If axis is None, the result is a 1-d array.
     """
-    if axis is None:
-        axis = 0
-        data = reshape(data, (prod(data.shape),))
-    axis = get_const_int(axis)
-    if exclusive is not None and exclusive != 0:
-        return exclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
-    return inclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
+    return scanop(
+        data=data,
+        binop=tvm.tir.generic.add,
+        identity_value=0,
+        axis=axis,
+        dtype=dtype,
+        exclusive=exclusive,
+    )
+
+
+def cumprod(
+    data: tvm.te.Tensor,
+    axis: Optional[int] = None,
+    dtype: Optional[int] = None,
+    exclusive: Optional[bool] = None,
+):
+    """Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
+
+    Parameters
+    ----------
+    data : tvm.te.Tensor
+        The input data to the operator.
+
+    axis : int, optional
+        Axis along which the cumulative product is computed. The default (None) is to compute
+        the cumproduct over the flattened array.
+
+    dtype : string, optional
+        Type of the returned array and of the accumulator in which the elements are multiplied.
+        If dtype is not specified, it defaults to the dtype of data.
+
+    exclusive : bool, optional
+        If True, will return exclusive product in which the first element is not
+        included. In other terms, if True, the j-th output element would be
+        the product of the first (j-1) elements. Otherwise, it would be the product of
+        the first j elements.

Review comment:
       We might as well keep cumprod's exclusive option since it has very little implementation overhead; the only thing needed to support it is the `identity_value` fields, which are pretty simple.
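
   To make the `identity_value` point concrete (an illustrative sketch, not code from the PR): deriving an exclusive scan from an inclusive one only requires seeding the first output slot with the right identity, 0 for add and 1 for multiply:

       # Illustration (not PR code): the identity value is the only thing that
       # differs between the exclusive cumsum and cumprod derivations.
       import numpy as np

       def exclusive_from_inclusive(inclusive, identity_value):
           return np.concatenate(([identity_value], inclusive[:-1]))

       x = np.array([2, 3, 4])
       print(exclusive_from_inclusive(np.cumsum(x), 0))    # [0 2 5]
       print(exclusive_from_inclusive(np.cumprod(x), 1))   # [1 2 6]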







[GitHub] [tvm] AndrewZhaoLuo commented on pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-805391908


   @masahi PTAL





[GitHub] [tvm] masahi merged pull request #7722: [Topi, Relay] Add cumprod

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #7722:
URL: https://github.com/apache/tvm/pull/7722


   

