Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/02/10 15:43:41 UTC

[GitHub] [tvm] ANSHUMAN87 opened a new pull request #7435: [TOPI] Sparse Add Op added

ANSHUMAN87 opened a new pull request #7435:
URL: https://github.com/apache/tvm/pull/7435


   1: Sparse Add Op added in TOPI.
   2: TF frontend support added.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r587517097



##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -799,6 +799,36 @@ def sparse_dense_padded_strategy(attrs, inputs, out_type, target):
     raise NotImplementedError("sparse_dense_padded is only implemented for cuda")
 
 
+# sparse_add
+def wrap_compute_sparse_add(topi_compute):
+    """wrap sparse add topi compute"""
+
+    def _compute_sparse_add(attrs, inputs, out_type):
+        return [topi_compute(inputs[0], inputs[1], inputs[2], inputs[3])]
+
+    return _compute_sparse_add
+
+
+@override_native_generic_func("sparse_add_strategy")
+def sparse_add_strategy(attrs, inputs, out_type, target):
+    """sparse add generic strategy"""
+    logger.warning("sparse add is not optimized for this platform.")

Review comment:
       @tkonolige : Sorry for the late response, I was stuck on something else. I think this is the general convention for generic strategies.







[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r578600737



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,39 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.

Review comment:
       Okay, I get it. I thought you were referring to operation examples :)







[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r578597721



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,39 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.

Review comment:
       Look in `python/tvm/relay/op/transform.py`.
   
   Examples are useful for showing both new and experienced users how to use the op, especially in this case, where the input formats may be a little confusing.
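   For reference, the kind of example being requested could look like the sketch below: it builds the CSR fields (`data`, `indices`, `indptr`) with scipy and uses plain NumPy to show the expected result. The values mirror the example added later in this PR; the NumPy add only illustrates the semantics of `sparse_add`, it is not the op itself.

```python
import numpy as np
import scipy.sparse as sp

dense_data = np.array([[3.0, 4.0, 4.0],
                       [4.0, 2.0, 5.0]], dtype="float32")

# CSR fields in the order sparse_add consumes them: data, indices, indptr.
sparse_mat = sp.csr_matrix(
    (np.array([4.0, 8.0], dtype="float32"),   # data
     np.array([0, 2], dtype="int32"),         # indices
     np.array([0, 1, 2], dtype="int32")),     # indptr
    shape=(2, 3),
)

# The dense equivalent of sparse_mat is [[4, 0, 0], [0, 0, 8]], so:
print(sparse_mat.toarray() + dense_data)
# [[ 7.  4.  4.]
#  [ 4.  2. 13.]]
```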







[GitHub] [tvm] mbaret commented on pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
mbaret commented on pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#issuecomment-797046796


   Thanks @ANSHUMAN87, @tkonolige, this is merged.





[GitHub] [tvm] ANSHUMAN87 commented on pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#issuecomment-785195522


   @tkonolige : All your comments are addressed now, PTAL. Thanks!





[GitHub] [tvm] mbaret commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
mbaret commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r589584883



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,53 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix(CSR) for the matrix addition.

Review comment:
       It says CSR here, but above it says that either BSR or CSR is supported.







[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r587716253



##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -799,6 +799,36 @@ def sparse_dense_padded_strategy(attrs, inputs, out_type, target):
     raise NotImplementedError("sparse_dense_padded is only implemented for cuda")
 
 
+# sparse_add
+def wrap_compute_sparse_add(topi_compute):
+    """wrap sparse add topi compute"""
+
+    def _compute_sparse_add(attrs, inputs, out_type):
+        return [topi_compute(inputs[0], inputs[1], inputs[2], inputs[3])]
+
+    return _compute_sparse_add
+
+
+@override_native_generic_func("sparse_add_strategy")
+def sparse_add_strategy(attrs, inputs, out_type, target):
+    """sparse add generic strategy"""
+    logger.warning("sparse add is not optimized for this platform.")

Review comment:
       Yes, there will be follow-up PRs for sparse add; I believe it will be taken care of in those.







[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r578594031



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,39 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.

Review comment:
       IMO an example won't be required. Also, I could not find a reference in other ops. Could you suggest one?







[GitHub] [tvm] ANSHUMAN87 commented on pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#issuecomment-795675102


   > Broadly lgtm. I'm not especially familiar with sparse implementations. Perhaps @giuseros you could also take a quick look at the TOPI stuff?
   
   Gentle ping @giuseros!





[GitHub] [tvm] mbaret merged pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
mbaret merged pull request #7435:
URL: https://github.com/apache/tvm/pull/7435


   





[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r578599975



##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -202,9 +202,10 @@ bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   ICHECK_EQ(types.size(), 5);
   const auto* dense_data = types[0].as<TensorTypeNode>();
   const auto* sparse_data = types[1].as<TensorTypeNode>();
-  ICHECK_EQ(sparse_data->shape.size(), 1);
+  ICHECK(reporter->Assert(sparse_data->dtype == dense_data->dtype));
+  ICHECK(reporter->Assert(sparse_data->shape.size() == 1));
   const auto* sparse_indices = types[2].as<TensorTypeNode>();
-  ICHECK_EQ(sparse_indices->shape.size(), 1);
+  ICHECK(reporter->Assert(sparse_indices->shape.size() == 1));

Review comment:
       You don't need to wrap this in an ICHECK.







[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r579325284



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,54 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.
+
+    Examples
+    -------
+    .. code-block:: python
+        dense_data = [[ 3.,   4.,   4. ]
+                                  [ 4.,  2.,  5. ]]

Review comment:
       nit: alignment seems off here.

##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,54 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.
+
+    Examples
+    -------
+    .. code-block:: python
+        dense_data = [[ 3.,   4.,   4. ]
+                                  [ 4.,  2.,  5. ]]
+        sparse_data = [4., 8.]
+        sparse_indices =[0, 2]
+        sparse_indptr =[0, 1, 2]
+        dense_shape = [2, 3]

Review comment:
       Is this dense_shape necessary?

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,44 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK(reporter->Assert(sparse_data->dtype == dense_data->dtype));
+  ICHECK(reporter->Assert(sparse_data->shape.size() == 1));
+  const auto* sparse_indices = types[2].as<TensorTypeNode>();
+  ICHECK(reporter->Assert(sparse_indices->shape.size() == 1));

Review comment:
       Can you add error messages on all the ICHECKs?







[GitHub] [tvm] ANSHUMAN87 commented on pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#issuecomment-780686967


   cc @tkonolige, @FrozenGene!





[GitHub] [tvm] ANSHUMAN87 commented on pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#issuecomment-791500800


   cc @FrozenGene, @mbaret!





[GitHub] [tvm] ANSHUMAN87 commented on pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#issuecomment-792573964


   Gentle ping @FrozenGene, @mbaret!





[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r578089073



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,39 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.

Review comment:
       Good documentation! Could you add an example?

##########
File path: python/tvm/relay/frontend/tensorflow.py
##########
@@ -1020,6 +1020,38 @@ def _impl(inputs, attr, params, mod):
     return _impl
 
 
+def _sparse_tensor_dense_add():
+    # Sparse utility from scipy
+    from scipy.sparse import csr_matrix
+
+    def _impl(inputs, attr, params, mod):
+        assert len(inputs) == 4, "There should be 4 input tensors"

Review comment:
       What are the input tensors? (Please document them in this assert.)
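   A sketch of what that could look like; the input order (sparse indices, values, shape, then the dense operand) is an assumption based on TF's `SparseTensorDenseAdd`:

```python
# Hypothetical wording; input order assumed from TF's SparseTensorDenseAdd
# signature (a_indices, a_values, a_shape, b).
assert len(inputs) == 4, (
    "There should be 4 input tensors: sparse indices, sparse values, "
    "sparse dense_shape, and the dense input tensor"
)
```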

##########
File path: python/tvm/relay/op/nn/_nn.py
##########
@@ -142,6 +142,11 @@ def alter_op_layout_sparse_dense(attrs, inputs, tinfos, out_type):
     return topi.nn.sparse_dense_alter_layout(attrs, inputs, tinfos, out_type)
 
 
+# sparse_add
+reg.register_strategy("nn.sparse_add", strategy.sparse_add_strategy)
+reg.register_pattern("nn.sparse_add", reg.OpPattern.ELEMWISE)

Review comment:
       I think this should be `reg.OpPattern.OPAQUE` instead of `reg.OpPattern.ELEMWISE`. I don't think te.extern is fusable.
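   A sketch of the suggested change against the registration quoted above (same names as in `_nn.py`):

```python
# te.extern-based ops are not fusable, so register the op as OPAQUE
# rather than ELEMWISE (per the comment above).
reg.register_strategy("nn.sparse_add", strategy.sparse_add_strategy)
reg.register_pattern("nn.sparse_add", reg.OpPattern.OPAQUE)
```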

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,67 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32
+
+    sparse_data : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indices : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indptr : tvm.te.Tensor
+        1-D with shape [M + 1] (CSR) or
+
+    Returns
+    -------
+    output : tvm.te.Tensor
+        2-D with shape [M, N]
+    """
+    # TODO(ANSHUMAN87): support BSR format too
+    assert len(sparse_data.shape) == 1

Review comment:
       Could you toss an error message on this.
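   For example (this is the wording the PR ends up using later in this thread):

```python
# TODO(ANSHUMAN87): support BSR format too
assert len(sparse_data.shape) == 1, "only CSR format is supported"
```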

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,43 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_data->shape.size(), 1);
+  const auto* sparse_indices = types[2].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_indices->shape.size(), 1);

Review comment:
       I think this could also be user-facing. (see above).

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,43 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_data->shape.size(), 1);
+  const auto* sparse_indices = types[2].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_indices->shape.size(), 1);
+
+  reporter->Assign(types[4], TensorType(dense_data->shape, dense_data->dtype));
+  return true;
+}
+
+Expr MakeSparseAdd(Expr dense_data, Expr sparse_data, Expr sparse_indices, Expr sparse_indptr) {
+  static const Op& op = Op::Get("nn.sparse_add");
+  return Call(op, {dense_data, sparse_data, sparse_indices, sparse_indptr}, Attrs(), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_add").set_body_typed(MakeSparseAdd);
+
+RELAY_REGISTER_OP("nn.sparse_add")
+    .describe(R"code(Add a dense matrix X with sparse matrix Y.
+
+- **dense**: `(M, N)`
+- **sparse**: `(M, N)`
+
+- **out**: `(M, N)`.
+
+)code" TVM_ADD_FILELINE)
+    .set_num_inputs(4)
+    .add_argument("dense_data", "2D Tensor", "Dense data matrix.")
+    .add_argument("sparse_data", "1D Tensor", "Sparse data matrix.")
+    .add_argument("sparse_indices", "1D Tensor", "Sparse indices matrix.")
+    .add_argument("sparse_indptr", "1D Tensor", "Sparse index pointer matrix.")

Review comment:
       ```suggestion
       .add_argument("sparse_indptr", "1D Tensor", "Sparse index pointer vector.")
   ```

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,43 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_data->shape.size(), 1);
+  const auto* sparse_indices = types[2].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_indices->shape.size(), 1);
+
+  reporter->Assign(types[4], TensorType(dense_data->shape, dense_data->dtype));
+  return true;
+}
+
+Expr MakeSparseAdd(Expr dense_data, Expr sparse_data, Expr sparse_indices, Expr sparse_indptr) {
+  static const Op& op = Op::Get("nn.sparse_add");
+  return Call(op, {dense_data, sparse_data, sparse_indices, sparse_indptr}, Attrs(), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_add").set_body_typed(MakeSparseAdd);
+
+RELAY_REGISTER_OP("nn.sparse_add")
+    .describe(R"code(Add a dense matrix X with sparse matrix Y.
+
+- **dense**: `(M, N)`
+- **sparse**: `(M, N)`
+
+- **out**: `(M, N)`.
+
+)code" TVM_ADD_FILELINE)
+    .set_num_inputs(4)
+    .add_argument("dense_data", "2D Tensor", "Dense data matrix.")
+    .add_argument("sparse_data", "1D Tensor", "Sparse data matrix.")

Review comment:
       ```suggestion
       .add_argument("sparse_data", "1D Tensor", "Sparse data vector.")
   ```

##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -799,6 +799,36 @@ def sparse_dense_padded_strategy(attrs, inputs, out_type, target):
     raise NotImplementedError("sparse_dense_padded is only implemented for cuda")
 
 
+# sparse_add
+def wrap_compute_sparse_add(topi_compute):
+    """wrap sparse add topi compute"""
+
+    def _compute_sparse_add(attrs, inputs, out_type):
+        return [topi_compute(inputs[0], inputs[1], inputs[2], inputs[3])]
+
+    return _compute_sparse_add
+
+
+@override_native_generic_func("sparse_add_strategy")
+def sparse_add_strategy(attrs, inputs, out_type, target):
+    """sparse add generic strategy"""
+    logger.warning("sparse add is not optimized for this platform.")
+    strategy = _op.OpStrategy()
+    strategy.add_implementation(
+        wrap_compute_sparse_add(topi.nn.sparse_add),
+        wrap_topi_schedule(topi.generic.schedule_sparse_add),

Review comment:
       ```suggestion
           wrap_topi_schedule(topi.generic.schedule_extern),
   ```
   Then you can delete all the unnecessary scheduling functions.

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,67 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32
+
+    sparse_data : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indices : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indptr : tvm.te.Tensor
+        1-D with shape [M + 1] (CSR) or
+
+    Returns
+    -------
+    output : tvm.te.Tensor
+        2-D with shape [M, N]
+    """
+    # TODO(ANSHUMAN87): support BSR format too
+    assert len(sparse_data.shape) == 1
+    return _sparse_add_csr(dense_data, sparse_data, sparse_indices, sparse_indptr)
+
+
+def _sparse_add_csr(dense_data_inp, sparse_data_inp, sparse_indices_inp, sparse_indptr_inp):
+    oshape = get_const_tuple(dense_data_inp.shape)
+
+    def _csr_add_ir(dense_data, sparse_data, sparse_indices, sparse_indptr, out_data):
+        irb = tvm.tir.ir_builder.create()
+        dense_data_ptr = irb.buffer_ptr(dense_data)
+        sparse_data_ptr = irb.buffer_ptr(sparse_data)
+        sparse_indices_ptr = irb.buffer_ptr(sparse_indices)
+        sparse_indptr_ptr = irb.buffer_ptr(sparse_indptr)
+
+        out_data_ptr = irb.buffer_ptr(out_data)
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:
+            with irb.for_range(0, oshape[1], kind="serial", name="col") as col:
+                out_data_ptr[row, col] = dense_data_ptr[row, col]
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:
+            offset = sparse_indptr_ptr[row]
+            diff = sparse_indptr_ptr[row + 1] - sparse_indptr_ptr[row]
+            with irb.for_range(0, diff, kind="serial", name="idx") as idx:
+                real_idx = offset + idx
+                col = sparse_indices_ptr[real_idx]
+                out_data_ptr[row, col] = sparse_data_ptr[real_idx] + out_data_ptr[row, col]
+
+        return irb.get()
+
+    return te.extern(
+        shape=oshape,
+        inputs=[dense_data_inp, sparse_data_inp, sparse_indices_inp, sparse_indptr_inp],
+        fcompute=lambda ins, outs: _csr_add_ir(ins[0], ins[1], ins[2], ins[3], outs[0]),
+        tag="sparse_add_csr",
+        dtype=["float32", "float32", "int32", "int32"],

Review comment:
       I think these dtypes should be inferred from the input.

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,43 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,

Review comment:
       I think you also need to check if the sparse_data dtype is the same as the dense dtype.

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,43 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_data->shape.size(), 1);
+  const auto* sparse_indices = types[2].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_indices->shape.size(), 1);
+
+  reporter->Assign(types[4], TensorType(dense_data->shape, dense_data->dtype));
+  return true;
+}
+
+Expr MakeSparseAdd(Expr dense_data, Expr sparse_data, Expr sparse_indices, Expr sparse_indptr) {
+  static const Op& op = Op::Get("nn.sparse_add");
+  return Call(op, {dense_data, sparse_data, sparse_indices, sparse_indptr}, Attrs(), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_add").set_body_typed(MakeSparseAdd);
+
+RELAY_REGISTER_OP("nn.sparse_add")
+    .describe(R"code(Add a dense matrix X with sparse matrix Y.
+
+- **dense**: `(M, N)`
+- **sparse**: `(M, N)`
+
+- **out**: `(M, N)`.
+
+)code" TVM_ADD_FILELINE)
+    .set_num_inputs(4)
+    .add_argument("dense_data", "2D Tensor", "Dense data matrix.")
+    .add_argument("sparse_data", "1D Tensor", "Sparse data matrix.")
+    .add_argument("sparse_indices", "1D Tensor", "Sparse indices matrix.")

Review comment:
       ```suggestion
       .add_argument("sparse_indices", "1D Tensor", "Sparse indices vector.")
   ```

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,43 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK_EQ(sparse_data->shape.size(), 1);

Review comment:
       This should be a user-facing error. Can you use `reporter->AssertEq`?







[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r583225876



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,53 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix addition.

Review comment:
       nit: say that this is in CSR format

##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -799,6 +799,36 @@ def sparse_dense_padded_strategy(attrs, inputs, out_type, target):
     raise NotImplementedError("sparse_dense_padded is only implemented for cuda")
 
 
+# sparse_add
+def wrap_compute_sparse_add(topi_compute):
+    """wrap sparse add topi compute"""
+
+    def _compute_sparse_add(attrs, inputs, out_type):
+        return [topi_compute(inputs[0], inputs[1], inputs[2], inputs[3])]
+
+    return _compute_sparse_add
+
+
+@override_native_generic_func("sparse_add_strategy")
+def sparse_add_strategy(attrs, inputs, out_type, target):
+    """sparse add generic strategy"""
+    logger.warning("sparse add is not optimized for this platform.")
+    strategy = _op.OpStrategy()
+    strategy.add_implementation(
+        wrap_compute_sparse_add(topi.nn.sparse_add),
+        wrap_topi_schedule(topi.generic.schedule_extern),
+        name="sparse_add.generic",
+    )
+    return strategy
+
+
+@generic_func
+def schedule_sparse_add(attrs, outs, target):
+    """schedule sparse_add"""
+    with target:
+        return topi.generic.schedule_sparse_add(outs)

Review comment:
       Remove this, given you are using `schedule_extern`.

##########
File path: python/tvm/topi/generic/nn.py
##########
@@ -730,6 +730,23 @@ def schedule_sparse_transpose(outs):
     return _default_schedule(outs, False)
 
 
+def schedule_sparse_add(outs):
+    """Schedule for sparse_add
+
+    Parameters
+    ----------
+    outs: Array of Tensor
+          The computation graph description of sparse_add
+          in the format of an array of tensors.
+
+    Returns
+    -------
+    sch: Schedule
+        The computation schedule for the op.
+    """
+    return _default_schedule(outs, False)

Review comment:
       Remove this, given you are using `schedule_extern`.

##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -196,5 +196,46 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);

Review comment:
       Add an error message.

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,72 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32
+
+    sparse_data : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indices : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indptr : tvm.te.Tensor
+        1-D with shape [M + 1] (CSR) or
+
+    Returns
+    -------
+    output : tvm.te.Tensor
+        2-D with shape [M, N]
+    """
+    # TODO(ANSHUMAN87): support BSR format too
+    assert len(sparse_data.shape) == 1, "only CSR format is supported"
+    return _sparse_add_csr(dense_data, sparse_data, sparse_indices, sparse_indptr)
+
+
+def _sparse_add_csr(dense_data_inp, sparse_data_inp, sparse_indices_inp, sparse_indptr_inp):
+    oshape = get_const_tuple(dense_data_inp.shape)
+
+    def _csr_add_ir(dense_data, sparse_data, sparse_indices, sparse_indptr, out_data):
+        irb = tvm.tir.ir_builder.create()
+        dense_data_ptr = irb.buffer_ptr(dense_data)
+        sparse_data_ptr = irb.buffer_ptr(sparse_data)
+        sparse_indices_ptr = irb.buffer_ptr(sparse_indices)
+        sparse_indptr_ptr = irb.buffer_ptr(sparse_indptr)
+
+        out_data_ptr = irb.buffer_ptr(out_data)
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:

Review comment:
       This loop could be parallel.
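   A sketch of that change, touching only the loop kind (assuming `ir_builder.for_range` accepts kind="parallel", as elsewhere in TOPI):

```python
# Same copy loop as quoted above, but with the rows distributed in parallel.
with irb.for_range(0, oshape[0], kind="parallel", name="row") as row:
    with irb.for_range(0, oshape[1], kind="serial", name="col") as col:
        out_data_ptr[row, col] = dense_data_ptr[row, col]
```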

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,72 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32
+
+    sparse_data : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or

Review comment:
       Remove the trailing "or"; there are some on the lines below too.

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,72 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32

Review comment:
       We should not hardcode float32.

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,72 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32
+
+    sparse_data : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indices : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indptr : tvm.te.Tensor
+        1-D with shape [M + 1] (CSR) or
+
+    Returns
+    -------
+    output : tvm.te.Tensor
+        2-D with shape [M, N]
+    """
+    # TODO(ANSHUMAN87): support BSR format too
+    assert len(sparse_data.shape) == 1, "only CSR format is supported"
+    return _sparse_add_csr(dense_data, sparse_data, sparse_indices, sparse_indptr)
+
+
+def _sparse_add_csr(dense_data_inp, sparse_data_inp, sparse_indices_inp, sparse_indptr_inp):
+    oshape = get_const_tuple(dense_data_inp.shape)
+
+    def _csr_add_ir(dense_data, sparse_data, sparse_indices, sparse_indptr, out_data):
+        irb = tvm.tir.ir_builder.create()
+        dense_data_ptr = irb.buffer_ptr(dense_data)
+        sparse_data_ptr = irb.buffer_ptr(sparse_data)
+        sparse_indices_ptr = irb.buffer_ptr(sparse_indices)
+        sparse_indptr_ptr = irb.buffer_ptr(sparse_indptr)
+
+        out_data_ptr = irb.buffer_ptr(out_data)
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:
+            with irb.for_range(0, oshape[1], kind="serial", name="col") as col:
+                out_data_ptr[row, col] = dense_data_ptr[row, col]
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:
+            offset = sparse_indptr_ptr[row]
+            diff = sparse_indptr_ptr[row + 1] - sparse_indptr_ptr[row]
+            with irb.for_range(0, diff, kind="serial", name="idx") as idx:
+                real_idx = offset + idx
+                col = sparse_indices_ptr[real_idx]
+                out_data_ptr[row, col] = sparse_data_ptr[real_idx] + out_data_ptr[row, col]
+
+        return irb.get()
+
+    return te.extern(
+        shape=oshape,
+        inputs=[dense_data_inp, sparse_data_inp, sparse_indices_inp, sparse_indptr_inp],
+        fcompute=lambda ins, outs: _csr_add_ir(ins[0], ins[1], ins[2], ins[3], outs[0]),
+        tag="sparse_add_csr",
+        dtype=[
+            dense_data_inp.dtype,
+            sparse_data_inp.dtype,
+            sparse_indices_inp.dtype,
+            sparse_indptr_inp.dtype,
+        ],
+        name="out",

Review comment:
       Use a better name here.
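   Stepping back, `_csr_add_ir` above amounts to a dense copy followed by a CSR scatter-add; a small NumPy reference (a readability sketch, not part of the PR):

```python
import numpy as np

def sparse_add_csr_ref(dense, data, indices, indptr):
    """NumPy equivalent of the two IR-builder loop nests above."""
    out = dense.copy()               # first loop nest: copy the dense input
    for row in range(out.shape[0]):  # second loop nest: scatter-add each CSR row
        for j in range(indptr[row], indptr[row + 1]):
            out[row, indices[j]] += data[j]
    return out
```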

##########
File path: tests/python/frontend/tensorflow/test_forward.py
##########
@@ -1915,6 +1915,52 @@ def test_forward_sparse_fill_empty_rows(
     )
 
 
+#######################################################################
+# tensorflow.sparse.add
+# ----------------------------------
+
+
+def _test_sparse_add(indices, values, A_shape, B_shape, dtype, flip=False):
+    """ One iteration of tf.sparse.add """
+
+    # TODO(ANSHUMAN87): support cuda
+    # TODO(ANSHUMAN87): support the case where both inputs are sparse
+
+    with tf.Graph().as_default():
+        A_sp = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=A_shape)
+        B = tf.placeholder(shape=B_shape, dtype=dtype, name="B")
+
+        # TODO(ANSHUMAN87): support user-input threshold values
+        if flip:
+            result = tf.sparse.add(B, A_sp, threshold=0)
+        else:
+            result = tf.sparse.add(A_sp, B, threshold=0)
+
+        B_np = np.random.uniform(high=5.0, size=B_shape).astype(dtype)
+
+        compare_tf_with_tvm([B_np], [B.name], result.name, no_gpu=True)
+
+
+def test_sparse_add():
+    """ sparse.add op test"""
+    ###################################################################
+    #
+    # Creating a SparseTensor requires 3 inputs, as below:
+    #    SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
+    #
+    # The above sparse tensor can be represented in dense form as below:
+    #    [[1, 0, 0, 0]
+    #     [0, 0, 2, 0]
+    #     [0, 0, 0, 0]]
+    #
+    # ------------------------------------------------------------------
+
+    _test_sparse_add([[0, 0], [1, 2]], [4.0, 8.0], [3, 4], [3, 4], "float32")
+    _test_sparse_add([[0, 0], [1, 2]], [4.0, 8.0], [3, 4], [3, 4], "float32", True)
+    _test_sparse_add([[0, 0], [1, 3], [4, 3]], [3.0, 6.0, 9.0], [5, 5], [5, 5], "float32")
+    _test_sparse_add([[0, 0], [1, 3], [4, 3]], [3.0, 6.0, 9.0], [5, 5], [5, 5], "float32", True)

Review comment:
       Can you change one of these to not be float32, so we can test handling of different dtypes?

##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -356,3 +356,72 @@ def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
     Unlike other TOPI functions, this function operates on both graph level and operator level.
     """
     return None
+
+
+def sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr):
+    """
+    Computes sparse-dense addition
+
+    Parameters
+    ----------
+    dense_data : tvm.te.Tensor
+        2-D with shape [M, N], float32
+
+    sparse_data : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indices : tvm.te.Tensor
+        1-D with shape [nnz] (CSR) or
+
+    sparse_indptr : tvm.te.Tensor
+        1-D with shape [M + 1] (CSR) or
+
+    Returns
+    -------
+    output : tvm.te.Tensor
+        2-D with shape [M, N]
+    """
+    # TODO(ANSHUMAN87): support BSR format too
+    assert len(sparse_data.shape) == 1, "only CSR format is supported"
+    return _sparse_add_csr(dense_data, sparse_data, sparse_indices, sparse_indptr)
+
+
+def _sparse_add_csr(dense_data_inp, sparse_data_inp, sparse_indices_inp, sparse_indptr_inp):
+    oshape = get_const_tuple(dense_data_inp.shape)
+
+    def _csr_add_ir(dense_data, sparse_data, sparse_indices, sparse_indptr, out_data):
+        irb = tvm.tir.ir_builder.create()
+        dense_data_ptr = irb.buffer_ptr(dense_data)
+        sparse_data_ptr = irb.buffer_ptr(sparse_data)
+        sparse_indices_ptr = irb.buffer_ptr(sparse_indices)
+        sparse_indptr_ptr = irb.buffer_ptr(sparse_indptr)
+
+        out_data_ptr = irb.buffer_ptr(out_data)
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:
+            with irb.for_range(0, oshape[1], kind="serial", name="col") as col:
+                out_data_ptr[row, col] = dense_data_ptr[row, col]
+
+        with irb.for_range(0, oshape[0], kind="serial", name="row") as row:

Review comment:
       You could probably parallelize this loop.

##########
File path: tests/python/topi/python/test_topi_sparse.py
##########
@@ -526,6 +526,31 @@ def test_sparse_dense_padded_alter_op():
             x = relay.build(tvm.IRModule.from_expr(f), target=tvm.target.Target("cuda"))
 
 
+def test_sparse_add_csr():

Review comment:
       I'd like to test the float64 dtype here, and int32 vs int64 for the indices.
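   One way to get that coverage is to parametrize the test; a hypothetical sketch, with `verify_sparse_add_csr` standing in for the actual test body:

```python
import pytest

# Hypothetical parametrization; the helper name is a placeholder.
@pytest.mark.parametrize("data_dtype", ["float32", "float64"])
@pytest.mark.parametrize("indices_dtype", ["int32", "int64"])
def test_sparse_add_csr(data_dtype, indices_dtype):
    verify_sparse_add_csr(data_dtype=data_dtype, indices_dtype=indices_dtype)
```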

##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -799,6 +799,36 @@ def sparse_dense_padded_strategy(attrs, inputs, out_type, target):
     raise NotImplementedError("sparse_dense_padded is only implemented for cuda")
 
 
+# sparse_add
+def wrap_compute_sparse_add(topi_compute):
+    """wrap sparse add topi compute"""
+
+    def _compute_sparse_add(attrs, inputs, out_type):
+        return [topi_compute(inputs[0], inputs[1], inputs[2], inputs[3])]
+
+    return _compute_sparse_add
+
+
+@override_native_generic_func("sparse_add_strategy")
+def sparse_add_strategy(attrs, inputs, out_type, target):
+    """sparse add generic strategy"""
+    logger.warning("sparse add is not optimized for this platform.")

Review comment:
       So you are adding no optimized implementations?







[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
ANSHUMAN87 commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r589977655



##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2148,6 +2148,53 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), D)[m, n]
+
+    where `as_dense` returns the dense equivalent of the given sparse matrix S,
+    and D is the given dense matrix.
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix addition
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix(CSR) for the matrix addition.

Review comment:
       Actually, I have plans to support both. We can remove it for now. Thanks!







[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

Posted by GitBox <gi...@apache.org>.
tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r587687230



##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -799,6 +799,36 @@ def sparse_dense_padded_strategy(attrs, inputs, out_type, target):
     raise NotImplementedError("sparse_dense_padded is only implemented for cuda")
 
 
+# sparse_add
+def wrap_compute_sparse_add(topi_compute):
+    """wrap sparse add topi compute"""
+
+    def _compute_sparse_add(attrs, inputs, out_type):
+        return [topi_compute(inputs[0], inputs[1], inputs[2], inputs[3])]
+
+    return _compute_sparse_add
+
+
+@override_native_generic_func("sparse_add_strategy")
+def sparse_add_strategy(attrs, inputs, out_type, target):
+    """sparse add generic strategy"""
+    logger.warning("sparse add is not optimized for this platform.")

Review comment:
       Are you going to add optimized implementations in a follow-on PR?



