Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/11/03 13:42:55 UTC

[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6685: [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow

tkonolige commented on a change in pull request #6685:
URL: https://github.com/apache/incubator-tvm/pull/6685#discussion_r516151334



##########
File path: python/tvm/relay/frontend/tensorflow.py
##########
@@ -903,6 +903,46 @@ def _impl(inputs, attr, params, mod):
     return _impl
 
 
+def _sparse_tensor_dense_matmul():
+    # Sparse utility from scipy
+    from scipy.sparse import csr_matrix
+
+    def _impl(inputs, attr, params, mod):
+        assert len(inputs) == 4, "There should be 4 input tensors"
+
+        indices_tensor = _infer_value(inputs[0], params, mod).asnumpy()
+        values_tensor = _infer_value(inputs[1], params, mod).asnumpy()
+        dense_shape_tensor = _infer_value(inputs[2], params, mod).asnumpy()
+
+        data = inputs[3]
+
+        rows = [x[0] for x in indices_tensor]
+        cols = [x[1] for x in indices_tensor]
+
+        # Create a scipy sparse matrix in CSR format
+        weight_sp = csr_matrix(
+            (values_tensor, (rows, cols)), shape=tuple(dense_shape_tensor.tolist())
+        )
+        weight_sp = csr_matrix(weight_sp.transpose())
+
+        weight_data = _expr.const(weight_sp.data, weight_sp.data.dtype)
+        weight_indptrs = _expr.const(weight_sp.indptr, weight_sp.indptr.dtype)
+        weight_indices = _expr.const(weight_sp.indices, weight_sp.indices.dtype)
+
+        ret = _op.nn.sparse_dense(data, [weight_data, weight_indices, weight_indptrs])
+
+        # If both adjoint flags are set, the first input was dense and the second was sparse
+        # TODO: Support the other adjoint combinations too
+        if attr.get("adjoint_a") and attr.get("adjoint_b"):
+            ret = _op.transpose(ret)
+        else:
+            raise tvm.error.OpAttributeUnImplemented("Adjoint option is not supported yet.")

Review comment:
       Could you make this error message a little more specific? Something like: "Only tf.sparse.sparse_dense_matmul with adjoint_a=True and adjoint_b=True is supported, but adjoint_a=X and adjoint_b=Y were supplied."
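       A minimal sketch of how that message could be built inside `_impl`, assuming the attribute names used in the diff above; the wording is only the reviewer's suggestion, not merged code:
   
           adjoint_a = attr.get("adjoint_a", False)
           adjoint_b = attr.get("adjoint_b", False)
           if adjoint_a and adjoint_b:
               # Both adjoints set: the original graph computed dense x sparse,
               # so transpose the sparse_dense result back.
               ret = _op.transpose(ret)
           else:
               raise tvm.error.OpAttributeUnImplemented(
                   "Only tf.sparse.sparse_dense_matmul with adjoint_a=True and "
                   "adjoint_b=True is supported, but adjoint_a={} and "
                   "adjoint_b={} were supplied.".format(adjoint_a, adjoint_b)
               )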

##########
File path: python/tvm/topi/cuda/sparse.py
##########
@@ -367,9 +367,14 @@ def _alter_sparse_dense_layout(_attrs, inputs, _tinfos, _out_type):
         and isinstance(inputs[2], relay.Constant)
         and isinstance(inputs[3], relay.Constant)
     ):
-        sparse_matrix = sp.bsr_matrix(
-            (inputs[1].data.asnumpy(), inputs[2].data.asnumpy(), inputs[3].data.asnumpy())
-        )
+        if len(inputs[1].data.asnumpy().shape) == 1:

Review comment:
       I'm still confused about why this is necessary. The scipy docs show an example where a 1D data array is passed to `bsr_matrix`. I guess it really doesn't matter, though.
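       For reference, the scipy docs example presumably being referred to uses the COO-style constructor, which does accept a 1D data array:
   
           import numpy as np
           from scipy.sparse import bsr_matrix

           row = np.array([0, 0, 1, 2, 2, 2])
           col = np.array([0, 2, 2, 0, 1, 2])
           data = np.array([1, 2, 3, 4, 5, 6])   # 1D data is accepted here
           bsr_matrix((data, (row, col)), shape=(3, 3)).toarray()
           # array([[1, 0, 2],
           #        [0, 0, 3],
           #        [4, 5, 6]])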

##########
File path: python/tvm/topi/cuda/sparse.py
##########
@@ -367,9 +367,14 @@ def _alter_sparse_dense_layout(_attrs, inputs, _tinfos, _out_type):
         and isinstance(inputs[2], relay.Constant)
         and isinstance(inputs[3], relay.Constant)
     ):
-        sparse_matrix = sp.bsr_matrix(
-            (inputs[1].data.asnumpy(), inputs[2].data.asnumpy(), inputs[3].data.asnumpy())
-        )
+        if len(inputs[1].data.asnumpy().shape) == 1:

Review comment:
       It seems like the different constructors have different behavior. Interesting.
   
   Thanks for clearing that up.
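       A small sketch of the constructor difference in question, assuming it is the (data, indices, indptr) form that requires block-shaped data: the COO-style form above takes a 1D array, while the BSR-native form wants data of shape (num_blocks, R, C), so 1D values have to be reshaped first. (This is one possible workaround for illustration, not necessarily what the PR branch does.)
   
           import numpy as np
           import scipy.sparse as sp

           indptr = np.array([0, 2, 3, 6])
           indices = np.array([0, 2, 2, 0, 1, 2])
           data = np.array([1, 2, 3, 4, 5, 6])       # 1D, one value per block

           # sp.bsr_matrix((data, indices, indptr)) expects 3-D data, so give
           # each value an explicit 1x1 block before constructing the matrix.
           m = sp.bsr_matrix((data.reshape(-1, 1, 1), indices, indptr), shape=(3, 3))
           # m.toarray() matches the COO-constructed matrix shown earlier.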




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org