Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/03/29 08:28:35 UTC

[GitHub] [tvm] wxyhv opened a new issue #7762: The asnumpy() function costs most of the time during inference; how to use numpy to compute directly rather than tvm NDArray?

wxyhv opened a new issue #7762:
URL: https://github.com/apache/tvm/issues/7762


   After tuning finishes, I build the TVM module, call set_input(), and run it. I then use GraphModule.get_output() to fetch the result, which is a "tvm.runtime.ndarray.NDArray".
   However, I need a 'numpy.ndarray' for the operations that follow, such as indexing and numpy.expand_dims(). So I call asnumpy() to convert the result, but asnumpy() costs a lot of time, even more than the TVM module's run time! This makes the whole inference much slower than the untuned deep-learning model.
   The following part of my inference code shows how asnumpy() dominates the inference time:
   ```python
   # set input data
   self.tuned_module.set_input("Input", data)
   # execute
   self.tuned_module.run()
   # get output
   # result = self.tuned_module.get_output(
   #     0, tvm.nd.empty(self.output_shape, "float32"))
   result = self.tuned_module.get_output(0)
   print(type(result))  # <class 'tvm.runtime.ndarray.NDArray'>

   t1 = time.time()
   logits = result.asnumpy()
   t2 = time.time()
   print("asnumpy_time:", t2 - t1)  # 255 ms
   print(type(logits))  # <class 'numpy.ndarray'>

   logits = logits[:, 0, :]
   logits = numpy.expand_dims(logits, 1)
   ```
   
   Consequently, my questions are:
   1. Why does TVM compute on "tvm.runtime.ndarray.NDArray" rather than on numpy arrays? The numpy library has many useful data-processing functions that "tvm.runtime.ndarray.NDArray" lacks.
   
   2. How can I compute with numpy directly rather than with tvm NDArray? This would save a lot of time over the whole deep-learning inference.
   
   3. How can I read the TVM result directly from its memory address, to avoid converting and copying it into an NDArray again (the input data was already converted to an NDArray by set_input())?
   
   Looking forward to your reply! 
   Thank you very much!
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] VoVAllen commented on issue #7762: The asnumpy() function costs most of the time during inference; how to use numpy to compute directly rather than tvm NDArray?

Posted by GitBox <gi...@apache.org>.
VoVAllen commented on issue #7762:
URL: https://github.com/apache/tvm/issues/7762#issuecomment-813070441


   You can try DLPack in TVM.
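   DLPack is a zero-copy tensor-exchange protocol; TVM's NDArray can be exported through it (the exact call names, such as `to_dlpack()`, depend on your TVM version, so check its docs). The sketch below illustrates only the zero-copy semantics using NumPy's own DLPack support (`numpy.from_dlpack`, available in NumPy >= 1.22); with TVM, the producer on the right-hand side would be the runtime NDArray rather than another numpy array:

   ```python
   import numpy as np

   # A producer array standing in for a tensor exported via DLPack.
   a = np.arange(6, dtype="float32").reshape(2, 3)

   # np.from_dlpack consumes any object implementing the DLPack protocol.
   # With TVM, the argument would be the NDArray returned by get_output().
   b = np.from_dlpack(a)

   # The two arrays share the same memory: no copy was made.
   a[0, 0] = 42.0
   print(b[0, 0])  # 42.0, because the write to `a` is visible through `b`
   ```

   Note that a zero-copy view only avoids the host-side copy; if the tensor lives on a GPU, the data still has to cross the device boundary at some point.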





[GitHub] [tvm] tqchen commented on issue #7762: The asnumpy() function costs most of the time during inference; how to use numpy to compute directly rather than tvm NDArray?

Posted by GitBox <gi...@apache.org>.
tqchen commented on issue #7762:
URL: https://github.com/apache/tvm/issues/7762#issuecomment-813505969


   Thanks @wxyhv, can you open a new thread on https://discuss.tvm.apache.org/ about this question? The time spent in asnumpy() could also be due to the fact that the previous computations are asynchronous, so the cost comes from the actual compute rather than from the copy.
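   To illustrate the asynchronous-execution point, here is a plain-Python sketch (a thread pool standing in for a device queue; none of these names are TVM API): when run() merely enqueues work and returns, a timer wrapped around the first *blocking* call charges the whole compute to that call. For trustworthy numbers, synchronize before reading the clock, or use TVM's built-in benchmarking helpers (such as time_evaluator) rather than wall-clock timing around asnumpy().

   ```python
   import time
   from concurrent.futures import ThreadPoolExecutor

   def device_compute():
       # Stands in for kernels queued asynchronously by run().
       time.sleep(0.2)
       return [1.0, 2.0, 3.0]

   pool = ThreadPoolExecutor(max_workers=1)

   t0 = time.perf_counter()
   future = pool.submit(device_compute)   # like run(): returns immediately
   launch_time = time.perf_counter() - t0

   t0 = time.perf_counter()
   result = future.result()               # like asnumpy(): blocks until done
   fetch_time = time.perf_counter() - t0

   # launch_time is near zero; fetch_time absorbs almost the full 0.2 s,
   # so timing only the fetch misattributes the compute cost to the copy.
   print(f"launch: {launch_time:.3f}s, fetch: {fetch_time:.3f}s")
   pool.shutdown()
   ```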





[GitHub] [tvm] tqchen closed issue #7762: The asnumpy() function costs most of the time during inference; how to use numpy to compute directly rather than tvm NDArray?

Posted by GitBox <gi...@apache.org>.
tqchen closed issue #7762:
URL: https://github.com/apache/tvm/issues/7762


   

