Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/06/21 17:08:23 UTC

[GitHub] [tvm] ganler edited a comment on pull request #8267: [Bugfix, CuDNN] fix segfault when cudnnDestroy called with destroyed cuda context

ganler edited a comment on pull request #8267:
URL: https://github.com/apache/tvm/pull/8267#issuecomment-865201694


   > Thanks for the contribution! It is definitely a classic bug on program exit, when the CUDA context gets destroyed before the handle is released.
   > 
   > I do believe that a better way to handle this issue is to "let it leak": just do not destroy the handle on exit, because the OS is going to handle it anyway.
   > 
   > CC: @icemelon9
   
   Agreed. PyTorch also takes the "let-it-leak" approach, because its handles are kept in a pool and reused across threads. In TVM, however, the handle's cleanup runs on each thread's exit (the handle has `thread_local` lifetime), so the leak could be significant if TVM kills and re-creates many threads over the lifetime of the program. I'd like to confirm whether that is actually the case; if not, "let-it-leak" is definitely the best approach.
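
   To make the trade-off concrete, here is a minimal sketch (not TVM's actual code) of the two strategies discussed above, assuming a per-thread cuDNN handle with `thread_local` lifetime as described in this thread; the class and function names (`ThreadLocalCuDNNHandle`, `GetPerThreadHandle`) are hypothetical.

   ```cpp
   #include <cudnn.h>

   // Hypothetical wrapper around a per-thread cuDNN handle.
   class ThreadLocalCuDNNHandle {
    public:
     ThreadLocalCuDNNHandle() { cudnnCreate(&handle_); }

     // Problematic variant: if this thread_local object is destroyed after the
     // CUDA context has already been torn down at process exit, cudnnDestroy
     // touches a dead context and can segfault.
     //   ~ThreadLocalCuDNNHandle() { cudnnDestroy(handle_); }

     // "Let-it-leak" variant: skip cudnnDestroy and let the OS reclaim the
     // resources at process exit. The concern raised above is that this
     // destructor also runs on every *thread* exit, so if the program creates
     // and destroys many threads, the leaked handles accumulate.
     ~ThreadLocalCuDNNHandle() { /* intentionally no cudnnDestroy */ }

     cudnnHandle_t Get() const { return handle_; }

    private:
     cudnnHandle_t handle_;
   };

   // One handle per thread, constructed lazily on first use in that thread.
   cudnnHandle_t GetPerThreadHandle() {
     static thread_local ThreadLocalCuDNNHandle entry;
     return entry.Get();
   }
   ```

   Under this sketch, "let-it-leak" is safe if threads are long-lived (the handles are reclaimed once at process exit), but the question above stands: if TVM frequently tears threads down and spawns new ones, each exited thread leaves one unreleased handle behind.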

