Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/06/13 06:09:41 UTC
[GitHub] [incubator-mxnet] ZhennanQin commented on issue #15118: Conversion
from FP32 model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#issuecomment-501562587
Thanks @anirudh2290 for providing this fantastic feature. I see that extensibility for bfloat16 support is well considered. Here's a question about the API design:
```
def convert_hybrid_block(block, target_dtype="float16", target_dtype_ops=None,
                         fp32_ops=None, conditional_fp32_ops=None,
                         excluded_sym_names=None, ctx=gpu(0)):
```
This API converts a Gluon hybrid block to low precision, and it accepts `excluded_sym_names` to exclude certain layers from conversion. My question is: since Gluon layers don't have layer names, how would you expect users to fill in this parameter?
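To make the question concrete, here is a minimal pure-Python sketch of how an `excluded_sym_names` filter is typically applied during a mixed-precision conversion pass. The function and the layer names are hypothetical, invented for illustration; they are not MXNet's actual implementation or generated names. The point is that the user can only populate the exclusion list if the framework exposes the internal symbol names up front:

```python
def convert_to_low_precision(layers, target_dtype="float16",
                             excluded_sym_names=None):
    """Hypothetical sketch: return a mapping of layer name -> dtype.

    `layers` maps each internal symbol name to its current dtype.
    Any layer whose name appears in `excluded_sym_names` stays in
    float32; everything else is converted to `target_dtype`.
    """
    excluded = set(excluded_sym_names or [])
    return {
        name: "float32" if name in excluded else target_dtype
        for name in layers
    }

# Names below mimic the auto-generated style of hybridized blocks;
# they are made up for this example.
layers = {
    "hybridsequential0_conv0_fwd": "float32",
    "hybridsequential0_dense0_fwd": "float32",
    "hybridsequential0_softmax0_fwd": "float32",
}
converted = convert_to_low_precision(
    layers, excluded_sym_names=["hybridsequential0_softmax0_fwd"])
# The excluded layer keeps float32; the rest become float16.
```

The sketch shows why the question matters: without a way to enumerate the internal symbol names of a Gluon block before (or after) hybridization, the user has nothing to put into `excluded_sym_names`.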
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services