Posted to commits@mxnet.apache.org by zh...@apache.org on 2018/06/17 00:23:23 UTC
[incubator-mxnet] branch master updated: bump up rtol for fp16 case of test_sgd (#11246)
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 7210c2c bump up rtol for fp16 case of test_sgd (#11246)
7210c2c is described below
commit 7210c2cb750f666f6143b34b02635aadbf37ea13
Author: Hao Jin <ha...@users.noreply.github.com>
AuthorDate: Sat Jun 16 17:22:47 2018 -0700
bump up rtol for fp16 case of test_sgd (#11246)
---
tests/python/unittest/test_optimizer.py | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/tests/python/unittest/test_optimizer.py b/tests/python/unittest/test_optimizer.py
index 0540736..fba10fb 100644
--- a/tests/python/unittest/test_optimizer.py
+++ b/tests/python/unittest/test_optimizer.py
@@ -230,7 +230,10 @@ def test_sgd():
                     ('multi_precision' not in kwarg or
                      not kwarg['multi_precision'])):
                 continue
-            compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape, dtype)
+            if dtype == np.float16:
+                compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape, dtype, rtol=1e-3)
+            else:
+                compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape, dtype)
             # test operator fallback on cpu
             if dtype != np.float16:
                 compare_optimizer(opt1(**kwarg), opt2(**kwarg), shape[:2],
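[Editor's note: the commit itself gives no rationale. As a rough illustration of why a looser rtol is reasonable for fp16 (this sketch is not part of the commit and uses only NumPy): float16 carries a 10-bit mantissa, so the relative spacing of adjacent representable values near 1.0 is about 2**-10, i.e. just under 1e-3, and any tolerance tighter than that can fail on pure rounding noise.]

```python
import numpy as np

# float16 keeps a 10-bit mantissa, so the relative gap between adjacent
# representable values near 1.0 (machine epsilon) is 2**-10 ~= 9.77e-4.
eps16 = np.finfo(np.float16).eps

# Two fp16 numbers that differ by a single unit in the last place:
a = np.float16(1.0)
b = np.float16(1.0 + eps16)

# A tolerance like rtol=1e-4 is tighter than fp16 can resolve, so even
# a one-ULP rounding difference fails the comparison...
print(np.allclose(a, b, rtol=1e-4, atol=0))  # False
# ...while rtol=1e-3 sits just above one ULP and accepts it:
print(np.allclose(a, b, rtol=1e-3, atol=0))  # True
```

This is why fp16 test comparisons commonly use rtol on the order of 1e-3 while fp32 tests can afford much tighter bounds.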
--
To stop receiving notification emails like this one, please contact
zhasheng@apache.org.