Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/01/17 12:56:29 UTC

[GitHub] larroy commented on a change in pull request #13857: float32 -> float16 cast consistency across implementations
URL: https://github.com/apache/incubator-mxnet/pull/13857#discussion_r248657815
 
 

 ##########
 File path: tests/python/unittest/test_operator.py
 ##########
 @@ -3994,6 +3994,44 @@ def test_cast():
             assert_almost_equal(exe.grad_arrays[0].asnumpy(), X.astype(dsttype).astype(srctype), rtol=1e-3, atol=1e-5)
 
 
+# Test requires all platforms to round float32->float16 with the same
+# round-to-nearest-even policy.
+@with_seed()
+def test_cast_float32_to_float16():
+    fp16_fraction_bits = 10
+    fp32_fraction_bits = 23
+    fp32_exp_min = -126
+    fp32_exp_max = 127
+    # generate test cases in the vicinity of representable float16 mantissas
+    # and mid-way between them, but over the full range of float32 exponents.
+    def get_data():
+        for sign_bit in [0, 1]:
+            for exponent in range(fp32_exp_min - fp32_fraction_bits - 1, fp32_exp_max + 2):
+                denominator = 2**(fp16_fraction_bits + 1)
+                for numerator in range(0, denominator):
+                    for y in [-1.0, 0.0, 1.0]:
+                        small_delta = y / 2**fp32_fraction_bits
+                        val = (-1.0)**sign_bit * 2.0**exponent * (1.0 +
 
 Review comment:
   nit: Could we also break the line after `(1.0 +` onto the next line for readability?
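
For context (not part of the PR diff): the consistency property this test checks can be illustrated with a minimal NumPy sketch. It assumes NumPy's float32→float16 cast follows the IEEE 754 default round-to-nearest-even, which is the policy the test expects every backend to share:

```python
import numpy as np

# float16 has 10 fraction bits, so near 1.0 its spacing (ulp) is 2**-10.
# A float32 value exactly midway between two adjacent float16 values is a
# "tie"; round-to-nearest-even resolves it toward the even mantissa.

# 1 + 2**-11 lies exactly between float16 1.0 (mantissa 0, even) and
# 1 + 2**-10 (mantissa 1, odd): the tie rounds DOWN to 1.0.
tie_down = np.float32(1.0 + 2.0**-11)
assert np.float16(tie_down) == np.float16(1.0)

# 1 + 3*2**-11 lies exactly between 1 + 2**-10 (mantissa 1, odd) and
# 1 + 2**-9 (mantissa 2, even): the tie rounds UP to 1 + 2**-9.
tie_up = np.float32(1.0 + 3.0 * 2.0**-11)
assert np.float16(tie_up) == np.float16(1.0 + 2.0**-9)

# Nudging just off the midpoint (the role of small_delta in the test)
# breaks the tie and rounds toward the nearer neighbour.
off_tie = np.float32(1.0 + 2.0**-11 + 2.0**-23)
assert np.float16(off_tie) == np.float16(1.0 + 2.0**-10)
```

This is why the test sweeps numerators over `2**(fp16_fraction_bits + 1)` steps: it hits both the representable float16 mantissas and the exact midpoints between them, where any backend that rounds differently (e.g. ties-away-from-zero) would diverge.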

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services