Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/08/08 01:30:19 UTC
[GitHub] haojin2 commented on a change in pull request #12059: Support selu activation function
URL: https://github.com/apache/incubator-mxnet/pull/12059#discussion_r208432600
##########
File path: tests/python/unittest/test_operator.py
##########
@@ -819,6 +819,37 @@ def fprelu_grad(x, y, gamma):
         check_symbolic_backward(y, [xa, gam_full], [np.ones(shape), np.ones(gam_full.shape)],
                                 [g_xa_full, g_gam_full], rtol=rtol, atol=atol, dtype=dtype)
+@with_seed()
+def test_selu():
+    def fselu(x):
+        neg_indices = x < 0
+        out = x.copy()
+        out[neg_indices] = 1.6732632423543772848170429916717 * np.expm1(out[neg_indices])
+        return out * 1.0507009873554804934193349852946
+    def fselu_grad(grad, x, y):
+        neg_indices = x < 0
+        out = np.ones(x.shape).astype(x.dtype)
+        # For x < 0: d/dx selu(x) = scale * alpha * exp(x) = y + scale * alpha,
+        # so undo the scale on y before the final multiply by scale.
+        out[neg_indices] = y[neg_indices] / 1.0507009873554804934193349852946 + 1.6732632423543772848170429916717
+        return out * 1.0507009873554804934193349852946
+
+    shape = (3, 4)
+    x = mx.sym.Variable("x")
+    y = mx.sym.LeakyReLU(data=x, act_type="selu")
+    for dtype in [np.float16, np.float32, np.float64]:
+        xa = np.random.uniform(low=-0.1, high=0.1, size=shape).astype(dtype)
+        eps = 1e-4
+        rtol = 1e-2
+        atol = 1e-4
+        xa[abs(xa) < eps] = 0.1
+        ya = fselu(xa)
+        ga = fselu_grad(np.ones(shape).astype(dtype), xa, ya)
+        # Skip numeric check for float16 type to get rid of flaky behavior
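For context on the `xa[abs(xa) < eps] = 0.1` line above: SELU's derivative jumps at 0, so a central-difference gradient check that straddles 0 mixes the two one-sided slopes and matches neither. A minimal NumPy sketch of that effect (helper names here are illustrative, not from the PR):

```python
import numpy as np

# SELU constants (same values as in the test above)
ALPHA = 1.6732632423543772848170429916717
SCALE = 1.0507009873554804934193349852946

def selu(x):
    # scale * (x if x > 0 else alpha * (exp(x) - 1)), elementwise
    return np.where(x > 0, SCALE * x, SCALE * ALPHA * np.expm1(x))

def selu_grad(x):
    # Analytic derivative: scale for x > 0, scale * alpha * exp(x) otherwise
    return np.where(x > 0, SCALE, SCALE * ALPHA * np.exp(x))

def numeric_grad(f, x, eps=1e-4):
    # Central differences, using the same eps as the test
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Away from 0, numeric and analytic gradients agree within the test tolerances
x = np.array([-0.5, -0.1, 0.1, 0.5])
assert np.allclose(numeric_grad(selu, x), selu_grad(x), rtol=1e-2, atol=1e-4)

# At the kink the central difference lands between the one-sided slopes
# (about SCALE*ALPHA on the left, SCALE on the right), matching neither,
# which is why the test nudges inputs with |x| < eps away to 0.1.
g0 = numeric_grad(selu, np.array([0.0]))[0]
```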
Review comment:
It's related to the low precision of fp16; alternatively, we could increase rtol and atol to address it.
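To make the fp16 point concrete, here is a minimal NumPy sketch (independent of MXNet; `selu` below mirrors the `fselu` reference in the diff) comparing an all-fp16 evaluation against an fp64 reference:

```python
import numpy as np

# SELU constants, as in the test
ALPHA = 1.6732632423543772848170429916717
SCALE = 1.0507009873554804934193349852946

def selu(x):
    # Same reference computation as fselu above; preserves the input dtype
    out = x.copy()
    neg = x < 0
    out[neg] = ALPHA * np.expm1(out[neg])
    return out * SCALE

x64 = np.linspace(-0.1, 0.1, 12)        # fp64 inputs in the test's range
y_ref = selu(x64)                       # fp64 reference result
y_f16 = selu(x64.astype(np.float16))    # every step rounded to fp16

err = np.max(np.abs(y_f16.astype(np.float64) - y_ref))
# fp16 keeps only ~11 significand bits (~3 decimal digits), so err can be
# on the order of the test's atol=1e-4; fp32 and fp64 pass the same check easily.
print(err)
```

Loosening rtol/atol for fp16 keeps the numeric check at the cost of strictness; skipping it entirely, as the diff does, avoids the flakiness at the cost of coverage.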
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services