Posted to dev@horn.apache.org by "Edward J. Yoon (JIRA)" <ji...@apache.org> on 2016/10/06 09:58:20 UTC

[jira] [Commented] (HORN-33) Add activation code for Horn

    [ https://issues.apache.org/jira/browse/HORN-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551504#comment-15551504 ] 

Edward J. Yoon commented on HORN-33:
------------------------------------

This would be a good contribution; I'm looking forward to your pull request, Mr. Kim!

> Add activation code for Horn
> ----------------------------
>
>                 Key: HORN-33
>                 URL: https://issues.apache.org/jira/browse/HORN-33
>             Project: Apache Horn
>          Issue Type: Improvement
>          Components: build
>            Reporter: Kwihoon Kim
>            Assignee: Kwihoon Kim
>            Priority: Minor
>
> This implements a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. [1]
> PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. This also includes a robust initialization method that specifically accounts for the rectifier nonlinearities.
> This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66%). 
> To our knowledge, our result is the first to surpass the reported human-level performance (5.1%) on this dataset.
> [1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE International Conference on Computer Vision. 2015.
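
For reference, below is a minimal, self-contained Java sketch of the PReLU activation and the He-style weight initialization described in the issue. It is illustrative only: it does not use Horn's actual neuron API, and the class and method names here are hypothetical.

import java.util.Random;

public class PReLUSketch {

  // PReLU forward pass: f(x) = x if x > 0, otherwise alpha * x.
  static double prelu(double x, double alpha) {
    return x > 0 ? x : alpha * x;
  }

  // Gradient of PReLU with respect to the input x.
  static double preluGradX(double x, double alpha) {
    return x > 0 ? 1.0 : alpha;
  }

  // Gradient of PReLU with respect to the learnable slope alpha
  // (nonzero only on the negative side of the input).
  static double preluGradAlpha(double x) {
    return x > 0 ? 0.0 : x;
  }

  // He initialization: weights drawn from a zero-mean Gaussian with
  // standard deviation sqrt(2 / fanIn), which keeps activation
  // variance stable through deep rectified networks.
  static double[] heInit(int fanIn, int fanOut, Random rng) {
    double std = Math.sqrt(2.0 / fanIn);
    double[] w = new double[fanIn * fanOut];
    for (int i = 0; i < w.length; i++) {
      w[i] = rng.nextGaussian() * std;
    }
    return w;
  }

  public static void main(String[] args) {
    double alpha = 0.25; // initial slope used in the paper
    System.out.println(prelu(-2.0, alpha)); // prints -0.5
    System.out.println(prelu(3.0, alpha));  // prints 3.0
  }
}

The 0.25 initial slope matches the default used in the paper; in a real layer, alpha would be a per-channel learnable parameter updated via preluGradAlpha during backpropagation.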



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)