Posted to dev@lucene.apache.org by "Kamuela Lau (JIRA)" <ji...@apache.org> on 2018/09/21 02:39:00 UTC

[jira] [Comment Edited] (SOLR-12785) Add test for activation functions in NeuralNetworkModel

    [ https://issues.apache.org/jira/browse/SOLR-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16622973#comment-16622973 ] 

Kamuela Lau edited comment on SOLR-12785 at 9/21/18 2:38 AM:
-------------------------------------------------------------

A couple of comments regarding the patch:
 # I moved the default implementations to the Activation interface and implemented them as enums (the singleton-via-enum pattern described in "Effective Java", Item 3; see the first sketch below). I chose an enum because an activation is just a stateless function: layers that share the same activation can then share a single object (e.g. three layers with sigmoid activation would share one object, instead of creating three as the current code does). This also made the functions easier to test in isolation. If you would prefer that the implementations not be singletons, or not live in an enum class, please let me know; any comments are appreciated.
 # I also attempted to write the test for the activation functions without changing the current implementation of DefaultLayer.setActivation(); however, in doing so I found that I had to construct a NeuralNetworkModel object with one input, one output, weight 1 and bias 0, so that the model's output is exactly the activation applied to the input (see the second sketch below). See the attached test-no-activation-change.txt.
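
To illustrate item 1, here is a minimal sketch of the enum-singleton pattern, assuming an Activation interface with a single float activate(float in) method; the enum and constant names are illustrative, not the identifiers from the patch:

{code:java}
// Minimal sketch of the enum-singleton pattern (Effective Java, Item 3).
// "DefaultActivations" and its constants are illustrative names, not
// the ones used in the attached patch.
public interface Activation {
  float activate(float in);
}

// Each enum constant is a stateless singleton: three layers configured
// with sigmoid activation all share DefaultActivations.SIGMOID, rather
// than holding three separate objects.
enum DefaultActivations implements Activation {
  SIGMOID {
    @Override
    public float activate(float in) {
      return 1.0f / (1.0f + (float) Math.exp(-in));
    }
  },
  RELU {
    @Override
    public float activate(float in) {
      return in < 0.0f ? 0.0f : in;
    }
  },
  IDENTITY {
    @Override
    public float activate(float in) {
      return in;
    }
  }
}
{code}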

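And here is a self-contained sketch of the workaround in item 2, showing why a network with one input, one output, weight 1 and bias 0 reduces to the activation itself; forward() below is a stand-in for a single dense layer, not the actual DefaultLayer/NeuralNetworkModel code:

{code:java}
// Self-contained sketch of the 1x1 "pass-through" network from item 2.
public class PassThroughNetworkSketch {

  interface Activation {
    float activate(float in);
  }

  // One input, one output: the dot product degenerates to weight * input.
  static float forward(float input, float weight, float bias, Activation act) {
    return act.activate(weight * input + bias);
  }

  public static void main(String[] args) {
    Activation sigmoid = in -> 1.0f / (1.0f + (float) Math.exp(-in));
    float x = 0.5f;
    // With weight 1 and bias 0 the layer passes its input straight to
    // the activation, so the network output equals sigmoid(x) and each
    // activation can be verified through the scoring path.
    System.out.println(forward(x, 1.0f, 0.0f, sigmoid)); // ~0.6224593
    System.out.println(sigmoid.activate(x));             // same value
  }
}
{code}
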
Any comments or advice would be greatly appreciated!


> Add test for activation functions in NeuralNetworkModel
> -------------------------------------------------------
>
>                 Key: SOLR-12785
>                 URL: https://issues.apache.org/jira/browse/SOLR-12785
>             Project: Solr
>          Issue Type: Test
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: contrib - LTR
>            Reporter: Kamuela Lau
>            Priority: Minor
>         Attachments: SOLR-12785.patch, test-no-activation-change.txt
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org