Posted to issues@madlib.apache.org by "Frank McQuillan (JIRA)" <ji...@apache.org> on 2017/08/15 20:38:00 UTC

[jira] [Issue Comment Deleted] (MADLIB-413) Neural Networks - MLP - Phase 1

     [ https://issues.apache.org/jira/browse/MADLIB-413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Frank McQuillan updated MADLIB-413:
-----------------------------------
    Comment: was deleted

(was: Github user fmcquillan99 commented on the issue:

    https://github.com/apache/incubator-madlib/pull/167
  
    Here are some suggested changes/additions:
    
    1) Change release date to Fri Aug 18 which might be a better estimate.
    
    2) MLP
    Change
    New Module: Multilayer Perceptron (MADLIB-413)
    to
    New Module: Multilayer Perceptron (MADLIB-413, MADLIB-1134)
    
    3) APSP
    Change
    New module: Graph - All Pairs Shortest Path (MADLIB-1099)
    to
    New module: Graph - All Pairs Shortest Path (MADLIB-1072, MADLIB-1099, MADLIB-1106)
    
    4) WCC
    Change
    New module: Graph - Weakly Connected Components (MADLIB-1071, MADLIB-1083)
    to 
    New module: Graph - Weakly Connected Components (MADLIB-1071, MADLIB-1083, MADLIB-1101)
    
    5) Summary
    Change
    Summary: Allow user to determine the number of columns per run (MADLIB-1117)
    to
    Summary: 
      - Allow user to determine the number of columns per run (MADLIB-1117)
      - Improve efficiency of computation time by ~35% (MADLIB-1104)
    
    6) TLP
    Updates for Apache Top Level Project readiness (MADLIB-1130, MADLIB-1133)
    * what about MADLIB-1132 and MADLIB-1142?
    * also add the epic MADLIB-1112
    
    7) Train-test split
    Add:
     New Module: Sample - Train-test split (MADLIB-1119)
    
    8) under the bugs section:
    Change
    - Fix the data scaling bug with normalization
    to
    - Fix the data scaling bug with normalization (MADLIB-1094)
    
    9) under the bugs section:
    change:
    Update 'optimizer' GUC only if editable
    to
    Update 'optimizer' GUC only if editable (MADLIB-1109)
    
    10) Change
    Promote cardinality estimators to top level module from early stage 
    to
    Promote cardinality estimators to top level module from early stage (MADLIB-1120)
    
    11) Under bugs section:
    change
    Graph: Quoted output table name does not work for some modules
    to
    Graph: Quoted output table name does not work for some modules (MADLIB-1137)
)

> Neural Networks - MLP - Phase 1
> -------------------------------
>
>                 Key: MADLIB-413
>                 URL: https://issues.apache.org/jira/browse/MADLIB-413
>             Project: Apache MADlib
>          Issue Type: New Feature
>          Components: Module: Neural Networks
>            Reporter: Caleb Welton
>            Assignee: Cooper Sloan
>             Fix For: v1.12
>
>         Attachments: screenshot-1.png
>
>
> Multilayer perceptron with backpropagation
> Modules:
> * mlp_classification
> * mlp_regression
> Interface
> {code}
> source_table VARCHAR
> output_table VARCHAR
> independent_varname VARCHAR, -- Column name for input features; should be a real-valued array
> dependent_varname VARCHAR, -- Column name for target values; should be a real-valued array of size 1 or greater
> hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (may be empty or NULL, in which case there are no hidden layers)
> optimizer_params VARCHAR, -- Specified below
> weights VARCHAR, -- Column name for weights; weights the loss for each input vector. Column should contain positive real values
> activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
> grouping_cols
> )
> {code}
> where
> {code}
> optimizer_params: -- eg "step_size=0.5, n_tries=5"
> {
> step_size DOUBLE PRECISION, -- Learning rate
> n_iterations INTEGER, -- Number of iterations per try
> n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima
> tolerance DOUBLE PRECISION, -- Training stops when the change in the weights falls below this value (or when n_iterations is reached)
> }
> {code}
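>
> A hypothetical usage sketch (not part of this ticket): assuming the function is exposed as madlib.mlp_classification with the parameter order above, and that an illustrative table iris_data with an array-valued feature column exists, a call might look like:
> {code}
> SELECT madlib.mlp_classification(
>     'iris_data',    -- source_table (illustrative name)
>     'mlp_model',    -- output_table
>     'attributes',   -- independent_varname: real-valued array column
>     'class',        -- dependent_varname
>     ARRAY[5, 5],    -- hidden_layer_sizes: two hidden layers of 5 units each
>     'step_size=0.01, n_iterations=500, n_tries=3, tolerance=0.0001',
>     NULL,           -- weights: no per-row loss weighting
>     'tanh',         -- activation_function (any prefix, e.g. 't', also works)
>     NULL            -- grouping_cols: train a single model over all rows
> );
> {code}
> Table and column names here are made up for illustration; the actual signature is whatever the merged module ships with.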



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)