Posted to dev@madlib.apache.org by GitBox <gi...@apache.org> on 2019/05/29 23:58:34 UTC

[GitHub] [madlib] fmcquillan99 commented on issue #398: Updated the code, state_size was pointing to the wrong value

fmcquillan99 commented on issue #398: Updated the code, state_size was pointing to the wrong value
URL: https://github.com/apache/madlib/pull/398#issuecomment-497150387
 
 
   No tolerance (`tolerance=0`):
   
   ```
   madlib=# SELECT madlib.mlp_classification(
   madlib(#     'iris_data',      -- Source table
   madlib(#     'mlp_model',      -- Destination table
   madlib(#     'attributes',     -- Input features
   madlib(#     'class_text',     -- Label
   madlib(#     ARRAY[5],         -- Number of units per layer
   madlib(#     'learning_rate_init=0.003,
   madlib'#     n_iterations=25,
   madlib'#     tolerance=0',     -- Optimizer params
   madlib(#     'tanh',           -- Activation function
   madlib(#     NULL,             -- Default weight (1)
   madlib(#     FALSE,            -- No warm start
   madlib(#     TRUE              -- Verbose
   madlib(# );
   INFO:  Iteration: 1, Loss: <1.57403083382>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 2, Loss: <1.18209701678>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 3, Loss: <0.805838101786>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 4, Loss: <0.456743716108>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 5, Loss: <0.269013324379>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 6, Loss: <0.176825519276>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 7, Loss: <0.12831471301>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 8, Loss: <0.0995580174594>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 9, Loss: <0.0808205589525>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 10, Loss: <0.0679497236685>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 11, Loss: <0.058451236734>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 12, Loss: <0.0510716510938>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 13, Loss: <0.0453356242116>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 14, Loss: <0.040657636203>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 15, Loss: <0.0368149574231>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 16, Loss: <0.0336085514468>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 17, Loss: <0.0309155266308>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 18, Loss: <0.0286056560803>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 19, Loss: <0.0266028934429>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 20, Loss: <0.0248412801724>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 21, Loss: <0.023296777543>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 22, Loss: <0.0219201410755>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 23, Loss: <0.0206909609214>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 24, Loss: <0.0195874516687>
   CONTEXT:  PL/Python function "mlp_classification"
    mlp_classification 
   --------------------
    
   (1 row)
   ```
   
   
   `tolerance=0.1`:
   
   ```
   madlib=# SELECT madlib.mlp_classification(
   madlib(#     'iris_data',      -- Source table
   madlib(#     'mlp_model',      -- Destination table
   madlib(#     'attributes',     -- Input features
   madlib(#     'class_text',     -- Label
   madlib(#     ARRAY[5],         -- Number of units per layer
   madlib(#     'learning_rate_init=0.003,
   madlib'#     n_iterations=25,
   madlib'#     tolerance=0.1',     -- Optimizer params
   madlib(#     'tanh',           -- Activation function
   madlib(#     NULL,             -- Default weight (1)
   madlib(#     FALSE,            -- No warm start
   madlib(#     TRUE              -- Verbose
   madlib(# );
   INFO:  Iteration: 1, Loss: <1.47220611074>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 2, Loss: <1.24578776394>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 3, Loss: <0.887573516227>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 4, Loss: <0.530981965126>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 5, Loss: <0.306411380721>
   CONTEXT:  PL/Python function "mlp_classification"
   INFO:  Iteration: 6, Loss: <0.195035410704>
   CONTEXT:  PL/Python function "mlp_classification"
    mlp_classification 
   --------------------
    
   (1 row)
   ```
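
   The early stop in the `tolerance=0.1` run above is consistent with a simple check on the change in loss between consecutive iterations. A minimal sketch of that stopping rule, using hypothetical loss values and an absolute-difference criterion (an assumption for illustration, not MADlib's actual code path):

   ```python
   def run_with_tolerance(losses, tolerance, n_iterations):
       """Return the number of iterations actually executed before
       the loss improvement drops below `tolerance` (or the budget runs out)."""
       prev = None
       for i, loss in enumerate(losses[:n_iterations], start=1):
           # Converged: improvement since the last iteration is below tolerance.
           if prev is not None and abs(prev - loss) < tolerance:
               return i
           prev = loss
       return min(len(losses), n_iterations)  # used the full iteration budget

   # Synthetic, steadily shrinking losses for illustration:
   losses = [1.5, 1.0, 0.6, 0.35, 0.2, 0.12, 0.08, 0.05]
   print(run_with_tolerance(losses, tolerance=0.0, n_iterations=8))  # 8: never stops early
   print(run_with_tolerance(losses, tolerance=0.1, n_iterations=8))  # 6: drop 0.2->0.12 is < 0.1
   ```

   With `tolerance=0` the improvement check can never fire, so training always runs the full `n_iterations`, matching the first transcript; a positive tolerance ends training as soon as the per-iteration improvement falls under it, matching the shortened second run.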
   
   LGTM
   
   @hpandeycodeit @njayaram2   Just to confirm: in the case of grouping, must all groups meet the tolerance threshold before training ends?
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services