Posted to issues@madlib.apache.org by "Frank McQuillan (JIRA)" <ji...@apache.org> on 2019/04/24 21:17:00 UTC

[jira] [Resolved] (MADLIB-1329) MLP warm start not working

     [ https://issues.apache.org/jira/browse/MADLIB-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Frank McQuillan resolved MADLIB-1329.
-------------------------------------
    Resolution: Fixed

> MLP warm start not working
> --------------------------
>
>                 Key: MADLIB-1329
>                 URL: https://issues.apache.org/jira/browse/MADLIB-1329
>             Project: Apache MADlib
>          Issue Type: Bug
>          Components: Module: Neural Networks
>            Reporter: Frank McQuillan
>            Priority: Major
>             Fix For: v1.16
>
>
> MLP warm start is not working when there is more than one hidden layer:
> {code}
> madlib=# DROP TABLE IF EXISTS mlp_model, mlp_model_summary, mlp_model_standardization;
> DROP TABLE
> madlib=# -- Set seed so results are reproducible
> madlib=# SELECT setseed(0);
> -[ RECORD 1 ]
> setseed | 
> madlib=# SELECT madlib.mlp_classification(
> madlib(#     'iris_data',      -- Source table
> madlib(#     'mlp_model',      -- Destination table
> madlib(#     'attributes',     -- Input features
> madlib(#     'class_text',     -- Label
> madlib(#     ARRAY[5,5,5,5],         -- Number of units per layer
> madlib(#     'learning_rate_init=0.003,
> madlib'#     n_iterations=25,
> madlib'#     tolerance=0',     -- Optimizer params
> madlib(#     'tanh',           -- Activation function
> madlib(#     NULL,             -- Default weight (1)
> madlib(#     FALSE,            -- No warm start
> madlib(#     TRUE             -- Verbose
> madlib(# );
> INFO:  Iteration: 1, Loss: <1.36043166072>
> INFO:  Iteration: 2, Loss: <1.3499381049>
> INFO:  Iteration: 3, Loss: <1.33443838909>
> INFO:  Iteration: 4, Loss: <1.29191921391>
> INFO:  Iteration: 5, Loss: <1.13514652999>
> INFO:  Iteration: 6, Loss: <0.605807639127>
> INFO:  Iteration: 7, Loss: <0.222521966636>
> INFO:  Iteration: 8, Loss: <0.119710009513>
> INFO:  Iteration: 9, Loss: <0.0798186326376>
> INFO:  Iteration: 10, Loss: <0.059110614479>
> INFO:  Iteration: 11, Loss: <0.0465561473767>
> INFO:  Iteration: 12, Loss: <0.0381870587426>
> INFO:  Iteration: 13, Loss: <0.0322380000338>
> INFO:  Iteration: 14, Loss: <0.0278080860799>
> INFO:  Iteration: 15, Loss: <0.0243910955926>
> INFO:  Iteration: 16, Loss: <0.0216814708136>
> INFO:  Iteration: 17, Loss: <0.019484269578>
> INFO:  Iteration: 18, Loss: <0.0176694972659>
> INFO:  Iteration: 19, Loss: <0.0161472761464>
> INFO:  Iteration: 20, Loss: <0.0148535627994>
> INFO:  Iteration: 21, Loss: <0.0137415416838>
> INFO:  Iteration: 22, Loss: <0.0127762321545>
> INFO:  Iteration: 23, Loss: <0.0119309943525>
> INFO:  Iteration: 24, Loss: <0.0111851991017>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> madlib=# SELECT madlib.mlp_classification(
> madlib(#     'iris_data',      -- Source table
> madlib(#     'mlp_model',      -- Destination table
> madlib(#     'attributes',     -- Input features
> madlib(#     'class_text',     -- Label
> madlib(#     ARRAY[5,5,5,5],         -- Number of units per layer
> madlib(#     'learning_rate_init=0.003,
> madlib'#     n_iterations=25,
> madlib'#     tolerance=0',     -- Optimizer params
> madlib(#     'tanh',           -- Activation function
> madlib(#     NULL,             -- Default weight (1)
> madlib(#     TRUE,             -- Warm start
> madlib(#     TRUE             -- Verbose
> madlib(# );
> INFO:  Iteration: 1, Loss: <0.526775235884>
> INFO:  Iteration: 2, Loss: <0.117145401113>
> INFO:  Iteration: 3, Loss: <0.0602106163466>
> INFO:  Iteration: 4, Loss: <0.04108334178>
> INFO:  Iteration: 5, Loss: <0.0312666961015>
> INFO:  Iteration: 6, Loss: <0.0252605498585>
> INFO:  Iteration: 7, Loss: <0.0211986830952>
> INFO:  Iteration: 8, Loss: <0.01826564738>
> INFO:  Iteration: 9, Loss: <0.0160470744822>
> INFO:  Iteration: 10, Loss: <0.0143096846719>
> INFO:  Iteration: 11, Loss: <0.0129119612879>
> INFO:  Iteration: 12, Loss: <0.011763011155>
> INFO:  Iteration: 13, Loss: <0.0108017593072>
> INFO:  Iteration: 14, Loss: <0.00998563483374>
> INFO:  Iteration: 15, Loss: <0.0092840495142>
> INFO:  Iteration: 16, Loss: <0.00867445508889>
> INFO:  Iteration: 17, Loss: <0.00813986234697>
> INFO:  Iteration: 18, Loss: <0.00766722611383>
> INFO:  Iteration: 19, Loss: <0.00724636272143>
> INFO:  Iteration: 20, Loss: <0.00686920569199>
> INFO:  Iteration: 21, Loss: <0.00652928239318>
> INFO:  Iteration: 22, Loss: <0.00622133870019>
> INFO:  Iteration: 23, Loss: <0.00594106500617>
> INFO:  Iteration: 24, Loss: <0.00568489301496>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> madlib=# SELECT madlib.mlp_classification(
> madlib(#     'iris_data',      -- Source table
> madlib(#     'mlp_model',      -- Destination table
> madlib(#     'attributes',     -- Input features
> madlib(#     'class_text',     -- Label
> madlib(#     ARRAY[5,5,5,5],         -- Number of units per layer
> madlib(#     'learning_rate_init=0.003,
> madlib'#     n_iterations=25,
> madlib'#     tolerance=0',     -- Optimizer params
> madlib(#     'tanh',           -- Activation function
> madlib(#     NULL,             -- Default weight (1)
> madlib(#     TRUE,             -- Warm start
> madlib(#     TRUE             -- Verbose
> madlib(# );
> INFO:  Iteration: 1, Loss: <0.310183199428>
> INFO:  Iteration: 2, Loss: <0.088393889654>
> INFO:  Iteration: 3, Loss: <0.0516018746862>
> INFO:  Iteration: 4, Loss: <0.0371242489164>
> INFO:  Iteration: 5, Loss: <0.0291275739389>
> INFO:  Iteration: 6, Loss: <0.0240085087393>
> INFO:  Iteration: 7, Loss: <0.0204364115942>
> INFO:  Iteration: 8, Loss: <0.0177966114175>
> INFO:  Iteration: 9, Loss: <0.0157638172685>
> INFO:  Iteration: 10, Loss: <0.0141490621788>
> INFO:  Iteration: 11, Loss: <0.0128347944511>
> INFO:  Iteration: 12, Loss: <0.0117439367706>
> INFO:  Iteration: 13, Loss: <0.0108237918575>
> INFO:  Iteration: 14, Loss: <0.0100370813095>
> INFO:  Iteration: 15, Loss: <0.00935667554548>
> INFO:  Iteration: 16, Loss: <0.00876235168295>
> INFO:  Iteration: 17, Loss: <0.00823872217701>
> INFO:  Iteration: 18, Loss: <0.00777386815345>
> INFO:  Iteration: 19, Loss: <0.00735841229808>
> INFO:  Iteration: 20, Loss: <0.0069848745105>
> INFO:  Iteration: 21, Loss: <0.00664721443072>
> INFO:  Iteration: 22, Loss: <0.00634050043973>
> INFO:  Iteration: 23, Loss: <0.00606066608629>
> INFO:  Iteration: 24, Loss: <0.00580432810321>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> madlib=# SELECT madlib.mlp_classification(
> madlib(#     'iris_data',      -- Source table
> madlib(#     'mlp_model',      -- Destination table
> madlib(#     'attributes',     -- Input features
> madlib(#     'class_text',     -- Label
> madlib(#     ARRAY[5,5,5,5],         -- Number of units per layer
> madlib(#     'learning_rate_init=0.003,
> madlib'#     n_iterations=25,
> madlib'#     tolerance=0',     -- Optimizer params
> madlib(#     'tanh',           -- Activation function
> madlib(#     NULL,             -- Default weight (1)
> madlib(#     TRUE,             -- Warm start
> madlib(#     TRUE             -- Verbose
> madlib(# );
> INFO:  Iteration: 1, Loss: <0.241686849256>
> INFO:  Iteration: 2, Loss: <0.0789037812418>
> INFO:  Iteration: 3, Loss: <0.0477383278583>
> INFO:  Iteration: 4, Loss: <0.0349153179967>
> INFO:  Iteration: 5, Loss: <0.0276779503206>
> INFO:  Iteration: 6, Loss: <0.0229787339952>
> INFO:  Iteration: 7, Loss: <0.0196655648511>
> INFO:  Iteration: 8, Loss: <0.0171975669069>
> INFO:  Iteration: 9, Loss: <0.0152849520197>
> INFO:  Iteration: 10, Loss: <0.0137577086149>
> INFO:  Iteration: 11, Loss: <0.0125092120633>
> INFO:  Iteration: 12, Loss: <0.0114690677404>
> INFO:  Iteration: 13, Loss: <0.0105888625494>
> INFO:  Iteration: 14, Loss: <0.00983417252295>
> INFO:  Iteration: 15, Loss: <0.00917983203434>
> INFO:  Iteration: 16, Loss: <0.00860700679725>
> INFO:  Iteration: 17, Loss: <0.0081013143176>
> INFO:  Iteration: 18, Loss: <0.0076515781172>
> INFO:  Iteration: 19, Loss: <0.00724897914204>
> INFO:  Iteration: 20, Loss: <0.00688646377197>
> INFO:  Iteration: 21, Loss: <0.00655832207597>
> INFO:  Iteration: 22, Loss: <0.00625988170135>
> INFO:  Iteration: 23, Loss: <0.00598728196025>
> INFO:  Iteration: 24, Loss: <0.0057373045858>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> {code}
> MLP warm start does seem to work OK for 1 hidden layer:
> {code}
> madlib=# DROP TABLE IF EXISTS mlp_model, mlp_model_summary, mlp_model_standardization;
> DROP TABLE
> madlib=# -- Set seed so results are reproducible
> madlib=# SELECT setseed(0);
> -[ RECORD 1 ]
> setseed | 
> madlib=# SELECT madlib.mlp_classification(
> madlib(#     'iris_data',      -- Source table
> madlib(#     'mlp_model',      -- Destination table
> madlib(#     'attributes',     -- Input features
> madlib(#     'class_text',     -- Label
> madlib(#     ARRAY[5],         -- Number of units per layer
> madlib(#     'learning_rate_init=0.003,
> madlib'#     n_iterations=25,
> madlib'#     tolerance=0',     -- Optimizer params
> madlib(#     'tanh',           -- Activation function
> madlib(#     NULL,             -- Default weight (1)
> madlib(#     FALSE,            -- No warm start
> madlib(#     TRUE             -- Verbose
> madlib(# );
> INFO:  Iteration: 1, Loss: <1.43845150739>
> INFO:  Iteration: 2, Loss: <0.645143112079>
> INFO:  Iteration: 3, Loss: <0.198510249554>
> INFO:  Iteration: 4, Loss: <0.10299445227>
> INFO:  Iteration: 5, Loss: <0.0683773141272>
> INFO:  Iteration: 6, Loss: <0.0508462725678>
> INFO:  Iteration: 7, Loss: <0.0403303687053>
> INFO:  Iteration: 8, Loss: <0.0333482996416>
> INFO:  Iteration: 9, Loss: <0.0283871624887>
> INFO:  Iteration: 10, Loss: <0.02468658589>
> INFO:  Iteration: 11, Loss: <0.0218236746963>
> INFO:  Iteration: 12, Loss: <0.0195449339706>
> INFO:  Iteration: 13, Loss: <0.0176893779887>
> INFO:  Iteration: 14, Loss: <0.0161499639458>
> INFO:  Iteration: 15, Loss: <0.0148528011871>
> INFO:  Iteration: 16, Loss: <0.0137452714235>
> INFO:  Iteration: 17, Loss: <0.0127889068292>
> INFO:  Iteration: 18, Loss: <0.0119549448347>
> INFO:  Iteration: 19, Loss: <0.011221456629>
> INFO:  Iteration: 20, Loss: <0.0105714364345>
> INFO:  Iteration: 21, Loss: <0.00999149680482>
> INFO:  Iteration: 22, Loss: <0.00947095724551>
> INFO:  Iteration: 23, Loss: <0.00900119461623>
> INFO:  Iteration: 24, Loss: <0.00857517170977>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> madlib=# SELECT madlib.mlp_classification(
> madlib(#     'iris_data',      -- Source table
> madlib(#     'mlp_model',      -- Destination table
> madlib(#     'attributes',     -- Input features
> madlib(#     'class_text',     -- Label
> madlib(#     ARRAY[5],         -- Number of units per layer
> madlib(#     'learning_rate_init=0.003,
> madlib'#     n_iterations=25,
> madlib'#     tolerance=0',     -- Optimizer params
> madlib(#     'tanh',           -- Activation function
> madlib(#     NULL,             -- Default weight (1)
> madlib(#     TRUE,             -- Warm start
> madlib(#     TRUE             -- Verbose
> madlib(# );
> INFO:  Iteration: 1, Loss: <0.00783212719879>
> INFO:  Iteration: 2, Loss: <0.00754868929356>
> INFO:  Iteration: 3, Loss: <0.00724537071819>
> INFO:  Iteration: 4, Loss: <0.00696499503481>
> INFO:  Iteration: 5, Loss: <0.0067052209318>
> INFO:  Iteration: 6, Loss: <0.00646387227329>
> INFO:  Iteration: 7, Loss: <0.00623906734339>
> INFO:  Iteration: 8, Loss: <0.00602917134597>
> INFO:  Iteration: 9, Loss: <0.00583275728324>
> INFO:  Iteration: 10, Loss: <0.00564857402972>
> INFO:  Iteration: 11, Loss: <0.00547552010753>
> INFO:  Iteration: 12, Loss: <0.0053126220109>
> INFO:  Iteration: 13, Loss: <0.00515901618608>
> INFO:  Iteration: 14, Loss: <0.00501393396817>
> INFO:  Iteration: 15, Loss: <0.00487668892432>
> INFO:  Iteration: 16, Loss: <0.00474666616715>
> INFO:  Iteration: 17, Loss: <0.00462331328983>
> INFO:  Iteration: 18, Loss: <0.00450613264339>
> INFO:  Iteration: 19, Loss: <0.00439467473002>
> INFO:  Iteration: 20, Loss: <0.00428853252899>
> INFO:  Iteration: 21, Loss: <0.00418733660521>
> INFO:  Iteration: 22, Loss: <0.00409075087755>
> INFO:  Iteration: 23, Loss: <0.00399846894544>
> INFO:  Iteration: 24, Loss: <0.00391021088973>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> madlib=# SELECT madlib.mlp_classification(
>     'iris_data',      -- Source table
>     'mlp_model',      -- Destination table
>     'attributes',     -- Input features
>     'class_text',     -- Label
>     ARRAY[5],         -- Number of units per layer
>     'learning_rate_init=0.003,
>     n_iterations=25,
>     tolerance=0',     -- Optimizer params
>     'tanh',           -- Activation function
>     NULL,             -- Default weight (1)
>     TRUE,             -- Warm start
    TRUE             -- Verbose
> );
> INFO:  Iteration: 1, Loss: <0.00374476271638>
> INFO:  Iteration: 2, Loss: <0.00367708783353>
> INFO:  Iteration: 3, Loss: <0.00360220992802>
> INFO:  Iteration: 4, Loss: <0.00353024687379>
> INFO:  Iteration: 5, Loss: <0.00346107072758>
> INFO:  Iteration: 6, Loss: <0.00339452362577>
> INFO:  Iteration: 7, Loss: <0.00333045921955>
> INFO:  Iteration: 8, Loss: <0.00326874179276>
> INFO:  Iteration: 9, Loss: <0.00320924531599>
> INFO:  Iteration: 10, Loss: <0.0031518525991>
> INFO:  Iteration: 11, Loss: <0.00309645453113>
> INFO:  Iteration: 12, Loss: <0.00304294939712>
> INFO:  Iteration: 13, Loss: <0.00299124226312>
> INFO:  Iteration: 14, Loss: <0.00294124442135>
> INFO:  Iteration: 15, Loss: <0.00289287288902>
> INFO:  Iteration: 16, Loss: <0.00284604995461>
> INFO:  Iteration: 17, Loss: <0.00280070276667>
> INFO:  Iteration: 18, Loss: <0.0027567629604>
> INFO:  Iteration: 19, Loss: <0.0027141663181>
> INFO:  Iteration: 20, Loss: <0.00267285246002>
> INFO:  Iteration: 21, Loss: <0.00263276456234>
> INFO:  Iteration: 22, Loss: <0.00259384909969>
> INFO:  Iteration: 23, Loss: <0.00255605560962>
> INFO:  Iteration: 24, Loss: <0.00251933647706>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> madlib=# SELECT madlib.mlp_classification(
>     'iris_data',      -- Source table
>     'mlp_model',      -- Destination table
>     'attributes',     -- Input features
>     'class_text',     -- Label
>     ARRAY[5],         -- Number of units per layer
>     'learning_rate_init=0.003,
>     n_iterations=25,
>     tolerance=0',     -- Optimizer params
>     'tanh',           -- Activation function
>     NULL,             -- Default weight (1)
>     TRUE,             -- Warm start
    TRUE             -- Verbose
> );
> INFO:  Iteration: 1, Loss: <0.00244894389095>
> INFO:  Iteration: 2, Loss: <0.00241947565705>
> INFO:  Iteration: 3, Loss: <0.00238653134239>
> INFO:  Iteration: 4, Loss: <0.00235444586402>
> INFO:  Iteration: 5, Loss: <0.0023232030616>
> INFO:  Iteration: 6, Loss: <0.00229277038229>
> INFO:  Iteration: 7, Loss: <0.00226311685617>
> INFO:  Iteration: 8, Loss: <0.00223421306776>
> INFO:  Iteration: 9, Loss: <0.00220603106015>
> INFO:  Iteration: 10, Loss: <0.00217854424586>
> INFO:  Iteration: 11, Loss: <0.0021517273241>
> INFO:  Iteration: 12, Loss: <0.00212555620409>
> INFO:  Iteration: 13, Loss: <0.00210000793359>
> INFO:  Iteration: 14, Loss: <0.00207506063261>
> INFO:  Iteration: 15, Loss: <0.00205069343161>
> INFO:  Iteration: 16, Loss: <0.0020268864139>
> INFO:  Iteration: 17, Loss: <0.00200362056202>
> INFO:  Iteration: 18, Loss: <0.00198087770756>
> INFO:  Iteration: 19, Loss: <0.00195864048445>
> INFO:  Iteration: 20, Loss: <0.0019368922852>
> INFO:  Iteration: 21, Loss: <0.00191561721998>
> INFO:  Iteration: 22, Loss: <0.00189480007833>
> INFO:  Iteration: 23, Loss: <0.00187442629334>
> INFO:  Iteration: 24, Loss: <0.00185448190796>
> -[ RECORD 1 ]------+-
> mlp_classification | 
> {code}
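> The difference between the two transcripts can be checked mechanically: when warm start works, the first loss of a warm-start run should sit close to the final loss of the preceding run, not jump back toward the cold-start level. The sketch below is illustrative only; it compares first/last loss values copied from the logs above, and the `slack=10.0` order-of-magnitude threshold is an arbitrary heuristic assumption, not anything MADlib defines:
> {code}
> # Heuristic check of warm-start continuity using loss values from the logs.
>
> def warm_start_continued(prev_final_loss, next_first_loss, slack=10.0):
>     """A warm-start run should resume near the previous run's final loss,
>     i.e. within roughly an order of magnitude, rather than restart high."""
>     return next_first_loss <= prev_final_loss * slack
>
> # >1 hidden layers (ARRAY[5,5,5,5]): run 2 starts at 0.5268 although
> # run 1 ended at 0.0112 -- the jump suggests warm start did not resume.
> multi_layer = [(0.0111851991017, 0.526775235884),
>                (0.00568489301496, 0.310183199428),
>                (0.00580432810321, 0.241686849256)]
>
> # 1 hidden layer (ARRAY[5]): run 2 starts at 0.00783 after run 1 ended
> # at 0.00858 -- a smooth continuation.
> single_layer = [(0.00857517170977, 0.00783212719879),
>                 (0.00391021088973, 0.00374476271638),
>                 (0.00251933647706, 0.00244894389095)]
>
> print(all(warm_start_continued(a, b) for a, b in multi_layer))   # prints False
> print(all(warm_start_continued(a, b) for a, b in single_layer))  # prints True
> {code}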



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)