Posted to user@spark.apache.org by Hayri Volkan Agun <vo...@gmail.com> on 2016/02/09 23:26:06 UTC

Learning Fails with 4 Layers in ANN Training with SGDOptimizer

Hi Everyone,

When MultilayerPerceptronClassifier is set to three or four layers
and the SGDOptimizer's parameters are as follows:

tol = 1e-5
numIter = 10000
layers = 82, 100, 30, 29
stepSize = 0.05
sigmoid function in all layers

learning finishes but it does not converge. What might be causing this,
and what should the parameters be?

-- 
Hayri Volkan Agun
PhD. Student - Anadolu University

RE: Learning Fails with 4 Layers in ANN Training with SGDOptimizer

Posted by "Ulanov, Alexander" <al...@hpe.com>.
Hi Hayri,

The default MLP optimizer is LBFGS. SGD is available only through the private interface, and its use is discouraged for multiple reasons. With regards to SGD in general, the parameters are very specific to the dataset and network configuration, so one needs to find them empirically. A good starting point is a small learning rate, a rather small batch size, and a large number of iterations, to make sure the number of processed instances exceeds the size of the training set.

Best regards, Alexander
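
[Editor's note: the advice above (small learning rate, enough iterations) can be illustrated outside Spark with a toy SGD example in plain Python. The least-squares problem, data, and step sizes below are invented for illustration; they are not Spark's implementation.]

```python
# Toy illustration (plain Python, not Spark): SGD on the 1-D least-squares
# problem min_w sum_i (w*x_i - y_i)^2 with y_i = 3*x_i, so the optimum is w = 3.
# A small step size converges; a too-large one makes the error grow each update.

def sgd(step_size, num_iter):
    data = [(-1.0, -3.0), (-0.5, -1.5), (0.5, 1.5), (1.0, 3.0)]  # y = 3x
    w = 0.0
    for i in range(num_iter):
        x, y = data[i % len(data)]        # cycle through the samples
        grad = 2.0 * (w * x - y) * x      # gradient of (w*x - y)^2 w.r.t. w
        w -= step_size * grad
    return w

w_small = sgd(step_size=0.05, num_iter=10000)  # converges to ~3.0
w_large = sgd(step_size=5.0, num_iter=100)     # overshoots; error blows up
```

On a real multilayer network the safe step-size range additionally depends on the data scale and the architecture, which is why the parameters have to be found empirically, as Alexander notes.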

From: Hayri Volkan Agun [mailto:volkanagun@gmail.com]
Sent: Tuesday, February 09, 2016 2:26 PM
To: user@spark.apache.org
Subject: Learning Fails with 4 Layers in ANN Training with SGDOptimizer

Hi Everyone,

When MultilayerPerceptronClassifier is set to three or four layers and the SGDOptimizer's parameters are as follows:

tol = 1e-5
numIter = 10000
layers = 82, 100, 30, 29
stepSize = 0.05
sigmoid function in all layers

learning finishes but it does not converge. What might be causing this, and what should the parameters be?

--
Hayri Volkan Agun
PhD. Student - Anadolu University