Posted to dev@hama.apache.org by "Christian Herta (JIRA)" <ji...@apache.org> on 2012/11/23 17:14:58 UTC

[jira] [Created] (HAMA-681) Multi Layer Perceptron

Christian Herta created HAMA-681:
------------------------------------

             Summary: Multi Layer Perceptron 
                 Key: HAMA-681
                 URL: https://issues.apache.org/jira/browse/HAMA-681
             Project: Hama
          Issue Type: New Feature
          Components: machine learning
    Affects Versions: 0.5.0
            Reporter: Christian Herta


Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation (a minimal sketch follows below)
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
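
To make the backpropagation goal above concrete, here is a minimal, single-hidden-layer sketch in plain Java with no Hama or Mahout dependencies. All class and method names are invented for illustration and are not taken from MAHOUT-976 or any existing Hama code; biases and regularization are omitted.

import java.util.Random;

/** Toy MLP with one hidden layer, sigmoid activations, trained by backpropagation. */
public class SimpleMlp {
  private final double[][] wHidden;   // [hidden][input]
  private final double[][] wOutput;   // [output][hidden]
  private final double learningRate = 0.1;
  private final Random rnd = new Random(42);

  public SimpleMlp(int inputs, int hidden, int outputs) {
    wHidden = randomMatrix(hidden, inputs);
    wOutput = randomMatrix(outputs, hidden);
  }

  /** One stochastic gradient step on a single (x, target) example. */
  public void train(double[] x, double[] target) {
    double[] h = layer(wHidden, x);          // forward pass, hidden activations
    double[] y = layer(wOutput, h);          // forward pass, outputs

    // output-layer error: delta_k = (y_k - t_k) * y_k * (1 - y_k)
    double[] deltaOut = new double[y.length];
    for (int k = 0; k < y.length; k++) {
      deltaOut[k] = (y[k] - target[k]) * y[k] * (1 - y[k]);
    }

    // hidden-layer error, backpropagated through the output weights
    double[] deltaHidden = new double[h.length];
    for (int j = 0; j < h.length; j++) {
      double sum = 0;
      for (int k = 0; k < y.length; k++) {
        sum += deltaOut[k] * wOutput[k][j];
      }
      deltaHidden[j] = sum * h[j] * (1 - h[j]);
    }

    // gradient descent on both weight matrices
    for (int k = 0; k < y.length; k++) {
      for (int j = 0; j < h.length; j++) {
        wOutput[k][j] -= learningRate * deltaOut[k] * h[j];
      }
    }
    for (int j = 0; j < h.length; j++) {
      for (int i = 0; i < x.length; i++) {
        wHidden[j][i] -= learningRate * deltaHidden[j] * x[i];
      }
    }
  }

  private double[] layer(double[][] w, double[] in) {
    double[] out = new double[w.length];
    for (int j = 0; j < w.length; j++) {
      double s = 0;
      for (int i = 0; i < in.length; i++) {
        s += w[j][i] * in[i];
      }
      out[j] = 1.0 / (1.0 + Math.exp(-s));   // sigmoid activation
    }
    return out;
  }

  private double[][] randomMatrix(int rows, int cols) {
    double[][] m = new double[rows][cols];
    for (int r = 0; r < rows; r++) {
      for (int c = 0; c < cols; c++) {
        m[r][c] = rnd.nextDouble() - 0.5;    // small random initial weights
      }
    }
    return m;
  }
}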
 


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HAMA-681) Multi Layer Perceptron

Posted by "Christian Herta (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Herta updated HAMA-681:
---------------------------------

    Description: 
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.


  was:
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 


    
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>    Affects Versions: 0.5.0
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
> Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.


[jira] [Updated] (HAMA-681) Multi Layer Perceptron

Posted by "Christian Herta (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Herta updated HAMA-681:
---------------------------------

    Description: 
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS; a small Adagrad sketch follows this list)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
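
Adagrad, named above as one route to more efficient learning, scales each weight's step size by the inverse square root of its accumulated squared gradients, so frequently-updated weights take smaller steps. A minimal per-weight sketch in plain Java (class and method names are invented for illustration and are not existing Hama or Mahout code):

/** Per-weight Adagrad update: w_i -= lr * g_i / (sqrt(sum of past g_i^2) + eps). */
public final class AdagradUpdater {
  private final double learningRate;
  private final double epsilon = 1e-8;        // avoids division by zero
  private final double[] accumulatedSquares;  // running sum of squared gradients per weight

  public AdagradUpdater(int numWeights, double learningRate) {
    this.learningRate = learningRate;
    this.accumulatedSquares = new double[numWeights];
  }

  public void update(double[] weights, double[] gradient) {
    for (int i = 0; i < weights.length; i++) {
      accumulatedSquares[i] += gradient[i] * gradient[i];
      weights[i] -= learningRate * gradient[i]
          / (Math.sqrt(accumulatedSquares[i]) + epsilon);
    }
  }
}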
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.
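
As one concrete candidate, the following rough sketch shows a fully synchronous, data-parallel update on top of the Hama BSP API (assuming roughly the 0.5-era API; the class and message names are invented for illustration and the gradient computation is left as a stub). Each peer computes a gradient on its local data split, broadcasts it to all peers, waits at the barrier, and then applies the averaged gradient so every weight replica stays identical:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hama.bsp.BSP;
import org.apache.hama.bsp.BSPPeer;
import org.apache.hama.bsp.sync.SyncException;

/** Message carrying one peer's partial gradient vector. */
class GradientMessage implements Writable {
  double[] gradient;

  GradientMessage() {}                        // required for deserialization
  GradientMessage(double[] gradient) { this.gradient = gradient; }

  @Override public void write(DataOutput out) throws IOException {
    out.writeInt(gradient.length);
    for (double g : gradient) out.writeDouble(g);
  }

  @Override public void readFields(DataInput in) throws IOException {
    gradient = new double[in.readInt()];
    for (int i = 0; i < gradient.length; i++) gradient[i] = in.readDouble();
  }
}

/** One superstep of synchronized, averaged weight updates across all peers. */
public class MlpTrainingBSP extends
    BSP<NullWritable, NullWritable, NullWritable, NullWritable, GradientMessage> {

  private final double[] weights = new double[100];    // illustrative model size
  private final double learningRate = 0.01;

  @Override
  public void bsp(BSPPeer<NullWritable, NullWritable, NullWritable, NullWritable,
                          GradientMessage> peer)
      throws IOException, SyncException, InterruptedException {
    double[] localGradient = computeLocalGradient();    // backprop over the local split (stub)

    for (String other : peer.getAllPeerNames()) {        // broadcast the partial gradient
      peer.send(other, new GradientMessage(localGradient));
    }
    peer.sync();                                          // barrier: all gradients delivered

    double[] sum = new double[weights.length];
    int count = 0;
    GradientMessage msg;
    while ((msg = peer.getCurrentMessage()) != null) {
      for (int i = 0; i < sum.length; i++) sum[i] += msg.gradient[i];
      count++;
    }
    for (int i = 0; i < weights.length; i++) {            // apply the averaged gradient
      weights[i] -= learningRate * sum[i] / Math.max(count, 1);
    }
  }

  private double[] computeLocalGradient() {
    return new double[weights.length];                    // placeholder for real backpropagation
  }
}

An asynchronous or parameter-server-style variant would trade this strict consistency for less time spent at the barrier; evaluating that trade-off is exactly the open question above.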

Resources:
 Videos:
    - http://www.youtube.com/watch?v=ZmNOAtZIgIk
    - http://techtalks.tv/talks/57639/

 MLP and Deep Learning Tutorial:
 - http://www.stanford.edu/class/cs294a/

 Scientific Papers:
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


  was:
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.

Resources:
 Videos:
    - http://www.youtube.com/watch?v=ZmNOAtZIgIk
    - http://techtalks.tv/talks/57639/
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://www.stanford.edu/class/cs294a/
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


    
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>    Affects Versions: 0.5.0
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
> Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:
>  Videos:
>     - http://www.youtube.com/watch?v=ZmNOAtZIgIk
>     - http://techtalks.tv/talks/57639/
>  MLP and Deep Learning Tutorial:
>  - http://www.stanford.edu/class/cs294a/
>  Scientific Papers:
>  - Google's "Brain" project: 
> http://research.google.com/archive/large_deep_networks_nips2012.html
>  - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>  - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


[jira] [Updated] (HAMA-681) Multi Layer Perceptron

Posted by "Christian Herta (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Herta updated HAMA-681:
---------------------------------

    Description: 
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.

Resources:
 Videos:
    - http://www.youtube.com/watch?v=ZmNOAtZIgIk
    - http://techtalks.tv/talks/57639/
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://www.stanford.edu/class/cs294a/
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


  was:
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.

Resources:  
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://www.stanford.edu/class/cs294a/
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


    
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>    Affects Versions: 0.5.0
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
> Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:
>  Videos:
>     - http://www.youtube.com/watch?v=ZmNOAtZIgIk
>     - http://techtalks.tv/talks/57639/
>  - Google's "Brain" project: 
> http://research.google.com/archive/large_deep_networks_nips2012.html
>  - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>  - http://www.stanford.edu/class/cs294a/
>  - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


[jira] [Updated] (HAMA-681) Multi Layer Perceptron

Posted by "Christian Herta (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Herta updated HAMA-681:
---------------------------------

    Description: 
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.

Resources:  
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


  was:
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.


    
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>    Affects Versions: 0.5.0
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
> Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:  
>  - Google's "Brain" project: 
> http://research.google.com/archive/large_deep_networks_nips2012.html
>  - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>  - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


[jira] [Updated] (HAMA-681) Multi Layer Perceptron

Posted by "Christian Herta (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Herta updated HAMA-681:
---------------------------------

    Description: 
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.

Resources:  
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://www.stanford.edu/class/cs294a/
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


  was:
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS)
 - Highly efficient distributed learning
 - Autoencoder - Sparse (denoising) Autoencoder
 - Deep Learning
 
---
Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.

Different strategies for efficient synchronized weight updates have to be evaluated.

Resources:  
 - Google's "Brain" project: 
http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf


    
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>    Affects Versions: 0.5.0
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of Map-Reduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
> Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on the Mahout matrix library must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:  
>  - Google's "Brain" project: 
> http://research.google.com/archive/large_deep_networks_nips2012.html
>  - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>  - http://www.stanford.edu/class/cs294a/
>  - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf
