Posted to commits@systemml.apache.org by ni...@apache.org on 2019/03/27 16:24:11 UTC

[systemml] branch gh-pages updated: [MINOR][DOC] Updated Keras2DML and Caffe2DML reference guides.

This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git


The following commit(s) were added to refs/heads/gh-pages by this push:
     new c71d404  [MINOR][DOC] Updated Keras2DML and Caffe2DML reference guides.
c71d404 is described below

commit c71d404922591300ef6c9e872069ba94ae944cd1
Author: Niketan Pansare <np...@us.ibm.com>
AuthorDate: Wed Mar 27 09:21:30 2019 -0700

    [MINOR][DOC] Updated Keras2DML and Caffe2DML reference guides.
---
 reference-guide-caffe2dml.md | 12 ++++++------
 reference-guide-keras2dml.md | 30 +++++++++++++++++++++---------
 2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/reference-guide-caffe2dml.md b/reference-guide-caffe2dml.md
index 1a3d154..993d587 100644
--- a/reference-guide-caffe2dml.md
+++ b/reference-guide-caffe2dml.md
@@ -1137,17 +1137,17 @@ class           precision       recall          f1-score        num_true_labels
 
 #### Design document of Caffe2DML
 
-1. Caffe2DML is designed to fit well into the mllearn framework. Hence, the key methods that were to be implemented are:
+Caffe2DML is designed to fit well into the mllearn framework. Hence, the key methods that had to be implemented are:
 - `getTrainingScript` for the `Estimator` class.
 - `getPredictionScript` for the `Model` class.
 
 These methods should be the starting point of any developer to understand the DML generated for training and prediction respectively.
 
-2. To simplify the DML generation in `getTrainingScript` and `getPredictionScript method`, we use DMLGenerator interface. 
+To simplify the DML generation in the `getTrainingScript` and `getPredictionScript` methods, we use the DMLGenerator interface.
 This interface generates DML strings for common operations such as control structures (if, for, while) and built-in functions (read, write).
 This interface also makes the Caffe2DML class easier to read.
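As an illustration of the idea, a DMLGenerator-style helper simply assembles DML source strings for these constructs. Below is a minimal, hypothetical Python sketch (the real interface is a Scala trait, and these function names are illustrative, not the actual SystemML API):

```python
# Hypothetical sketch of what a DMLGenerator-style helper does: it
# assembles DML source strings for common constructs such as loops
# and built-in read/write calls. Names are illustrative only.

def dml_read(var, path):
    return '%s = read("%s")\n' % (var, path)

def dml_write(var, path):
    return 'write(%s, "%s")\n' % (var, path)

def dml_for(var, start, end, body):
    # body is an already-generated DML string; indent it one level
    indented = "".join("  " + line + "\n" for line in body.splitlines())
    return "for (%s in %s:%s) {\n%s}\n" % (var, start, end, indented)

script = dml_read("X", "input.csv") \
    + dml_for("i", 1, 10, "X = X + i") \
    + dml_write("X", "output.csv")
print(script)
```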
 
-3. Here is an analogy for SystemML developers to think of various moving components of Caffe2DML:
+Here is an analogy for SystemML developers to think of various moving components of Caffe2DML:
 - Just as `Dml.g4` in the `org.apache.sysml.parser.dml` package is used to generate the parser classes for DML, `caffe.proto` in the `src/main/proto/caffe` directory
 is used to generate the classes that parse the input network files.
 
@@ -1187,7 +1187,7 @@ trait CaffeSolver {
 }
 ```
 
-4. To simplify the traversal of the network, we created a Network interface:
+To simplify the traversal of the network, we created a Network interface:
 ```
 trait Network {
   def getLayers(): List[String]
@@ -1198,8 +1198,8 @@ trait Network {
 }
 ```
 
-5. One of the key design restriction of Caffe2DML is that every layer is identified uniquely by its name.
+One of the key design restrictions of Caffe2DML is that every layer is identified uniquely by its name.
 This restriction simplifies the code significantly.
 To guard against network files that violate this restriction, Caffe2DML performs rewrites in the CaffeNetwork class (search for conditions 1-5 in the Caffe2DML class).
 
-6. Like Caffe, Caffe2DML also expects the layers to be in sorted order.
+Like Caffe, Caffe2DML also expects the layers to be in sorted order.
\ No newline at end of file
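The last two points above (unique layer names, layers in sorted order) can be illustrated with a small standalone sketch; this is hypothetical Python, not the actual CaffeNetwork rewrite logic:

```python
# Illustrative check of the two Caffe2DML constraints discussed above:
# every layer name must be unique, and every input ("bottom") of a layer
# must be produced by an earlier layer. Treats "data" as the network input.
# Hypothetical sketch, not the actual CaffeNetwork code.

def validate_network(layers):
    """layers: list of (name, bottoms) tuples in file order."""
    seen = {"data"}
    for name, bottoms in layers:
        if name in seen:
            raise ValueError("duplicate layer name: " + name)
        for bottom in bottoms:
            if bottom not in seen:
                raise ValueError("%s consumes %s before it is defined" % (name, bottom))
        seen.add(name)
    return True

validate_network([("conv1", ["data"]), ("relu1", ["conv1"]), ("fc1", ["relu1"])])
```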
diff --git a/reference-guide-keras2dml.md b/reference-guide-keras2dml.md
index a576ee7..d04ff51 100644
--- a/reference-guide-keras2dml.md
+++ b/reference-guide-keras2dml.md
@@ -30,10 +30,30 @@ limitations under the License.
 
 # Layers supported in Keras2DML
 
-TODO:
+If a Keras layer or a hyperparameter is not supported, we throw an error informing the user that the layer is not supported.
+We follow the Keras specification very closely during DML generation, and we validate our layers (both forward and backward) by comparing their results with TensorFlow.
+
+- The following layers are not yet supported, but will be in the near future: `Reshape, Permute, RepeatVector, ActivityRegularization, Masking, SpatialDropout1D, SpatialDropout2D, SeparableConv1D, SeparableConv2D, DepthwiseConv2D, Cropping1D, Cropping2D, GRU and Embedding`.
+- The following layers are not supported, but their 2D variants exist (consider using them instead): `UpSampling1D, ZeroPadding1D, MaxPooling1D, AveragePooling1D and Conv1D`.
+- The specialized `CuDNNGRU and CuDNNLSTM` layers are not required in SystemML. Instead, use the `LSTM` layer.
+- We do not have immediate plans to support the following layers: `Lambda, SpatialDropout3D, Conv3D, Conv3DTranspose, Cropping3D, UpSampling3D, ZeroPadding3D, MaxPooling3D, AveragePooling3D and ConvLSTM2D*`.
 
 # Frequently asked questions
 
+#### How do I specify the batch size, the number of epochs and the validation dataset?
+
+As in Keras, the user can provide `batch_size` and `epochs` via the `fit` method, and a validation dataset via either `validation_split` or `validation_data`.
+
+```python
+# Either:
+sysml_model.fit(features, labels, epochs=10, batch_size=64, validation_split=0.3)
+# Or:
+sysml_model.fit(features, labels, epochs=10, batch_size=64, validation_data=(Xval_numpy, yval_numpy))
+```
+
+Note that we do not support the `verbose` and `callbacks` parameters in our `fit` method. Please use SparkContext's `setLogLevel` method to control the verbosity.
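In Keras, `validation_split=0.3` holds out the trailing 30% of the provided rows as the validation set; assuming Keras2DML mirrors that behavior, the split amounts to the following (hypothetical stdlib sketch, not part of the Keras2DML API):

```python
# Sketch of a validation_split-style holdout: keep the trailing `split`
# fraction of the rows for validation, the rest for training.
# Hypothetical helper, not part of the Keras2DML API.

def train_val_split(rows, split):
    n_val = int(round(len(rows) * split))
    n_train = len(rows) - n_val
    return rows[:n_train], rows[n_train:]

train, val = train_val_split(list(range(10)), 0.3)
```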
+
+
 #### How can I get the training and prediction DML script for the Keras model?
 
 The training and prediction DML scripts can be generated using the `get_training_script()` and `get_prediction_script()` methods.
@@ -49,8 +69,6 @@ print(sysml_model.get_training_script())
 |                                                        | Specified via the given parameter in the Keras2DML constructor | From input Keras' model                                                                 | Corresponding parameter in the Caffe solver file |
 |--------------------------------------------------------|----------------------------------------------------------------|-----------------------------------------------------------------------------------------|--------------------------------------------------|
 | Solver type                                            |                                                                | `type(keras_model.optimizer)`. Supported types: `keras.optimizers.{SGD, Adagrad, Adam}` | `type`                                           |
-| Validation dataset                                     | `test_iter` (explained in the below section)                   | The `validation_data` parameter in the `fit` method is not supported.                   | `test_iter`                                      |
-| Monitoring the loss                                    | `display, test_interval` (explained in the below section)      | The `LossHistory` callback in the `fit` method is not supported.                        | `display, test_interval`                         |
 | Learning rate schedule                                 | `lr_policy`                                                    | The `LearningRateScheduler` callback in the `fit` method is not supported.              | `lr_policy` (default: step)                      |
 | Base learning rate                                     |                                                                | `keras_model.optimizer.lr`                                                              | `base_lr`                                        |
 | Learning rate decay over each update                   |                                                                | `keras_model.optimizer.decay`                                                           | `gamma`                                          |
@@ -59,12 +77,6 @@ print(sysml_model.get_training_script())
 | If type of the optimizer is `keras.optimizers.Adam`    |                                                                | `beta_1, beta_2, epsilon`. The parameter `amsgrad` is not supported.                    | `momentum, momentum2, delta`                     |
 | If type of the optimizer is `keras.optimizers.Adagrad` |                                                                | `epsilon`                                                                               | `delta`                                          |
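As a worked example of the SGD rows in the table above, a small translation helper might look as follows (hypothetical function and defaults; the real mapping is performed inside the Keras2DML converter):

```python
# Illustrative translation of Keras SGD settings into the corresponding
# Caffe solver fields, per the mapping table above. Hypothetical helper,
# not the actual Keras2DML converter code.

def sgd_to_caffe_solver(lr, decay):
    return {
        "type": "SGD",        # from type(keras_model.optimizer)
        "base_lr": lr,        # from keras_model.optimizer.lr
        "gamma": decay,       # learning rate decay over each update
        "lr_policy": "step",  # default learning rate schedule
    }

solver = sgd_to_caffe_solver(lr=0.01, decay=1e-6)
```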
 
-#### How do I specify the batch size and the number of epochs?
-
-Like Keras, the user can provide `batch_size` and `epochs` via the `fit` method. For example: `sysml_model.fit(features, labels, epochs=10, batch_size=64)`.
-
-Note, we do not support `verbose` and `callbacks` parameters in our `fit` method. Please use SparkContext's `setLogLevel` method to control the verbosity.
-
 #### What optimizer and loss does Keras2DML use by default if `keras_model` is not compiled?
 
 If the user does not `compile` the Keras model, then we throw an error.