Posted to commits@systemml.apache.org by ni...@apache.org on 2018/08/30 22:45:15 UTC

systemml git commit: [SYSTEMML-445] Removed batch_norm builtin functions

Repository: systemml
Updated Branches:
  refs/heads/gh-pages 14049d257 -> d38bf4ee9


[SYSTEMML-445] Removed batch_norm builtin functions

- Removed the batch_norm builtin functions so that codegen can be exploited in CP.
- Added rewrites for compiling efficient cuDNN operators.
- Added rewrites for SGD update operations.
- To simplify adding new GPU rewrites, added HopDagPatternMatcher, which allows pattern matching at the HOP level; it can be extended to other rewrites as well (see the sketch after this list).
- Added GPU tests to validate the rewrites.
- Updated the DML language documentation.
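
For readers unfamiliar with HOP-level pattern matching, the standalone Java sketch below illustrates the general idea: a pattern describes the shape of a sub-DAG (here the vanilla SGD update W - lr * dW, one of the patterns this commit targets) and a recursive walk checks whether a HOP DAG fragment has that shape. The Hop and Pattern classes are hypothetical stand-ins for illustration only; they are not the actual SystemML HopDagPatternMatcher API.

// Minimal, self-contained sketch of HOP-level pattern matching.
// The Hop and Pattern classes below are illustrative stand-ins and
// do NOT reflect the actual SystemML HopDagPatternMatcher API.
import java.util.Arrays;
import java.util.List;

public class HopPatternSketch {

    /** Tiny stand-in for a HOP: an operator name plus child nodes. */
    static final class Hop {
        final String op;
        final List<Hop> inputs;
        Hop(String op, Hop... inputs) {
            this.op = op;
            this.inputs = Arrays.asList(inputs);
        }
    }

    /** Pattern node: either a wildcard leaf or an operator with sub-patterns. */
    static final class Pattern {
        final String op;            // null means "match any sub-DAG"
        final List<Pattern> inputs;
        Pattern(String op, Pattern... inputs) {
            this.op = op;
            this.inputs = Arrays.asList(inputs);
        }
        static Pattern any() { return new Pattern((String) null); }

        /** Structural match: operator names and arity must agree at every level. */
        boolean matches(Hop hop) {
            if (op == null)
                return true;
            if (!op.equals(hop.op) || inputs.size() != hop.inputs.size())
                return false;
            for (int i = 0; i < inputs.size(); i++)
                if (!inputs.get(i).matches(hop.inputs.get(i)))
                    return false;
            return true;
        }
    }

    public static void main(String[] args) {
        // DAG for the vanilla SGD update: W - lr * dW.
        Hop dag = new Hop("-", new Hop("W"), new Hop("*", new Hop("lr"), new Hop("dW")));

        // Pattern "any - (any * any)".
        Pattern sgdUpdate = new Pattern("-", Pattern.any(),
                new Pattern("*", Pattern.any(), Pattern.any()));

        System.out.println("SGD update pattern matched: " + sgdUpdate.matches(dag));
    }
}

In a real rewrite, a successful match would be replaced by a single fused operator (for example, a dedicated SGD-update instruction or a cuDNN batch-norm call), rather than merely reported as in this sketch.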


Project: http://git-wip-us.apache.org/repos/asf/systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/systemml/commit/d38bf4ee
Tree: http://git-wip-us.apache.org/repos/asf/systemml/tree/d38bf4ee
Diff: http://git-wip-us.apache.org/repos/asf/systemml/diff/d38bf4ee

Branch: refs/heads/gh-pages
Commit: d38bf4ee982946a7d06c855690c26f072d2ab17d
Parents: 14049d2
Author: Niketan Pansare <np...@us.ibm.com>
Authored: Thu Aug 30 15:40:44 2018 -0700
Committer: Niketan Pansare <np...@us.ibm.com>
Committed: Thu Aug 30 15:40:44 2018 -0700

----------------------------------------------------------------------
 dml-language-reference.md | 2 --
 1 file changed, 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/systemml/blob/d38bf4ee/dml-language-reference.md
----------------------------------------------------------------------
diff --git a/dml-language-reference.md b/dml-language-reference.md
index 924336a..cdcc529 100644
--- a/dml-language-reference.md
+++ b/dml-language-reference.md
@@ -1522,8 +1522,6 @@ Hence, the images are internally represented as a matrix with dimension (N, C *
 | bias_add                                    | input, bias              | [batch_size X num_channels* height_image* width_image]    | [num_channels X 1]                                        | [batch_size X num_channels* height_image* width_image]                                      |                                                                                                                                                                                               | Adds the bias (row vector of size num_channels) to input with the given num_channels                                                              |
 | bias_multiply                               | input, bias              | [batch_size X num_channels* height_image* width_image]    | [num_channels X 1]                                        | [batch_size X num_channels* height_image* width_image]                                      |                                                                                                                                                                                               | Multiplies the bias (row vector of size num_channels) to input with the given num_channels                                                        |
 | lstm                                        | X,  W, bias, out0, c0    | [batch_size X seq_length*num_features]                    | [num_features+hidden_size X 4*hidden_size]                | [batch_size X seq_length*hidden_size] if return_sequences else  [batch_size X hidden_size]  | return_sequences                                                                                                                                                                              | Perform computation for single-layer unidirectional LSTM (outputs: out, carryOut)                                                                 |
-| batch_norm2d                                | input                    | [batch_size X num_channels* height_image* width_image]    |                                                           | [batch_size X num_channels* height_image* width_image]                                      | scale, shift, exponentialMovingAverage_Mean, exponentialMovingAverage_Variance, mode, epsilon, momentum                                                                                       | Performs batch normalization operation  (outputs: updated exponential moving average mean and variance, cache of the batch mean and variance)     |
-| batch_norm2d_backward                       | input, dout              | [batch_size X num_channels* height_image* width_image]    | [batch_size X num_channels* height_image* width_image]    | [batch_size X num_channels* height_image* width_image]                                      | scale, epsilon, cache_mean (from forward), cache_inv_var (from forward)                                                                                                                       | Computed backpropagation error for batch normalization operation                                                                                  |
 
 Note: the builtin functions `batch_norm2d` and `batch_norm2d_backward` are deprecated and will be removed in the next release. The `lstm` builtin function is in experimental phase and is only supported for the GPU backend.