Posted to commits@systemml.apache.org by ni...@apache.org on 2019/03/19 20:25:47 UTC

[systemml] branch gh-pages updated (d38bf4e -> ff681d8)

This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a change to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git.


    from d38bf4e  [SYSTEMML-445] Removed batch_norm builtin functions
     new aa7e0a9  [MINOR] Fixes bug causing stats output to be cleared in JMLC
     new 0599687  [SYSTEMML-2499] Built-in functions for binomial distribution
     new 2251f40  [SYSTEMML-540] Improve the performance of GPU lstm backward operator by passing the state
     new 834ee04  [SYSTEMML-2520] Add documentation search with Algolia service
     new ff681d8  [SYSTEMML-2520][DOC] Specified the steps required to update Crawler configuration for the search indexing in our release documentation.

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _layouts/global.html      | 31 ++++++++++++++++++++++++++++++-
 css/main.css              | 30 +++++++++++++++++++++++++++---
 dml-language-reference.md | 34 ++++++++++++++++++++++------------
 jmlc.md                   | 10 +++++-----
 release-process.md        | 33 +++++++++++++++++++--------------
 5 files changed, 103 insertions(+), 35 deletions(-)


[systemml] 01/05: [MINOR] Fixes bug causing stats output to be cleared in JMLC

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git

commit aa7e0a9baa63622bf5a778ae81e8f65313a45f2f
Author: Anthony Thomas <ah...@eng.ucsd.edu>
AuthorDate: Mon Nov 12 18:56:31 2018 +0530

    [MINOR] Fixes bug causing stats output to be cleared in JMLC
    
    Closes #843.
---
 jmlc.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/jmlc.md b/jmlc.md
index 08d1688..e0d72ea 100644
--- a/jmlc.md
+++ b/jmlc.md
@@ -53,7 +53,7 @@ dependent on the nature of the business use case being addressed.
 
 JMLC can be configured to gather runtime statistics, as in the MLContext API, by calling Connection's `setStatistics()`
 method with a value of `true`. JMLC can also be configured to gather statistics on the memory used by matrices and
-frames in the DML script. To enable collection of memory statistics, call Connection's `gatherMemStats()` method
+frames in the DML script. To enable collection of memory statistics, call PreparedScript's `gatherMemStats()` method
 with a value of `true`. When finegrained statistics are enabled in `SystemML.conf`, JMLC will also report the variables
 in the DML script which used the most memory. An example showing how to enable statistics in JMLC is presented in the
 section below.
@@ -122,10 +122,6 @@ the resulting `"predicted_y"` matrix. We repeat this process. When done, we clos
  
         // obtain connection to SystemML
         Connection conn = new Connection();
-
-        // turn on gathering of runtime statistics and memory use
-        conn.setStatistics(true);
-        conn.gatherMemStats(true);
  
         // read in and precompile DML script, registering inputs and outputs
         String dml = conn.readScript("scoring-example.dml");
@@ -135,6 +131,10 @@ the resulting `"predicted_y"` matrix. We repeat this process. When done, we clos
         String plan = script.explain();
         System.out.println(plan);
 
+        // turn on gathering of runtime statistics and memory use
+        script.setStatistics(true);
+        script.gatherMemStats(true);
+
         double[][] mtx = matrix(4, 3, new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 });
         double[][] result = null;
  


[systemml] 03/05: [SYSTEMML-540] Improve the performance of GPU lstm backward operator by passing the state

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git

commit 2251f4031e745635ba308af12851e2a5ffa7255d
Author: Niketan Pansare <np...@us.ibm.com>
AuthorDate: Tue Mar 19 12:30:01 2019 -0700

    [SYSTEMML-540] Improve the performance of GPU lstm backward operator by passing the state
    
    - The lstm builtin function extended to return state: [out, c, state] = lstm(X, W, b, out0, c0, return_sequences)
    - The lstm_backward builtin function extended to accept state: [dX, dW, db, dout0, dc0] = lstm_backward(X, W, b, out0, c0, given_sequences, dout, dc, state)
    - Updated the DML documentation to reflect this change.
    - Updated the release documentation.
    
    Closes #856.
---
 dml-language-reference.md | 21 +++++++++++----------
 release-process.md        | 25 +++++++++++--------------
 2 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/dml-language-reference.md b/dml-language-reference.md
index 6f1c854..f64b6ea 100644
--- a/dml-language-reference.md
+++ b/dml-language-reference.md
@@ -1521,16 +1521,17 @@ The images are assumed to be stored NCHW format, where N = batch size, C = #chan
 Hence, the images are internally represented as a matrix with dimension (N, C * H * W).
 
 
-| Function name                               | Input matrices           | Dimension of first input matrix                           | Dimension of second input matrix (if applicable)          | Dimension of (first) output matrix                                                          | Input Parameters                                                                                                                                                                              | Notes       [...]
-|---------------------------------------------|--------------------------|-----------------------------------------------------------|-----------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------ [...]
-| conv2d                                      | input, filter            | [batch_size X num_channels* height_image* width_image]    | [num_filters X num_channels* height_filter* width_filter] | [batch_size X num_channels_out* height_out* width_out]                                      | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], filter_shape=[num_filters, num_channels, height_filter, width_filter] | Performs 2D [...]
-| conv2d_backward_filter                      | input, dout              | [batch_size X num_channels* height_image* width_image]    | [batch_size X num_channels_out* height_out* width_out]    | [num_filters X num_channels* height_filter* width_filter]                                   | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], filter_shape=[num_filters, num_channels, height_filter, width_filter] | Computes th [...]
-| conv2d_backward_data                        | filter, dout             | [num_filters X num_channels* height_filter* width_filter] | [batch_size X num_channels_out* height_out* width_out]    | [batch_size X num_channels* height_image* width_image]                                      | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], filter_shape=[num_filters, num_channels, height_filter, width_filter] | Computes th [...]
-| max_pool, avg_pool                          | input                    | [batch_size X num_channels* height_image* width_image]    |                                                           | [batch_size X num_channels* height_out* width_out]                                          | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], pool_size=[height_pool, width_pool]                                   | Performs ma [...]
-| max_pool_backward, avg_pool_backward        | input, dout              | [batch_size X num_channels* height_image* width_image]    | [batch_size X num_channels* height_out* width_out]        | [batch_size X num_channels* height_image* width_image]                                      | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], pool_size=[height_pool, width_pool]                                   | Computes th [...]
-| bias_add                                    | input, bias              | [batch_size X num_channels* height_image* width_image]    | [num_channels X 1]                                        | [batch_size X num_channels* height_image* width_image]                                      |                                                                                                                                                                                               | Adds the bi [...]
-| bias_multiply                               | input, bias              | [batch_size X num_channels* height_image* width_image]    | [num_channels X 1]                                        | [batch_size X num_channels* height_image* width_image]                                      |                                                                                                                                                                                               | Multiplies  [...]
-| lstm                                        | X,  W, bias, out0, c0    | [batch_size X seq_length*num_features]                    | [num_features+hidden_size X 4*hidden_size]                | [batch_size X seq_length*hidden_size] if return_sequences else  [batch_size X hidden_size]  | return_sequences                                                                                                                                                                              | Perform com [...]
+| Function name                               | Input matrices                                      | Dimension of first input matrix                           | Dimension of second input matrix (if applicable)          | Dimension of (first) output matrix                                                          | Input Parameters                                                                                                                                                                 [...]
+|---------------------------------------------|-----------------------------------------------------|-----------------------------------------------------------|-----------------------------------------------------------|---------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| conv2d                                      | input, filter                                       | [batch_size X num_channels* height_image* width_image]    | [num_filters X num_channels* height_filter* width_filter] | [batch_size X num_channels_out* height_out* width_out]                                      | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], filter_shape=[num_filters, num_channels, height_filter,  [...]
+| conv2d_backward_filter                      | input, dout                                         | [batch_size X num_channels* height_image* width_image]    | [batch_size X num_channels_out* height_out* width_out]    | [num_filters X num_channels* height_filter* width_filter]                                   | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], filter_shape=[num_filters, num_channels, height_filter,  [...]
+| conv2d_backward_data                        | filter, dout                                        | [num_filters X num_channels* height_filter* width_filter] | [batch_size X num_channels_out* height_out* width_out]    | [batch_size X num_channels* height_image* width_image]                                      | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], filter_shape=[num_filters, num_channels, height_filter,  [...]
+| max_pool, avg_pool                          | input                                               | [batch_size X num_channels* height_image* width_image]    |                                                           | [batch_size X num_channels* height_out* width_out]                                          | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], pool_size=[height_pool, width_pool]                      [...]
+| max_pool_backward, avg_pool_backward        | input, dout                                         | [batch_size X num_channels* height_image* width_image]    | [batch_size X num_channels* height_out* width_out]        | [batch_size X num_channels* height_image* width_image]                                      | stride=[stride_h, stride_w], padding=[pad_h, pad_w], input_shape=[batch_size, num_channels, height_image, width_image], pool_size=[height_pool, width_pool]                      [...]
+| bias_add                                    | input, bias                                         | [batch_size X num_channels* height_image* width_image]    | [num_channels X 1]                                        | [batch_size X num_channels* height_image* width_image]                                      |                                                                                                                                                                                  [...]
+| bias_multiply                               | input, bias                                         | [batch_size X num_channels* height_image* width_image]    | [num_channels X 1]                                        | [batch_size X num_channels* height_image* width_image]                                      |                                                                                                                                                                                  [...]
+| lstm                                        | X,  W, bias, out0, c0                               | [N X T*D]                                                 | [D+M X 4M]                                                | [N X T*M] if given_sequences is true else [ N X M ]                                         | return_sequences                                                                                                                                                                 [...]
+| lstm_backward                               | X, W, b, out0, c0, given_sequences, dout, dc, state | [N X T*M] if given_sequences is true else [ N X M]        | [N X M]                                                   | [N X T*D]                                                                                   | return_sequences                                                                                                                                                                 [...]
 
 Note: the builtin functions `batch_norm2d` and `batch_norm2d_backward` are deprecated and will be removed in the next release. The `lstm` builtin function is in experimental phase and is only supported for the GPU backend. 
 
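To illustrate the extended `lstm`/`lstm_backward` interface documented in the table above, here is a minimal DML sketch of one forward and one backward call. The variable names and shapes (N, T, D, M) follow the commit message and the table; the toy sizes, the random initializations, and the bias shape [1 x 4*M] are assumptions for illustration only, and per the note above the operator is experimental and supported only on the GPU backend.

    # assumed toy sizes: N samples, T time steps, D features, M hidden units
    N = 16
    T = 10
    D = 8
    M = 32
    X    = rand(rows=N, cols=T*D)
    W    = rand(rows=D+M, cols=4*M)
    b    = matrix(0, rows=1, cols=4*M)   # bias shape assumed to be [1 x 4*M]
    out0 = matrix(0, rows=N, cols=M)
    c0   = matrix(0, rows=N, cols=M)

    # forward pass: the extended lstm builtin additionally returns the internal state
    [out, c, state] = lstm(X, W, b, out0, c0, TRUE)

    # backward pass: the state is passed back in, so it need not be recomputed
    dout = rand(rows=N, cols=T*M)        # gradient w.r.t. out (sequences returned, i.e. TRUE above)
    dc   = rand(rows=N, cols=M)          # gradient w.r.t. the final cell state
    [dX, dW, db, dout0, dc0] = lstm_backward(X, W, b, out0, c0, TRUE, dout, dc, state)
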
diff --git a/release-process.md b/release-process.md
index 3798ec7..dec6b15 100644
--- a/release-process.md
+++ b/release-process.md
@@ -255,22 +255,19 @@ this OS X example.
 
 ## Python Tests
 
-For Spark 1.*, the Python tests at (`src/main/python/tests`) can be executed in the following manner:
+Compile SystemML distribution:
 
-	PYSPARK_PYTHON=python3 pyspark --driver-class-path SystemML.jar test_matrix_agg_fn.py
-	PYSPARK_PYTHON=python3 pyspark --driver-class-path SystemML.jar test_matrix_binary_op.py
-	PYSPARK_PYTHON=python3 pyspark --driver-class-path SystemML.jar test_mlcontext.py
-	PYSPARK_PYTHON=python3 pyspark --driver-class-path SystemML.jar test_mllearn_df.py
-	PYSPARK_PYTHON=python3 pyspark --driver-class-path SystemML.jar test_mllearn_numpy.py
+	mvn package -P distribution
+	cd src/main/python/tests/
 
-For Spark 2.*, pyspark can't be used to run the Python tests, so they can be executed using
-spark-submit:
+For Spark 2.*, the Python tests at (`src/main/python/tests`) can be executed in the following manner:
 
-	spark-submit --driver-class-path SystemML.jar test_matrix_agg_fn.py
-	spark-submit --driver-class-path SystemML.jar test_matrix_binary_op.py
-	spark-submit --driver-class-path SystemML.jar test_mlcontext.py
-	spark-submit --driver-class-path SystemML.jar test_mllearn_df.py
-	spark-submit --driver-class-path SystemML.jar test_mllearn_numpy.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_matrix_agg_fn.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_matrix_binary_op.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mlcontext.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mllearn_df.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mllearn_numpy.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_nn_numpy.py
 
 
 ## Check LICENSE and NOTICE Files
@@ -385,7 +382,7 @@ file and remove all the `@Ignore` annotations from all the tests. Then run the N
 # Run other GPU Unit Tests 
 
 	rm result.txt
-	for t in AggregateUnaryOpTests  BinaryOpTests  MatrixMatrixElementWiseOpTests  RightIndexingTests AppendTest  MatrixMultiplicationOpTest ReorgOpTests ScalarMatrixElementwiseOpTests UnaryOpTests
+	for t in AggregateUnaryOpTests  BinaryOpTests  MatrixMatrixElementWiseOpTests  RightIndexingTests AppendTest  MatrixMultiplicationOpTest ReorgOpTests ScalarMatrixElementwiseOpTests UnaryOpTests LstmTest LstmCPUTest
 	do
 		mvn -Dit.test="org.apache.sysml.test.gpu."$t verify -PgpuTests &> tmp.txt
 		SUCCESS=`grep "BUILD SUCCESS" tmp.txt`


[systemml] 02/05: [SYSTEMML-2499] Built-in functions for binomial distribution

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git

commit 0599687e5c561736870e9ce8df20db7c05b84542
Author: Berthold Reinwald <re...@us.ibm.com>
AuthorDate: Thu Nov 29 17:32:19 2018 -0800

    [SYSTEMML-2499] Built-in functions for binomial distribution
---
 dml-language-reference.md | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/dml-language-reference.md b/dml-language-reference.md
index cdcc529..6f1c854 100644
--- a/dml-language-reference.md
+++ b/dml-language-reference.md
@@ -691,7 +691,7 @@ moment() | Returns the kth central moment of values in a column matrix V, where
 colSums() <br/> colMeans() <br/> colVars() <br/> colSds() <br/> colMaxs() <br/> colMins() | Column-wise computations -- for each column, compute the sum/mean/variance/stdDev/max/min of cell values | Input: matrix <br/> Output: (1 x n) matrix | colSums(X) <br/> colMeans(X) <br/> colVars(X) <br/> colSds(X) <br/> colMaxs(X) <br/>colMins(X)
 cov() | Returns the covariance between two 1-dimensional column matrices X and Y. The function takes an optional weights parameter W. All column matrices X, Y, and W (when specified) must have the exact same dimension. | Input: (X &lt;(n x 1) matrix&gt;, Y &lt;(n x 1) matrix&gt; [, W &lt;(n x 1) matrix&gt;)]) <br/> Output: &lt;scalar&gt; | cov(X,Y) <br/> cov(X,Y,W)
 table() | Returns the contingency table of two vectors A and B. The resulting table F consists of max(A) rows and max(B) columns. <br/> More precisely, F[i,j] = \\|{ k \\| A[k] = i and B[k] = j, 1 ≤ k ≤ n }\\|, where A and B are two n-dimensional vectors. <br/> This function supports multiple other variants, which can be found below, at the end of this Table 7. | Input: (&lt;(n x 1) matrix&gt;, &lt;(n x 1) matrix&gt;), [&lt;(n x 1) matrix&gt;]) <br/> Output: &lt;matrix&gt; | F = table(A, [...]
-cdf()<br/> pnorm()<br/> pexp()<br/> pchisq()<br/> pf()<br/> pt()<br/> icdf()<br/> qnorm()<br/> qexp()<br/> qchisq()<br/> qf()<br/> qt() | p=cdf(target=q, ...) returns the cumulative probability P[X &lt;= q]. <br/> q=icdf(target=p, ...) returns the inverse cumulative probability i.e., it returns q such that the given target p = P[X&lt;=q]. <br/> For more details, please see the section "Probability Distribution Functions" below Table 7. | Input: (target=&lt;scalar&gt;, dist="...", ...) <b [...]
+cdf()<br/> pnorm()<br/> pbinomial()<br/>pexp()<br/> pchisq()<br/> pf()<br/> pt()<br/> icdf()<br/> qnorm()<br/> qbinomial()<br/>qexp()<br/> qchisq()<br/> qf()<br/> qt() | p=cdf(target=q, ...) returns the cumulative probability P[X &lt;= q]. <br/> q=icdf(target=p, ...) returns the inverse cumulative probability i.e., it returns q such that the given target p = P[X&lt;=q]. <br/> For more details, please see the section "Probability Distribution Functions" below Table 7. | Input: (target=&lt [...]
 aggregate() | Splits/groups the values from X according to the corresponding values from G, and then applies the function fn on each group. <br/> The result F is a column matrix, in which each row contains the value computed from a distinct group in G. More specifically, F[k,1] = fn( {X[i,1] \\| 1&lt;=i&lt;=n and G[i,1] = k} ), where n = nrow(X) = nrow(G). <br/> Note that the distinct values in G are used as row indexes in the result matrix F. Therefore, nrow(F) = max(G). It is thus reco [...]
 interQuartileMean() | Returns the mean of all x in X such that x&gt;quantile(X, 0.25) and x&lt;=quantile(X, 0.75). X, W are column matrices (vectors) of the same size. W contains the weights for data in X. | Input: (X &lt;(n x 1) matrix&gt; [, W &lt;(n x 1) matrix&gt;)]) <br/> Output: &lt;scalar&gt; | interQuartileMean(X) <br/> interQuartileMean(X, W)
 quantile () | The p-quantile for a random variable X is the value x such that Pr[X&lt;x] &lt;= p and Pr[X&lt;= x] &gt;= p <br/> let n=nrow(X), i=ceiling(p*n), quantile() will return X[i]. p is a scalar (0&lt;p&lt;1) that specifies the quantile to be computed. Optionally, a weight vector may be provided for X. | Input: (X &lt;(n x 1) matrix&gt;, [W &lt;(n x 1) matrix&gt;),] p &lt;scalar&gt;) <br/> Output: &lt;scalar&gt; | quantile(X, p) <br/> quantile(X, W, p)
@@ -749,6 +749,7 @@ This computes the cumulative probability at the given quantile i.e., P[X&lt;=q],
   * `dist`: name of the distribution specified as a string. Valid values are "normal" (for Normal or Gaussian distribution), "f" (for F distribution), "t" (for Student t-distribution), "chisq" (for Chi Squared distribution), and "exp" (for Exponential distribution). This is a mandatory argument.
   * `...`: parameters of the distribution
     * For `dist="normal"`, valid parameters are mean and sd that specify the mean and standard deviation of the normal distribution. The default values for mean and sd are 0.0 and 1.0, respectively.
+    * For `dist="binomial"`, valid parameters are trials and p that specify the number of trials and probability of success. Both parameters are mandatory.
     * For `dist="f"`, valid parameters are df1 and df2 that specify two degrees of freedom. Both these parameters are mandatory.
     * For `dist="t"`, and dist="chisq", valid parameter is df that specifies the degrees of freedom. This parameter is mandatory.
     * For `dist="exp"`, valid parameter is rate that specifies the rate at which events occur. Note that the mean of exponential distribution is 1.0/rate. The default value is 1.0.
@@ -763,7 +764,7 @@ This computes the inverse cumulative probability i.e., it computes a quantile q
   * `dist`: name of the distribution specified as a string. Same as that in cdf().
   * `...`: parameters of the distribution. Same as those in cdf().
 
-Alternative to `cdf()` and `icdf()`, users can also use distribution-specific functions. The functions `pnorm()`, `pf()`, `pt()`, `pchisq()`, and `pexp()` computes the cumulative probabilities for Normal, F, t, Chi Squared, and Exponential distributions, respectively. Appropriate distribution parameters must be provided for each function. Similarly, `qnorm()`, `qf()`, `qt()`, `qchisq()`, and `qexp()` compute the inverse cumulative probabilities for Normal, F, t, Chi Squared, and Exponent [...]
+Alternative to `cdf()` and `icdf()`, users can also use distribution-specific functions. The functions `pnorm()`, `pbinomial()`, `pf()`, `pt()`, `pchisq()`, and `pexp()` computes the cumulative probabilities for Normal, Binomial, F, t, Chi Squared, and Exponential distributions, respectively. Appropriate distribution parameters must be provided for each function. Similarly, `qnorm()`, `qbinomial()`, `qf()`, `qt()`, `qchisq()`, and `qexp()` compute the inverse cumulative probabilities for [...]
 
 Following pairs of DML statements are equivalent.
 
@@ -771,6 +772,10 @@ Following pairs of DML statements are equivalent.
 is same as
 `p=pnorm(target=q, mean=1.5, sd=2);`
 
+`p = cdf(target=q, dist="binomial", trials=20, p=0.25);`
+is same as
+`p=pbinomial(target=q, trials=20, p=0.25);`
+
 `p = cdf(target=q, dist="exp", rate=5);`
 is same as
 `pexp(target=q,rate=5);`
@@ -801,6 +806,10 @@ Examples of icdf():
 is same as
 `q=qnorm(target=p, mean=0,sd=1);`
 
+`q=icdf(target=p, dist="binomial", trials=20, p=0.25);`
+is same as
+`q=qbinomial(target=p, trials=20, p=0.25);`
+
 `q=icdf(target=p, dist="exp");`
 is same as
 `q=qexp(target=p, rate=1);`

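To round out the new binomial builtins shown above, here is a short DML sketch that mirrors the documented equivalences; the target and parameter values (5, 0.9, trials=20, p=0.25) are arbitrary and chosen only for illustration.

    # cumulative probability P[X <= 5] for X ~ Binomial(trials=20, p=0.25)
    p1 = cdf(target=5, dist="binomial", trials=20, p=0.25)
    p2 = pbinomial(target=5, trials=20, p=0.25)       # equivalent shorthand

    # inverse cdf: the quantile q for cumulative probability 0.9
    q1 = icdf(target=0.9, dist="binomial", trials=20, p=0.25)
    q2 = qbinomial(target=0.9, trials=20, p=0.25)     # equivalent shorthand

    print("p1=" + p1 + ", q1=" + q1)
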

[systemml] 04/05: [SYSTEMML-2520] Add documentation search with Algolia service

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git

commit 834ee0465777e4ac9288b81ff2cd4f0244ecfa37
Author: Janardhan <ja...@gmail.com>
AuthorDate: Tue Mar 19 13:08:10 2019 -0700

    [SYSTEMML-2520] Add documentation search with Algolia service
    
    Algolia is an API-based service that indexes the documentation every 24 hours.
    - When a keyword is queried, the results are rendered in a dropdown.
    
    Also fixes the navigation header dropdown on iPhone and the collapsed header
    when the window is resized on normal screens.
    
    Closes #855.
---
 _layouts/global.html | 31 ++++++++++++++++++++++++++++++-
 css/main.css         | 30 +++++++++++++++++++++++++++---
 2 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/_layouts/global.html b/_layouts/global.html
index 4286c9c..734b2a0 100644
--- a/_layouts/global.html
+++ b/_layouts/global.html
@@ -15,10 +15,13 @@
         <link rel="stylesheet" href="css/main.css">
         <link rel="stylesheet" href="css/pygments-default.css">
         <link rel="shortcut icon" href="img/favicon.png">
+        <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.css" /> 
     </head>
     <body>
         <!--[if lt IE 7]>
-            <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
+            <p class="chromeframe">The present browser may not be up-to-date. <a href="http://browsehappy.com/">
+                Please consider upgrading to the latest version</a> or <a href="http://www.google.com/chromeframe/?redirect=true">
+                    install Google Chrome Frame</a> for a better browsing experience.</p>
         <![endif]-->
 
         <header class="navbar navbar-default navbar-fixed-top" id="topbar">
@@ -93,6 +96,16 @@
                                 {% endif %}
                             </ul>
                         </li>
+                        <!-- How Algolia search works? 
+                        * 1. This service runs the crawler on the docs every 24 hrs and creates an index.
+                        * 2. When the user types a keyword into this input with `id="s-bar"`,
+                             a. the keyword is looked up by the javascript functions loaded from the CDN,
+                             b. and related items populate a nicely formatted dropdown whose styling also comes from the CDN.
+                        * 3. When the user clicks an item of interest in the dropdown, they land on the anchor
+                             link of that item.
+
+                        -->
+                        <li><input id="s-bar" placeholder="Search Docs.." style="margin-top: 20px;"></li>
                     </ul>
                 </nav>
             </div>
@@ -254,5 +267,21 @@
                 d.getElementsByTagName('head')[0].appendChild(script);
             }(document));
         </script>
+        <!-- Algolia search section -->
+        <script type="text/javascript" src="https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.js"></script>
+        <script>
+            // Crawler configuration for the search indexing is available at:
+            // https://github.com/algolia/docsearch-configs/blob/master/configs/apache_systemml.json
+
+            docsearch({ 
+                apiKey: '78c19564c220d4642a41197baae304ef', 
+                indexName: 'apache_systemml', 
+                inputSelector: "#s-bar", 
+                // For custom styling for the dropdown, please set debug to true
+                // so that the dropdown won't disappear when the inspect tools are 
+                // open.
+                debug: false 
+            });
+        </script>        
     </body>
 </html>
diff --git a/css/main.css b/css/main.css
index 8a7426b..3dd758b 100644
--- a/css/main.css
+++ b/css/main.css
@@ -61,6 +61,7 @@ h1, h2, h3, h4, h5, h6 {
 pre {
   background-color: #FFF
 }
+
 /* Branding */
 .brand {
   font-weight: normal !important;
@@ -81,7 +82,7 @@ img.logo {
 /* Navigation Bar */
 .navbar {
   background-color: rgba(0, 0, 0, 0.9);
-  height: 68px;
+  /*height: 68px;*/
 }
 
 .navbar-brand {
@@ -96,12 +97,28 @@ img.logo {
   height: 100%;
 }
 
+.navbar-collapse {
+  /*height: 67px !important;*/
+  background: rgba(0,0,0,0);
+}
+
 .navbar-collapse.collapse {
-  height: 67px !important;
+  background: rgba(0, 0, 0, 0);
+  border-top: 0px;
+}
+
+.navbar-collapse.collapsing {
+  background: rgba(0, 0, 0, 0);
+  border-top: 0px;
+}
+
+.navbar-toggle {
+ border-radius: 1px;
 }
 
 .navbar-header {
-  padding-top: 10px;
+  padding-top: 0px;
+  padding-bottom: 10px;
 }
 
 .navbar .container {
@@ -159,6 +176,13 @@ img.logo {
 }
 
 /**
+ * Search bar
+ */
+input#s-bar {
+  margin-left: 10px;
+}
+
+/**
  * MathJax (embedded latex formulas)
  */
 .MathJax .mo { color: inherit }


[systemml] 05/05: [SYSTEMML-2520][DOC] Specified the steps required to update Crawler configuration for the search indexing in our release documentation.

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

niketanpansare pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git

commit ff681d850806ac0d1043345fea6a620365c59b15
Author: Niketan Pansare <np...@us.ibm.com>
AuthorDate: Tue Mar 19 13:22:18 2019 -0700

    [SYSTEMML-2520][DOC] Specified the steps required to update Crawler configuration for the search indexing in our release documentation.
---
 release-process.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/release-process.md b/release-process.md
index dec6b15..8ef4693 100644
--- a/release-process.md
+++ b/release-process.md
@@ -494,3 +494,11 @@ Commit the update to `documentation.html` to publish the website update.
 
 The versioned project documentation is now deployed to the main website, and the
 [Documentation Page](http://systemml.apache.org/documentation) contains a link to the versioned documentation.
+
+## Update Crawler configuration for the search indexing
+
+Create a PR or an issue to update the version number in the crawler configuration. 
+Please see the `start_urls` tag in the file [https://github.com/algolia/docsearch-configs/blob/master/configs/apache_systemml.json](https://github.com/algolia/docsearch-configs/blob/master/configs/apache_systemml.json).
+If the Algolia team provides us with updated `apiKey` or `indexName` credentials, then please update the corresponding entries in the file 
+[https://github.com/apache/systemml/blob/master/docs/_layouts/global.html](https://github.com/apache/systemml/blob/master/docs/_layouts/global.html) 
+(look for the `Algolia search section` comment in the previously mentioned HTML file).
\ No newline at end of file