Posted to commits@systemml.apache.org by ja...@apache.org on 2020/04/13 17:23:11 UTC

[systemml] branch gh-pages updated: [MINOR][DOC] Name Refactor from SystemML to SystemDS

This is an automated email from the ASF dual-hosted git repository.

janardhan pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/systemml.git


The following commit(s) were added to refs/heads/gh-pages by this push:
     new 9727f50  [MINOR][DOC] Name Refactor from SystemML to SystemDS
9727f50 is described below

commit 9727f50b2925d60c7a9ff15307d231d2b824b155
Author: Sebastian <ba...@tugraz.at>
AuthorDate: Tue Apr 7 13:14:22 2020 +0200

    [MINOR][DOC] Name Refactor from SystemML to SystemDS
    
    - Chosen version for the documentation is 2.0.0-SNAPSHOT
    - Rename developer-tools-systemml to developer-tools-systemds
    - Contributing to SystemDS refactor
      - Note that we have an additional contributing guide on master
    
    LaTeX changes:

    - alg-ref LaTeX files (possibly a separate pull request)
    - gitignore for TeX build artifacts in alg-ref
    - Fixed a comment in the main LaTeX file
    
    Closes #877.
---
 .gitignore                                         |   4 +-
 _config.yml                                        |   2 +-
 _layouts/global.html                               |  18 +--
 alg-ref/.gitignore                                 |   6 +
 alg-ref/BinarySVM.tex                              |   4 +-
 alg-ref/Cox.tex                                    |   2 +-
 alg-ref/DecisionTrees.tex                          |   2 +-
 alg-ref/DescriptiveBivarStats.tex                  |   2 +-
 alg-ref/DescriptiveStratStats.tex                  |   2 +-
 alg-ref/DescriptiveUnivarStats.tex                 |   2 +-
 alg-ref/GLM.tex                                    |   2 +-
 alg-ref/GLMpredict.tex                             |   2 +-
 alg-ref/KaplanMeier.tex                            |   2 +-
 alg-ref/Kmeans.tex                                 |   4 +-
 alg-ref/LinReg.tex                                 |   2 +-
 alg-ref/LogReg.tex                                 |   2 +-
 alg-ref/MultiSVM.tex                               |   4 +-
 alg-ref/NaiveBayes.tex                             |   4 +-
 alg-ref/PCA.tex                                    |   4 +-
 alg-ref/RandomForest.tex                           |   2 +-
 alg-ref/StepGLM.tex                                |   2 +-
 alg-ref/StepLinRegDS.tex                           |   2 +-
 ...rence.bib => SystemDS_Algorithms_Reference.bib} |   0
 alg-ref/SystemDS_Algorithms_Reference.pdf          | Bin 0 -> 639517 bytes
 ...rence.tex => SystemDS_Algorithms_Reference.tex} |  48 ++++---
 alg-ref/SystemML_Algorithms_Reference.pdf          | Bin 1266909 -> 0 bytes
 algorithms-bibliography.md                         |   4 +-
 algorithms-classification.md                       | 148 ++++++++++-----------
 algorithms-clustering.md                           |  50 +++----
 algorithms-descriptive-statistics.md               |  48 +++----
 algorithms-factorization-machines.md               |  16 +--
 algorithms-matrix-factorization.md                 |  64 ++++-----
 algorithms-reference.md                            |   6 +-
 algorithms-regression.md                           | 122 ++++++++---------
 algorithms-survival-analysis.md                    |  56 ++++----
 beginners-guide-caffe2dml.md                       |   2 +-
 beginners-guide-keras2dml.md                       |   8 +-
 beginners-guide-python.md                          |  28 ++--
 beginners-guide-to-dml-and-pydml.md                |  26 ++--
 ...g-to-systemml.md => contributing-to-systemds.md |  44 +++---
 debugger-guide.md                                  | 126 +++++++++---------
 deep-learning.md                                   |  18 +--
 devdocs/MatrixMultiplicationOperators.txt          |   4 +-
 devdocs/deep-learning.md                           |  10 +-
 devdocs/gpu-backend.md                             |   6 +-
 devdocs/python_api.html                            |  34 ++---
 ...ools-systemml.md => developer-tools-systemds.md |  20 +--
 dml-language-reference.md                          |  60 ++++-----
 engine-dev-guide.md                                |  34 ++---
 gpu.md                                             |  36 ++---
 hadoop-batch-mode.md                               | 132 +++++++++---------
 index.md                                           |  54 ++++----
 jmlc.md                                            |  34 ++---
 lang-ref/README_HADOOP_CONFIG.txt                  |  18 +--
 native-backend.md                                  |  32 ++---
 python-performance-test.md                         |   8 +-
 python-reference.md                                |  40 +++---
 reference-guide-caffe2dml.md                       |  20 +--
 reference-guide-keras2dml.md                       |   6 +-
 release-creation-process.md                        |  14 +-
 release-process.md                                 |  46 +++----
 spark-batch-mode.md                                |  32 ++---
 spark-mlcontext-programming-guide.md               | 116 ++++++++--------
 standalone-guide.md                                |  70 +++++-----
 troubleshooting-guide.md                           |  18 +--
 65 files changed, 869 insertions(+), 865 deletions(-)
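
A quick way to audit a bulk rename like this one is to grep the gh-pages tree
for leftover occurrences of the old name. The following is only an
illustrative sketch (GNU grep assumed); links such as
http://systemml.apache.org/ and https://github.com/apache/systemml are kept
as-is in this commit, so they are excluded from the check:

    # List remaining "SystemML" mentions in the Markdown docs, ignoring
    # URLs that still point at the existing SystemML infrastructure.
    grep -rn --include='*.md' 'SystemML' . \
      | grep -v 'systemml.apache.org' \
      | grep -v 'github.com/apache/systemml'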

diff --git a/.gitignore b/.gitignore
index 49c9ab1..6361115 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,7 +9,7 @@ Gemfile.lock
 .settings/
 .vscode/
 
-## Inproper copy when switching branch
+## Improper copy when switching branch
 
 target/
-src/ 
\ No newline at end of file
+src/ 
diff --git a/_config.yml b/_config.yml
index d2b483c..5cf4f2f 100644
--- a/_config.yml
+++ b/_config.yml
@@ -15,7 +15,7 @@ exclude:
   - lang-ref
 
 # These allow the documentation to be updated with newer releases
-SYSTEMML_VERSION: 1.3.0-SNAPSHOT
+SYSTEMDS_VERSION: 2.0.0-SNAPSHOT
 
 # if 'analytics_on' is true, analytics section will be rendered on the HTML pages
 analytics_on: true
diff --git a/_layouts/global.html b/_layouts/global.html
index 734b2a0..859e396 100644
--- a/_layouts/global.html
+++ b/_layouts/global.html
@@ -4,7 +4,7 @@
 <!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
 <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
     <head>
-        <title>{{ page.title }} - SystemML {{site.SYSTEMML_VERSION}}</title>
+        <title>{{ page.title }} - SystemDS {{site.SYSTEMML_VERSION}}</title>
         <meta charset="utf-8">
         <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
         {% if page.description %}
@@ -28,10 +28,10 @@
             <div class="container">
                 <div class="navbar-header">
                     <div class="navbar-brand brand projectlogo">
-                        <a href="http://systemml.apache.org/"><img class="logo" src="img/systemml-logo.png" alt="Apache SystemML" title="Apache SystemML"/></a>
+                        <a href="http://systemml.apache.org/"><img class="logo" src="img/systemml-logo.png" alt="Apache SystemDS" title="Apache SystemDS"/></a>
                     </div>
                     <div class="navbar-brand brand projecttitle">
-                        <a href="http://systemml.apache.org/">Apache SystemML<sup id="trademark">™</sup></a><br/>
+                        <a href="http://systemml.apache.org/">Apache SystemDS<sup id="trademark">™</sup></a><br/>
                         <span class="version">{{site.SYSTEMML_VERSION}}</span>
                     </div>
                     <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target=".navbar-collapse">
@@ -48,8 +48,8 @@
                         <li class="dropdown">
                             <a href="#" class="dropdown-toggle" data-toggle="dropdown">Documentation<b class="caret"></b></a>
                             <ul class="dropdown-menu" role="menu">
-                                <li><b>Running SystemML:</b></li>
-                                <li><a href="https://github.com/apache/systemml">SystemML GitHub README</a></li>
+                                <li><b>Running SystemDS:</b></li>
+                                <li><a href="https://github.com/apache/systemml">SystemDS GitHub README</a></li>
                                 <li><a href="spark-mlcontext-programming-guide.html">Spark MLContext</a></li>
                                 <li><a href="spark-batch-mode.html">Spark Batch Mode</a>
                                 <li><a href="hadoop-batch-mode.html">Hadoop Batch Mode</a>
@@ -67,10 +67,10 @@
                                 <li class="divider"></li>
                                 <li><b>Tools:</b></li>
                                 <li><a href="debugger-guide.html">Debugger Guide</a></li>
-                                <li><a href="developer-tools-systemml.html">IDE Guide</a></li>
+                                <li><a href="developer-tools-systemds.html">IDE Guide</a></li>
                                 <li class="divider"></li>
                                 <li><b>Other:</b></li>
-                                <li><a href="contributing-to-systemml.html">Contributing to SystemML</a></li>
+                                <li><a href="contributing-to-systemds.html">Contributing to SystemDS</a></li>
                                 <li><a href="engine-dev-guide.html">Engine Developer Guide</a></li>
                                 <li><a href="troubleshooting-guide.html">Troubleshooting Guide</a></li>
                                 <li><a href="release-process.html">Release Process</a></li>
@@ -89,7 +89,7 @@
                             <a href="#" class="dropdown-toggle" data-toggle="dropdown">Issues<b class="caret"></b></a>
                             <ul class="dropdown-menu" role="menu">
                                 <li><b>JIRA:</b></li>
-                                <li><a href="https://issues.apache.org/jira/browse/SYSTEMML">SystemML JIRA</a></li>
+                                <li><a href="https://issues.apache.org/jira/browse/SYSTEMML">SystemDS JIRA</a></li>
                                 {% if site.FEEDBACK_LINKS == true %}
                                 <li><a href="#" id="feedback-link-improvement" title="Click to file a JIRA improvement about this page.">Improve this Page</a></li>
                                 <li><a href="#" id="feedback-link-bug" title="Click to file a JIRA bug about this page.">Fix this Page</a></li>
@@ -127,7 +127,7 @@
             <!-- Use GET method so that if a user is not already logged on to JIRA, after the user logs in, the
                  user will be redirected to the form that pre-populates the fields based on the URL parameters -->
             <form name="feedback" action="https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa" method="GET" id="feedback">
-                <input name="pid" type="hidden" value="12319522" /> <!-- SystemML Project ID -->
+                <input name="pid" type="hidden" value="12319522" /> <!-- SystemDS Project ID -->
                 <input name="priority" type="hidden" value="4" /> <!-- Minor -->
                 <input name="components" type="hidden" value="12328679" /> <!-- Documentation -->
                 <input id="feedback-issuetype" name="issuetype" type="hidden" value="" />
diff --git a/alg-ref/.gitignore b/alg-ref/.gitignore
new file mode 100644
index 0000000..2b1359d
--- /dev/null
+++ b/alg-ref/.gitignore
@@ -0,0 +1,6 @@
+*.aux
+*.fdb_latexmk
+*.fls
+*.log
+*.out
+*.synctex.gz
\ No newline at end of file
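
The extensions ignored above (.aux, .fls, .log, .out, .synctex.gz, and the
latexmk database .fdb_latexmk) are the usual byproducts of building the
algorithms reference with latexmk. A minimal sketch, assuming latexmk and a
TeX distribution are installed:

    # Rebuild the reference PDF from alg-ref/; the ignored artifacts are
    # produced alongside SystemDS_Algorithms_Reference.pdf.
    cd alg-ref
    latexmk -pdf SystemDS_Algorithms_Reference.tex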
diff --git a/alg-ref/BinarySVM.tex b/alg-ref/BinarySVM.tex
index 7ff5b06..783539e 100644
--- a/alg-ref/BinarySVM.tex
+++ b/alg-ref/BinarySVM.tex
@@ -141,7 +141,7 @@ accuracy and confusion matrix in the output format specified.
 \noindent{\bf Examples}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f l2-svm.dml -nvargs X=/user/biadmin/X.mtx 
+hadoop jar SystemDS.jar -f l2-svm.dml -nvargs X=/user/biadmin/X.mtx 
                                               Y=/user/biadmin/y.mtx 
                                               icpt=0 tol=0.001 fmt=csv
                                               reg=1.0 maxiter=100 
@@ -150,7 +150,7 @@ hadoop jar SystemML.jar -f l2-svm.dml -nvargs X=/user/biadmin/X.mtx
 \end{verbatim}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f l2-svm-predict.dml -nvargs X=/user/biadmin/X.mtx 
+hadoop jar SystemDS.jar -f l2-svm-predict.dml -nvargs X=/user/biadmin/X.mtx 
                                                       Y=/user/biadmin/y.mtx 
                                                       icpt=0 fmt=csv
                                                       model=/user/biadmin/weights.csv
diff --git a/alg-ref/Cox.tex b/alg-ref/Cox.tex
index a355df7..756482f 100644
--- a/alg-ref/Cox.tex
+++ b/alg-ref/Cox.tex
@@ -98,7 +98,7 @@ Maximum number of inner (conjugate gradient) iterations, or~0 if no maximum
 limit provided
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/DecisionTrees.tex b/alg-ref/DecisionTrees.tex
index cea26a4..a69a0eb 100644
--- a/alg-ref/DecisionTrees.tex
+++ b/alg-ref/DecisionTrees.tex
@@ -115,7 +115,7 @@ tree in parallel.\\
 	Location (on HDFS) to write the mappings from the categorical feature-ids to the global feature-ids in $X$ (see below for details). Note that this argument is optional.
 	\item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 	Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-	see read/write functions in SystemML Language Reference for details.
+	see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/DescriptiveBivarStats.tex b/alg-ref/DescriptiveBivarStats.tex
index a2d3db1..ccfdede 100644
--- a/alg-ref/DescriptiveBivarStats.tex
+++ b/alg-ref/DescriptiveBivarStats.tex
@@ -98,7 +98,7 @@ statistics will be stored.  The matrices' file names and format are defined
 in Table~\ref{table:bivars}.
 % \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 % Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-% see read/write functions in SystemML Language Reference for details.
+% see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 \begin{table}[t]\hfil
diff --git a/alg-ref/DescriptiveStratStats.tex b/alg-ref/DescriptiveStratStats.tex
index be0cffd..954899a 100644
--- a/alg-ref/DescriptiveStratStats.tex
+++ b/alg-ref/DescriptiveStratStats.tex
@@ -120,7 +120,7 @@ The index number of the stratum column in~$S$
 Location to store the output matrix defined in Table~\ref{table:stratoutput}
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/DescriptiveUnivarStats.tex b/alg-ref/DescriptiveUnivarStats.tex
index 5838e3e..0d91456 100644
--- a/alg-ref/DescriptiveUnivarStats.tex
+++ b/alg-ref/DescriptiveUnivarStats.tex
@@ -68,7 +68,7 @@ will be stored.  The format of the output matrix is defined by
 Table~\ref{table:univars}.
 % \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 % Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-% see read/write functions in SystemML Language Reference for details.
+% see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 \begin{table}[t]\hfil
diff --git a/alg-ref/GLM.tex b/alg-ref/GLM.tex
index 8555a5b..8f9ed85 100644
--- a/alg-ref/GLM.tex
+++ b/alg-ref/GLM.tex
@@ -98,7 +98,7 @@ Location to store the estimated regression parameters (the $\beta_j$'s), with th
 intercept parameter~$\beta_0$ at position {\tt B[}$m\,{+}\,1$, {\tt 1]} if available
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \item[{\tt O}:] (default:\mbox{ }{\tt " "})
 Location to write certain summary statistics described in Table~\ref{table:GLM:stats},
 by default it is standard output.
diff --git a/alg-ref/GLMpredict.tex b/alg-ref/GLMpredict.tex
index ceb249d..71c3f15 100644
--- a/alg-ref/GLMpredict.tex
+++ b/alg-ref/GLMpredict.tex
@@ -168,7 +168,7 @@ function; {\tt lpow=0.0} gives the log link $\eta = \log\mu$.  Common power link
 Dispersion value, when available; must be positive
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix {\tt M} file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/KaplanMeier.tex b/alg-ref/KaplanMeier.tex
index 6ea6fbc..f6a378d 100644
--- a/alg-ref/KaplanMeier.tex
+++ b/alg-ref/KaplanMeier.tex
@@ -87,7 +87,7 @@ If survival data for multiple groups is available specifies which test to perfor
 survival data across multiple groups: "none", "log-rank" or "wilcoxon" test
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/Kmeans.tex b/alg-ref/Kmeans.tex
index 2b5492c..42ed50d 100644
--- a/alg-ref/Kmeans.tex
+++ b/alg-ref/Kmeans.tex
@@ -156,7 +156,7 @@ records to clusters (defined by the output centroids)
 {\tt 0} = do not write matrix~$Y$,  {\tt 1} = write~$Y$
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \item[{\tt verb}:] (default:\mbox{ }{\tt 0})
 {\tt 0} = do not print per-iteration statistics for each run, {\tt 1} = print them
 (the ``verbose'' option)
@@ -182,7 +182,7 @@ NOTE: No prior correspondence is assumed between the predicted
 cluster labels and the externally specified categories
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format for {\tt prY}, such as {\tt text}, {\tt mm},
-or {\tt csv}; see read/write functions in SystemML Language Reference
+or {\tt csv}; see read/write functions in SystemDS Language Reference
 for details
 \item[{\tt O}:] (default:\mbox{ }{\tt " "})
 Location to write the output statistics defined in 
diff --git a/alg-ref/LinReg.tex b/alg-ref/LinReg.tex
index 67273c2..9184cc3 100644
--- a/alg-ref/LinReg.tex
+++ b/alg-ref/LinReg.tex
@@ -125,7 +125,7 @@ Maximum number of conjugate gradient iterations, or~0 if no maximum
 limit provided
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/LogReg.tex b/alg-ref/LogReg.tex
index 43d4e15..ea5aaab 100644
--- a/alg-ref/LogReg.tex
+++ b/alg-ref/LogReg.tex
@@ -157,7 +157,7 @@ Maximum number of inner (conjugate gradient) iterations, or~0 if no maximum
 limit provided
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/MultiSVM.tex b/alg-ref/MultiSVM.tex
index 87880a9..c9244d2 100644
--- a/alg-ref/MultiSVM.tex
+++ b/alg-ref/MultiSVM.tex
@@ -139,7 +139,7 @@ of scores, accuracy and confusion matrix in the output format specified.
 %%
 \noindent{\bf Examples}
 \begin{verbatim}
-hadoop jar SystemML.jar -f m-svm.dml -nvargs X=/user/biadmin/X.mtx 
+hadoop jar SystemDS.jar -f m-svm.dml -nvargs X=/user/biadmin/X.mtx 
                                              Y=/user/biadmin/y.mtx 
                                              icpt=0 tol=0.001
                                              reg=1.0 maxiter=100 fmt=csv 
@@ -148,7 +148,7 @@ hadoop jar SystemML.jar -f m-svm.dml -nvargs X=/user/biadmin/X.mtx
 \end{verbatim}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f m-svm-predict.dml -nvargs X=/user/biadmin/X.mtx 
+hadoop jar SystemDS.jar -f m-svm-predict.dml -nvargs X=/user/biadmin/X.mtx 
                                                      Y=/user/biadmin/y.mtx 
                                                      icpt=0 fmt=csv
                                                      model=/user/biadmin/weights.csv
diff --git a/alg-ref/NaiveBayes.tex b/alg-ref/NaiveBayes.tex
index b5f721d..4ffc8b8 100644
--- a/alg-ref/NaiveBayes.tex
+++ b/alg-ref/NaiveBayes.tex
@@ -125,7 +125,7 @@ output format specified.
 \noindent{\bf Examples}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f naive-bayes.dml -nvargs 
+hadoop jar SystemDS.jar -f naive-bayes.dml -nvargs 
                            X=/user/biadmin/X.mtx 
                            Y=/user/biadmin/y.mtx 
                            laplace=1 fmt=csv
@@ -135,7 +135,7 @@ hadoop jar SystemML.jar -f naive-bayes.dml -nvargs
 \end{verbatim}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f naive-bayes-predict.dml -nvargs 
+hadoop jar SystemDS.jar -f naive-bayes-predict.dml -nvargs 
                            X=/user/biadmin/X.mtx 
                            Y=/user/biadmin/y.mtx 
                            prior=/user/biadmin/prior.csv
diff --git a/alg-ref/PCA.tex b/alg-ref/PCA.tex
index cef750e..4e5cb91 100644
--- a/alg-ref/PCA.tex
+++ b/alg-ref/PCA.tex
@@ -119,7 +119,7 @@ When MODEL is provided, INPUT data is rotated according to the coordinate system
 \noindent{\bf Examples}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f PCA.dml -nvargs 
+hadoop jar SystemDS.jar -f PCA.dml -nvargs 
             INPUT=/user/biuser/input.mtx  K=10
             CENTER=1  SCALE=1
             OFMT=csv PROJDATA=1
@@ -128,7 +128,7 @@ hadoop jar SystemML.jar -f PCA.dml -nvargs
 \end{verbatim}
 
 \begin{verbatim}
-hadoop jar SystemML.jar -f PCA.dml -nvargs 
+hadoop jar SystemDS.jar -f PCA.dml -nvargs 
             INPUT=/user/biuser/test_input.mtx  K=10
             CENTER=1  SCALE=1
             OFMT=csv PROJDATA=1
diff --git a/alg-ref/RandomForest.tex b/alg-ref/RandomForest.tex
index f9b47f3..26766d0 100644
--- a/alg-ref/RandomForest.tex
+++ b/alg-ref/RandomForest.tex
@@ -129,7 +129,7 @@ This implementation is well-suited to handle large-scale data and builds a rando
 	Location (on HDFS) to write the mappings from the categorical feature-ids to the global feature-ids in $X$ (see below for details). Note that this argument is optional.
 	\item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 	Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-	see read/write functions in SystemML Language Reference for details.
+	see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/StepGLM.tex b/alg-ref/StepGLM.tex
index 3869990..95c4137 100644
--- a/alg-ref/StepGLM.tex
+++ b/alg-ref/StepGLM.tex
@@ -96,7 +96,7 @@ Our stepwise generalized linear regression script selects a model based on the A
 	no further features are being checked and the algorithm stops.
 	\item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 	Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-	see read/write functions in SystemML Language Reference for details.
+	see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/StepLinRegDS.tex b/alg-ref/StepLinRegDS.tex
index 8c29fb1..8272af3 100644
--- a/alg-ref/StepLinRegDS.tex
+++ b/alg-ref/StepLinRegDS.tex
@@ -72,7 +72,7 @@ Threshold to stop the algorithm: if the decrease in the value of the AIC falls b
 no further features are being checked and the algorithm stops.
 \item[{\tt fmt}:] (default:\mbox{ }{\tt "text"})
 Matrix file output format, such as {\tt text}, {\tt mm}, or {\tt csv};
-see read/write functions in SystemML Language Reference for details.
+see read/write functions in SystemDS Language Reference for details.
 \end{Description}
 
 
diff --git a/alg-ref/SystemML_Algorithms_Reference.bib b/alg-ref/SystemDS_Algorithms_Reference.bib
similarity index 100%
rename from alg-ref/SystemML_Algorithms_Reference.bib
rename to alg-ref/SystemDS_Algorithms_Reference.bib
diff --git a/alg-ref/SystemDS_Algorithms_Reference.pdf b/alg-ref/SystemDS_Algorithms_Reference.pdf
new file mode 100644
index 0000000..2d9f401
Binary files /dev/null and b/alg-ref/SystemDS_Algorithms_Reference.pdf differ
diff --git a/alg-ref/SystemML_Algorithms_Reference.tex b/alg-ref/SystemDS_Algorithms_Reference.tex
similarity index 80%
rename from alg-ref/SystemML_Algorithms_Reference.tex
rename to alg-ref/SystemDS_Algorithms_Reference.tex
index 75308c9..e5ae89e 100644
--- a/alg-ref/SystemML_Algorithms_Reference.tex
+++ b/alg-ref/SystemDS_Algorithms_Reference.tex
@@ -1,23 +1,21 @@
-\begin{comment}
 
- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
+%  Licensed to the Apache Software Foundation (ASF) under one
+%  or more contributor license agreements.  See the NOTICE file
+%  distributed with this work for additional information
+%  regarding copyright ownership.  The ASF licenses this file
+%  to you under the Apache License, Version 2.0 (the
+%  "License"); you may not use this file except in compliance
+%  with the License.  You may obtain a copy of the License at
+% 
+%    http://www.apache.org/licenses/LICENSE-2.0
+% 
+%  Unless required by applicable law or agreed to in writing,
+%  software distributed under the License is distributed on an
+%  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+%  KIND, either express or implied.  See the License for the
+%  specific language governing permissions and limitations
+%  under the License.
 
-   http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied.  See the License for the
- specific language governing permissions and limitations
- under the License.
-
-\end{comment}
 
 \documentclass[letter]{article}
 \usepackage{graphicx,amsmath,amssymb,amsthm,subfigure,color,url,multirow,rotating,comment}
@@ -33,8 +31,8 @@
     pdfmenubar=true,        % show Acrobat's menu?
     pdffitwindow=true,      % window fit to page when opened
     pdfstartview={FitV},    % fits the width of the page to the window
-    pdftitle={SystemML Algorithms Reference},    % title
-    pdfauthor={SystemML Team}, % author
+    pdftitle={SystemDS Algorithms Reference},    % title
+    pdfauthor={SystemDS Team}, % author
     pdfsubject={Documentation},   % subject of the document
     pdfkeywords={},         % list of keywords
     pdfnewwindow=true,      % links in new window
@@ -61,8 +59,8 @@
 }{\end{description}\vspace{-0.5ex}}
 
 
-\newcommand{\SystemML}{\texttt{SystemML} }
-\newcommand{\hml}{\texttt{hadoop jar SystemML.jar} }
+\newcommand{\SystemDS}{\texttt{SystemDS} }
+\newcommand{\hml}{\texttt{hadoop jar SystemDS.jar} }
 \newcommand{\pxp}{\mathbin{\texttt{\%\textasteriskcentered\%}}}
 \newcommand{\todo}[1]{{{\color{red}TODO: #1}}}
 \newcommand{\Normal}{\ensuremath{\mathop{\mathrm{Normal}}\nolimits}}
@@ -83,7 +81,7 @@
 % header
 %%%%%%%%%%%%%%%%%%%%%
 
-\title{\LARGE{{\SystemML Algorithms Reference}}} 
+\title{\LARGE{{\SystemDS Algorithms Reference}}} 
 \date{\today}
 
 %%%%%%%%%%%%%%%%%%%%%
@@ -142,7 +140,7 @@
 \section{Matrix Factorization}
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
-\input{pca}
+\input{PCA}
 
 \input{ALS.tex}
 
@@ -163,7 +161,7 @@
 
 \bibliographystyle{abbrv}
 
-\bibliography{SystemML_ALgorithms_Reference}
+\bibliography{SystemDS_ALgorithms_Reference}
 
 	
 %%%%%%%%%%%%%%%%%%%%%
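
Note that the \bibliography line carries over a pre-existing capitalization
quirk ("ALgorithms"): on a case-sensitive filesystem, BibTeX would look for
SystemDS_ALgorithms_Reference.bib, while the file renamed above is
SystemDS_Algorithms_Reference.bib. A hedged sanity check:

    # Compare the \bibliography argument against the renamed .bib file;
    # the two names must match exactly on case-sensitive filesystems.
    grep -n 'bibliography{' alg-ref/SystemDS_Algorithms_Reference.tex
    ls alg-ref/SystemDS_Algorithms_Reference.bib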
diff --git a/alg-ref/SystemML_Algorithms_Reference.pdf b/alg-ref/SystemML_Algorithms_Reference.pdf
deleted file mode 100644
index 4087ba5..0000000
Binary files a/alg-ref/SystemML_Algorithms_Reference.pdf and /dev/null differ
diff --git a/algorithms-bibliography.md b/algorithms-bibliography.md
index e18b4e8..b77ed1d 100644
--- a/algorithms-bibliography.md
+++ b/algorithms-bibliography.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Bibliography
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Bibliography
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
diff --git a/algorithms-classification.md b/algorithms-classification.md
index 62e40e7..97b0d3d 100644
--- a/algorithms-classification.md
+++ b/algorithms-classification.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Classification
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Classification
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -147,7 +147,7 @@ val prediction = model.transform(X_test_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f MultiLogReg.dml
+    hadoop jar SystemDS.jar -f MultiLogReg.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     B=<file>
@@ -163,9 +163,9 @@ val prediction = model.transform(X_test_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f MultiLogReg.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -227,7 +227,7 @@ if no maximum limit provided
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 Please see [mllearn documentation](https://apache.github.io/systemml/python-reference#mllearn-api) for
 more details on the Python API. 
@@ -318,7 +318,7 @@ prediction.show()
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f MultiLogReg.dml
+    hadoop jar SystemDS.jar -f MultiLogReg.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     B=/user/ml/B.mtx
@@ -334,9 +334,9 @@ prediction.show()
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f MultiLogReg.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -515,7 +515,7 @@ val model = svm.fit(X_train_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f l2-svm.dml
+    hadoop jar SystemDS.jar -f l2-svm.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     icpt=[int]
@@ -530,9 +530,9 @@ val model = svm.fit(X_train_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f l2-svm.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -563,7 +563,7 @@ val prediction = model.transform(X_test_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f l2-svm-predict.dml
+    hadoop jar SystemDS.jar -f l2-svm-predict.dml
                             -nvargs X=<file>
                                     Y=[file]
                                     icpt=[int]
@@ -577,9 +577,9 @@ val prediction = model.transform(X_test_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f l2-svm-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=[file]
@@ -626,7 +626,7 @@ while training.
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 **scores**: Location (on HDFS) to store scores for a held-out test set.
 Note that this is an optional argument.
@@ -646,7 +646,7 @@ more details on the Python API.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f l2-svm.dml
+    hadoop jar SystemDS.jar -f l2-svm.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/y.mtx
                                     icpt=0
@@ -661,9 +661,9 @@ more details on the Python API.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f l2-svm.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/y.mtx
@@ -681,7 +681,7 @@ more details on the Python API.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f l2-svm-predict.dml
+    hadoop jar SystemDS.jar -f l2-svm-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/y.mtx
                                     icpt=0
@@ -695,9 +695,9 @@ more details on the Python API.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f l2-svm-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/y.mtx
@@ -785,7 +785,7 @@ val model = svm.fit(X_train_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f m-svm.dml
+    hadoop jar SystemDS.jar -f m-svm.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     icpt=[int]
@@ -800,9 +800,9 @@ val model = svm.fit(X_train_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f m-svm.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -833,7 +833,7 @@ val prediction = model.transform(X_test_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f m-svm-predict.dml
+    hadoop jar SystemDS.jar -f m-svm-predict.dml
                             -nvargs X=<file>
                                     Y=[file]
                                     icpt=[int]
@@ -847,9 +847,9 @@ val prediction = model.transform(X_test_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f m-svm-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=[file]
@@ -897,7 +897,7 @@ val prediction = model.transform(X_test_df)
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 **scores**: Location (on HDFS) to store scores for a held-out test set.
     Note that this is an optional argument.
@@ -997,7 +997,7 @@ prediction.show()
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f m-svm.dml
+    hadoop jar SystemDS.jar -f m-svm.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/y.mtx
                                     icpt=0
@@ -1012,9 +1012,9 @@ prediction.show()
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f m-svm.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/y.mtx
@@ -1032,7 +1032,7 @@ prediction.show()
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f m-svm-predict.dml
+    hadoop jar SystemDS.jar -f m-svm-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/y.mtx
                                     icpt=0
@@ -1046,9 +1046,9 @@ prediction.show()
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f m-svm-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/y.mtx
@@ -1138,7 +1138,7 @@ val model = nb.fit(X_train_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f naive-bayes.dml
+    hadoop jar SystemDS.jar -f naive-bayes.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     laplace=[double]
@@ -1151,9 +1151,9 @@ val model = nb.fit(X_train_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f naive-bayes.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -1182,7 +1182,7 @@ val prediction = model.transform(X_test_df)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f naive-bayes-predict.dml
+    hadoop jar SystemDS.jar -f naive-bayes-predict.dml
                             -nvargs X=<file>
                                     Y=[file]
                                     prior=<file>
@@ -1196,9 +1196,9 @@ val prediction = model.transform(X_test_df)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f naive-bayes-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=[file]
@@ -1233,7 +1233,7 @@ val prediction = model.transform(X_test_df)
 
 **fmt** (default: `"text"`): Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 **probabilities**: Location (on HDFS) to store class membership
     probabilities for a held-out test set.
@@ -1274,7 +1274,7 @@ metrics.f1_score(newsgroups_test.target, pred, average='weighted')
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f naive-bayes.dml
+    hadoop jar SystemDS.jar -f naive-bayes.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/y.mtx
                                     laplace=1
@@ -1287,9 +1287,9 @@ metrics.f1_score(newsgroups_test.target, pred, average='weighted')
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f naive-bayes.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/y.mtx
@@ -1305,7 +1305,7 @@ metrics.f1_score(newsgroups_test.target, pred, average='weighted')
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f naive-bayes-predict.dml
+    hadoop jar SystemDS.jar -f naive-bayes-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/y.mtx
                                     prior=/user/ml/prior.csv
@@ -1319,9 +1319,9 @@ metrics.f1_score(newsgroups_test.target, pred, average='weighted')
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f naive-bayes-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/y.mtx
@@ -1399,7 +1399,7 @@ implementation is well-suited to handle large-scale data and builds a
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f decision-tree.dml
+    hadoop jar SystemDS.jar -f decision-tree.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     R=[file]
@@ -1418,9 +1418,9 @@ implementation is well-suited to handle large-scale data and builds a
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f decision-tree.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -1442,7 +1442,7 @@ implementation is well-suited to handle large-scale data and builds a
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f decision-tree-predict.dml
+    hadoop jar SystemDS.jar -f decision-tree-predict.dml
                             -nvargs X=<file>
                                     Y=[file]
                                     R=[file]
@@ -1456,9 +1456,9 @@ implementation is well-suited to handle large-scale data and builds a
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f decision-tree-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=[file]
@@ -1531,7 +1531,7 @@ Note that this argument is optional.
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
@@ -1540,7 +1540,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f decision-tree.dml
+    hadoop jar SystemDS.jar -f decision-tree.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     R=/user/ml/R.csv
@@ -1556,9 +1556,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f decision-tree.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -1577,7 +1577,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f decision-tree-predict.dml
+    hadoop jar SystemDS.jar -f decision-tree-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     R=/user/ml/R.csv
@@ -1591,9 +1591,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f decision-tree-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -1804,7 +1804,7 @@ for classification in parallel.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f random-forest.dml
+    hadoop jar SystemDS.jar -f random-forest.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     R=[file]
@@ -1826,9 +1826,9 @@ for classification in parallel.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f random-forest.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -1853,7 +1853,7 @@ for classification in parallel.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f random-forest-predict.dml
+    hadoop jar SystemDS.jar -f random-forest-predict.dml
                             -nvargs X=<file>
                                     Y=[file]
                                     R=[file]
@@ -1869,9 +1869,9 @@ for classification in parallel.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f random-forest-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=[file]
@@ -1966,7 +1966,7 @@ Note that this argument is optional.
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
@@ -1975,7 +1975,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f random-forest.dml
+    hadoop jar SystemDS.jar -f random-forest.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     R=/user/ml/R.csv
@@ -1992,9 +1992,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f random-forest.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -2016,7 +2016,7 @@ To compute predictions:
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f random-forest-predict.dml
+    hadoop jar SystemDS.jar -f random-forest-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     R=/user/ml/R.csv
@@ -2030,9 +2030,9 @@ To compute predictions:
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f random-forest-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
diff --git a/algorithms-clustering.md b/algorithms-clustering.md
index 358a53a..538f53a 100644
--- a/algorithms-clustering.md
+++ b/algorithms-clustering.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Clustering
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Clustering
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -115,7 +115,7 @@ apart is a "false negative" etc.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans.dml
+    hadoop jar SystemDS.jar -f Kmeans.dml
                             -nvargs X=<file>
                                     C=[file]
                                     k=<int>
@@ -132,9 +132,9 @@ apart is a "false negative" etc.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          C=[file]
@@ -154,7 +154,7 @@ apart is a "false negative" etc.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans-predict.dml
+    hadoop jar SystemDS.jar -f Kmeans-predict.dml
                             -nvargs X=[file]
                                     C=[file]
                                     spY=[file]
@@ -166,9 +166,9 @@ apart is a "false negative" etc.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=[file]
                                          C=[file]
@@ -207,7 +207,7 @@ centroids)
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 **verb**: (default: `FALSE`) Do not print per-iteration statistics for
 each run
@@ -235,7 +235,7 @@ categories
 
 **fmt**: (default: `"text"`) Matrix file output format for `prY`, such as
 `text`, `mm`, or `csv`; see read/write
-functions in SystemML Language Reference for details.
+functions in SystemDS Language Reference for details.
 
 **O**: (default: `" "`) Location to write the output statistics defined in
 [**Table 6**](algorithms-clustering.html#table6), by default print them to the
@@ -248,7 +248,7 @@ standard output
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans.dml
+    hadoop jar SystemDS.jar -f Kmeans.dml
                             -nvargs X=/user/ml/X.mtx
                                     k=5
                                     C=/user/ml/centroids.mtx
@@ -258,9 +258,9 @@ standard output
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          k=5
@@ -271,7 +271,7 @@ standard output
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans.dml
+    hadoop jar SystemDS.jar -f Kmeans.dml
                             -nvargs X=/user/ml/X.mtx
                                     k=5
                                     runs=100
@@ -287,9 +287,9 @@ standard output
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          k=5
@@ -310,7 +310,7 @@ To predict Y given X and C:
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans-predict.dml
+    hadoop jar SystemDS.jar -f Kmeans-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     C=/user/ml/C.mtx
                                     prY=/user/ml/PredY.mtx
@@ -320,9 +320,9 @@ To predict Y given X and C:
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          C=/user/ml/C.mtx
@@ -336,7 +336,7 @@ given X and C:
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans-predict.dml
+    hadoop jar SystemDS.jar -f Kmeans-predict.dml
                             -nvargs X=/user/ml/X.mtx
                                     C=/user/ml/C.mtx
                                     spY=/user/ml/Y.mtx
@@ -346,9 +346,9 @@ given X and C:
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          C=/user/ml/C.mtx
@@ -362,7 +362,7 @@ labels prY:
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Kmeans-predict.dml
+    hadoop jar SystemDS.jar -f Kmeans-predict.dml
                             -nvargs spY=/user/ml/Y.mtx
                                     prY=/user/ml/PredY.mtx
                                     O=/user/ml/stats.csv
@@ -371,9 +371,9 @@ labels prY:
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Kmeans-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs spY=/user/ml/Y.mtx
                                          prY=/user/ml/PredY.mtx
diff --git a/algorithms-descriptive-statistics.md b/algorithms-descriptive-statistics.md
index 1c86368..9c7d615 100644
--- a/algorithms-descriptive-statistics.md
+++ b/algorithms-descriptive-statistics.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Descriptive Statistics
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Descriptive Statistics
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -119,7 +119,7 @@ to compute the mean of a categorical attribute like ‘Hair Color’.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Univar-Stats.dml
+    hadoop jar SystemDS.jar -f Univar-Stats.dml
                             -nvargs X=<file>
                                     TYPES=<file>
                                     STATS=<file>
@@ -128,9 +128,9 @@ to compute the mean of a categorical attribute like ‘Hair Color’.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Univar-Stats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          TYPES=<file>
@@ -158,7 +158,7 @@ be stored. The format of the output matrix is defined by
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Univar-Stats.dml
+    hadoop jar SystemDS.jar -f Univar-Stats.dml
                             -nvargs X=/user/ml/X.mtx
                                     TYPES=/user/ml/types.mtx
                                     STATS=/user/ml/stats.mtx
@@ -167,9 +167,9 @@ be stored. The format of the output matrix is defined by
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Univar-Stats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          TYPES=/user/ml/types.mtx
@@ -576,7 +576,7 @@ attributes like ‘Hair Color’.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f bivar-stats.dml
+    hadoop jar SystemDS.jar -f bivar-stats.dml
                             -nvargs X=<file>
                                     index1=<file>
                                     index2=<file>
@@ -588,9 +588,9 @@ attributes like ‘Hair Color’.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f bivar-stats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          index1=<file>
@@ -645,7 +645,7 @@ are defined in [**Table 2**](algorithms-descriptive-statistics.html#table2).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f bivar-stats.dml
+    hadoop jar SystemDS.jar -f bivar-stats.dml
                             -nvargs X=/user/ml/X.mtx
                                     index1=/user/ml/S1.mtx
                                     index2=/user/ml/S2.mtx
@@ -657,9 +657,9 @@ are defined in [**Table 2**](algorithms-descriptive-statistics.html#table2).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f bivar-stats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          index1=/user/ml/S1.mtx
@@ -1136,7 +1136,7 @@ becomes reversed and amplified (from $+0.1$ to $-0.5$) if we ignore the months.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f stratstats.dml
+    hadoop jar SystemDS.jar -f stratstats.dml
                             -nvargs X=<file>
                                     Xcid=[file]
                                     Y=[file]
@@ -1150,9 +1150,9 @@ becomes reversed and amplified (from $+0.1$ to $-0.5$) if we ignore the months.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f stratstats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Xcid=[file]
@@ -1196,7 +1196,7 @@ $X$ in place of $S$"
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 * * *
@@ -1344,7 +1344,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f stratstats.dml
+    hadoop jar SystemDS.jar -f stratstats.dml
                             -nvargs X=/user/ml/X.mtx
                                     Xcid=/user/ml/Xcid.mtx
                                     Y=/user/ml/Y.mtx
@@ -1358,9 +1358,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f stratstats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Xcid=/user/ml/Xcid.mtx
@@ -1375,7 +1375,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f stratstats.dml
+    hadoop jar SystemDS.jar -f stratstats.dml
                             -nvargs X=/user/ml/Data.mtx
                                     Xcid=/user/ml/Xcid.mtx
                                     Ycid=/user/ml/Ycid.mtx
@@ -1386,9 +1386,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f stratstats.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/Data.mtx
                                          Xcid=/user/ml/Xcid.mtx
diff --git a/algorithms-factorization-machines.md b/algorithms-factorization-machines.md
index 3a380d3..ebde2d6 100644
--- a/algorithms-factorization-machines.md
+++ b/algorithms-factorization-machines.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Factorization Machines
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Factorization Machines
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -180,16 +180,16 @@ predict = function(matrix[double] X, matrix[double] w0, matrix[double] W, matrix
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ./scripts/nn/examples/fm-regression-dummy-data.dml
+    hadoop jar SystemDS.jar -f ./scripts/nn/examples/fm-regression-dummy-data.dml
 
 </div>
 <div data-lang="Spark" markdown="1">
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ./scripts/nn/examples/fm-regression-dummy-data.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
 </div>
 </div>
@@ -205,16 +205,16 @@ predict = function(matrix[double] X, matrix[double] w0, matrix[double] W, matrix
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ./scripts/nn/examples/fm-binclass-dummy-data.dml
+    hadoop jar SystemDS.jar -f ./scripts/nn/examples/fm-binclass-dummy-data.dml
 
 </div>
 <div data-lang="Spark" markdown="1">
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ./scripts/nn/examples/fm-binclass-dummy-data.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
 </div>
 </div>
diff --git a/algorithms-matrix-factorization.md b/algorithms-matrix-factorization.md
index b559cb5..1c4a447 100644
--- a/algorithms-matrix-factorization.md
+++ b/algorithms-matrix-factorization.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Matrix Factorization
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Matrix Factorization
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -45,7 +45,7 @@ top-$K$ (for a given value of $K$) principal components.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f PCA.dml
+    hadoop jar SystemDS.jar -f PCA.dml
                             -nvargs INPUT=<file>
                                     K=<int>
                                     CENTER=[int]
@@ -59,9 +59,9 @@ top-$K$ (for a given value of $K$) principal components.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f PCA.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs INPUT=<file>
                                          K=<int>
@@ -96,7 +96,7 @@ top-$K$ (for a given value of $K$) principal components.
 
 **OFMT**: (default: `"csv"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 **MODEL**: Either the location (on HDFS) where the computed model is
     stored; or the location of an existing model.
@@ -109,7 +109,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f PCA.dml 
+    hadoop jar SystemDS.jar -f PCA.dml 
                             -nvargs INPUT=/user/ml/input.mtx
                                     K=10
                                     CENTER=1
@@ -122,9 +122,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f PCA.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs INPUT=/user/ml/input.mtx
                                          K=10
@@ -138,7 +138,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f PCA.dml
+    hadoop jar SystemDS.jar -f PCA.dml
                             -nvargs INPUT=/user/ml/test_input.mtx
                                     K=10
                                     CENTER=1
@@ -152,9 +152,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f PCA.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs INPUT=/user/ml/test_input.mtx
                                          K=10
@@ -244,7 +244,7 @@ problems.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ALS.dml
+    hadoop jar SystemDS.jar -f ALS.dml
                             -nvargs V=<file>
                                     L=<file>
                                     R=<file>
@@ -260,9 +260,9 @@ problems.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ALS.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs V=<file>
                                          L=<file>
@@ -281,7 +281,7 @@ problems.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ALS_predict.dml
+    hadoop jar SystemDS.jar -f ALS_predict.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     L=<file>
@@ -294,9 +294,9 @@ problems.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ALS_predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -312,7 +312,7 @@ problems.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ALS_topk_predict.dml
+    hadoop jar SystemDS.jar -f ALS_topk_predict.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     L=<file>
@@ -325,9 +325,9 @@ problems.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ALS_topk_predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -370,7 +370,7 @@ iterations falls below threshold `thr`; if
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Arguments - ALS Prediction/Top-K Prediction
@@ -409,7 +409,7 @@ format:
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
@@ -418,7 +418,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ALS.dml
+    hadoop jar SystemDS.jar -f ALS.dml
                             -nvargs V=/user/ml/V
                                     L=/user/ml/L
                                     R=/user/ml/R
@@ -434,9 +434,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ALS.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs V=/user/ml/V
                                          L=/user/ml/L
@@ -457,7 +457,7 @@ To compute predicted ratings for a given list of users and items:
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ALS_predict.dml
+    hadoop jar SystemDS.jar -f ALS_predict.dml
                             -nvargs X=/user/ml/X
                                     Y=/user/ml/Y
                                     L=/user/ml/L
@@ -470,9 +470,9 @@ To compute predicted ratings for a given list of users and items:
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ALS_predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X
                                          Y=/user/ml/Y
@@ -491,7 +491,7 @@ predicted ratings for a given list of users:
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f ALS_topk_predict.dml
+    hadoop jar SystemDS.jar -f ALS_topk_predict.dml
                             -nvargs X=/user/ml/X
                                     Y=/user/ml/Y
                                     L=/user/ml/L
@@ -504,9 +504,9 @@ predicted ratings for a given list of users:
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f ALS_topk_predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X
                                          Y=/user/ml/Y
diff --git a/algorithms-reference.md b/algorithms-reference.md
index 9319093..efb4076 100644
--- a/algorithms-reference.md
+++ b/algorithms-reference.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-title: SystemML Algorithms Reference
-description: SystemML Algorithms Reference
-displayTitle: SystemML Algorithms Reference
+title: SystemDS Algorithms Reference
+description: SystemDS Algorithms Reference
+displayTitle: SystemDS Algorithms Reference
 ---
 <!--
 {% comment %}
diff --git a/algorithms-regression.md b/algorithms-regression.md
index 18640b8..c2ccc04 100644
--- a/algorithms-regression.md
+++ b/algorithms-regression.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Regression
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Regression
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -92,7 +92,7 @@ y_test = lr.fit(df_train)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f LinearRegDS.dml
+    hadoop jar SystemDS.jar -f LinearRegDS.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     B=<file>
@@ -105,9 +105,9 @@ y_test = lr.fit(df_train)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f LinearRegDS.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -134,7 +134,7 @@ y_test = lr.fit(df_train)
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f LinearRegCG.dml
+    hadoop jar SystemDS.jar -f LinearRegCG.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     B=<file>
@@ -150,9 +150,9 @@ y_test = lr.fit(df_train)
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f LinearRegCG.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -210,7 +210,7 @@ gradient iterations, or `0` if no maximum limit provided
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 Please see [mllearn documentation](https://apache.github.io/systemml/python-reference#mllearn-api) for
 more details on the Python API. 
@@ -244,7 +244,7 @@ print("Residual sum of squares: %.2f" % np.mean((regr.predict(diabetes_X_test) -
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f LinearRegDS.dml
+    hadoop jar SystemDS.jar -f LinearRegDS.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     B=/user/ml/B.mtx
@@ -257,9 +257,9 @@ print("Residual sum of squares: %.2f" % np.mean((regr.predict(diabetes_X_test) -
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f LinearRegDS.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -298,7 +298,7 @@ print("Residual sum of squares: %.2f" % np.mean((regr.predict(diabetes_X_test) -
 {% endhighlight %}
 </div>
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f LinearRegCG.dml
+    hadoop jar SystemDS.jar -f LinearRegCG.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     B=/user/ml/B.mtx
@@ -314,9 +314,9 @@ print("Residual sum of squares: %.2f" % np.mean((regr.predict(diabetes_X_test) -
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f LinearRegCG.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -541,7 +541,7 @@ lowest AIC is computed.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f StepLinearRegDS.dml
+    hadoop jar SystemDS.jar -f StepLinearRegDS.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     B=<file>
@@ -555,9 +555,9 @@ lowest AIC is computed.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f StepLinearRegDS.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -605,14 +605,14 @@ checked and the algorithm stops.
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f StepLinearRegDS.dml
+    hadoop jar SystemDS.jar -f StepLinearRegDS.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     B=/user/ml/B.mtx
@@ -626,9 +626,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f StepLinearRegDS.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -735,7 +735,7 @@ distributions and link functions, see below for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM.dml
+    hadoop jar SystemDS.jar -f GLM.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     B=<file>
@@ -758,9 +758,9 @@ distributions and link functions, see below for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -795,7 +795,7 @@ B\[$m\,{+}\,1$, 1\] if available
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 **O**: (default: `" "`) Location to write certain summary statistics described 
 in [**Table 9**](algorithms-regression.html#table9), 
@@ -875,7 +875,7 @@ if no maximum limit provided
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM.dml
+    hadoop jar SystemDS.jar -f GLM.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     B=/user/ml/B.mtx
@@ -896,9 +896,9 @@ if no maximum limit provided
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -1213,7 +1213,7 @@ distribution family is supported (see below for details).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f StepGLM.dml
+    hadoop jar SystemDS.jar -f StepGLM.dml
                             -nvargs X=<file>
                                     Y=<file>
                                     B=<file>
@@ -1233,9 +1233,9 @@ distribution family is supported (see below for details).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f StepGLM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=<file>
@@ -1312,14 +1312,14 @@ checked and the algorithm stops.
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f StepGLM.dml
+    hadoop jar SystemDS.jar -f StepGLM.dml
                             -nvargs X=/user/ml/X.mtx
                                     Y=/user/ml/Y.mtx
                                     B=/user/ml/B.mtx
@@ -1338,9 +1338,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f StepGLM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          Y=/user/ml/Y.mtx
@@ -1467,7 +1467,7 @@ this step outside the scope of `GLM-predict.dml` for now.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs X=<file>
                                     Y=[file]
                                     B=<file>
@@ -1484,9 +1484,9 @@ this step outside the scope of `GLM-predict.dml` for now.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          Y=[file]
@@ -1592,7 +1592,7 @@ $\eta = \log\mu$. Common power links:
 
 **fmt**: (default: `"text"`) Matrix M file output format, such as
 `text`, `mm`, or `csv`; see read/write
-functions in SystemML Language Reference for details.
+functions in SystemDS Language Reference for details.
 
 
 ### Examples
@@ -1606,7 +1606,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=1
                                     vpow=0.0
                                     link=1
@@ -1623,9 +1623,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=1
                                          vpow=0.0
@@ -1645,7 +1645,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=1
                                     vpow=0.0
                                     link=1
@@ -1659,9 +1659,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=1
                                          vpow=0.0
@@ -1678,7 +1678,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=2
                                     link=2
                                     disp=3.0004464
@@ -1693,9 +1693,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=2
                                          link=2
@@ -1713,7 +1713,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=2
                                     link=3
                                     disp=3.0004464
@@ -1728,9 +1728,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=2
                                          link=3
@@ -1748,7 +1748,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=3 
                                     X=/user/ml/X.mtx
                                     B=/user/ml/B.mtx
@@ -1761,9 +1761,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=3
                                          X=/user/ml/X.mtx
@@ -1779,7 +1779,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=1
                                     vpow=1.0
                                     link=1
@@ -1796,9 +1796,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=1
                                          vpow=1.0
@@ -1818,7 +1818,7 @@ unknown (which sets it to `1.0`).
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f GLM-predict.dml
+    hadoop jar SystemDS.jar -f GLM-predict.dml
                             -nvargs dfam=1
                                     vpow=2.0
                                     link=1
@@ -1835,9 +1835,9 @@ unknown (which sets it to `1.0`).
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f GLM-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs dfam=1
                                          vpow=2.0
diff --git a/algorithms-survival-analysis.md b/algorithms-survival-analysis.md
index 943d4d7..7fedbbd 100644
--- a/algorithms-survival-analysis.md
+++ b/algorithms-survival-analysis.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Algorithms Reference - Survival Analysis
-displayTitle: <a href="algorithms-reference.html">SystemML Algorithms Reference</a>
+title: SystemDS Algorithms Reference - Survival Analysis
+displayTitle: <a href="algorithms-reference.html">SystemDS Algorithms Reference</a>
 ---
 <!--
 {% comment %}
@@ -42,7 +42,7 @@ censored and uncensored survival times.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f KM.dml
+    hadoop jar SystemDS.jar -f KM.dml
                             -nvargs X=<file>
                                     TE=<file>
                                     GI=<file>
@@ -60,9 +60,9 @@ censored and uncensored survival times.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f KM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          TE=<file>
@@ -132,14 +132,14 @@ groups: `none`, `log-rank` or `wilcoxon` test
 
 **fmt**: (default:`"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f KM.dml
+    hadoop jar SystemDS.jar -f KM.dml
                             -nvargs X=/user/ml/X.mtx
                                     TE=/user/ml/TE
                                     GI=/user/ml/GI
@@ -155,9 +155,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f KM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          TE=/user/ml/TE
@@ -174,7 +174,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f KM.dml
+    hadoop jar SystemDS.jar -f KM.dml
                             -nvargs X=/user/ml/X.mtx
                                     TE=/user/ml/TE
                                     GI=/user/ml/GI
@@ -192,9 +192,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f KM.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          TE=/user/ml/TE
@@ -442,7 +442,7 @@ may be categorical (ordinal or nominal) as well as continuous-valued.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Cox.dml
+    hadoop jar SystemDS.jar -f Cox.dml
                             -nvargs X=<file>
                                     TE=<file>
                                     F=<file>
@@ -464,9 +464,9 @@ may be categorical (ordinal or nominal) as well as continuous-valued.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Cox.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          TE=<file>
@@ -492,7 +492,7 @@ may be categorical (ordinal or nominal) as well as continuous-valued.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Cox-predict.dml
+    hadoop jar SystemDS.jar -f Cox-predict.dml
                             -nvargs X=<file>
                                     RT=<file>
                                     M=<file>
@@ -506,9 +506,9 @@ may be categorical (ordinal or nominal) as well as continuous-valued.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Cox-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=<file>
                                          RT=<file>
@@ -591,7 +591,7 @@ if no maximum limit provided
 
 **fmt**: (default: `"text"`) Matrix file output format, such as `text`,
 `mm`, or `csv`; see read/write functions in
-SystemML Language Reference for details.
+SystemDS Language Reference for details.
 
 
 ### Examples
@@ -600,7 +600,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Cox.dml
+    hadoop jar SystemDS.jar -f Cox.dml
                             -nvargs X=/user/ml/X.mtx
                                     TE=/user/ml/TE
                                     F=/user/ml/F
@@ -615,9 +615,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Cox.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          TE=/user/ml/TE
@@ -633,7 +633,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Cox.dml
+    hadoop jar SystemDS.jar -f Cox.dml
                             -nvargs X=/user/ml/X.mtx
                                     TE=/user/ml/TE
                                     F=/user/ml/F
@@ -654,9 +654,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Cox.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X.mtx
                                          TE=/user/ml/TE
@@ -680,7 +680,7 @@ SystemML Language Reference for details.
 
 <div class="codetabs">
 <div data-lang="Hadoop" markdown="1">
-    hadoop jar SystemML.jar -f Cox-predict.dml
+    hadoop jar SystemDS.jar -f Cox-predict.dml
                             -nvargs X=/user/ml/X-sorted.mtx
                                     RT=/user/ml/recoded-timestamps.csv
                                     M=/user/ml/model.csv
@@ -694,9 +694,9 @@ SystemML Language Reference for details.
     $SPARK_HOME/bin/spark-submit --master yarn
                                  --deploy-mode cluster
                                  --conf spark.driver.maxResultSize=0
-                                 SystemML.jar
+                                 SystemDS.jar
                                  -f Cox-predict.dml
-                                 -config SystemML-config.xml
+                                 -config SystemDS-config.xml
                                  -exec hybrid_spark
                                  -nvargs X=/user/ml/X-sorted.mtx
                                          RT=/user/ml/recoded-timestamps.csv
diff --git a/beginners-guide-caffe2dml.md b/beginners-guide-caffe2dml.md
index db74feb..a30c426 100644
--- a/beginners-guide-caffe2dml.md
+++ b/beginners-guide-caffe2dml.md
@@ -183,7 +183,7 @@ new_lenet.score(X_test, y_test)
 
 # Loading a pretrained caffemodel
 
-We provide a converter utility to convert `.caffemodel` trained using Caffe to SystemML format.
+We provide a converter utility to convert a `.caffemodel` file trained using Caffe to SystemDS format.
 
 ```python
 # First download deploy file and caffemodel
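# (Hedged sketch, not the guide's full listing: the file names and output
# directory below are placeholders, and the `systemml.convert_caffemodel`
# utility plus an active SparkContext `sc` are assumptions.)
import systemml as sml
sml.convert_caffemodel(sc, 'deploy.prototxt', 'model.caffemodel', 'weights_dir')
# The converted 'weights_dir' can then be passed to Caffe2DML via its `weights` parameter.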
diff --git a/beginners-guide-keras2dml.md b/beginners-guide-keras2dml.md
index 788a489..eb12304 100644
--- a/beginners-guide-keras2dml.md
+++ b/beginners-guide-keras2dml.md
@@ -32,7 +32,7 @@ limitations under the License.
 Keras2DML converts a Keras specification to DML through the intermediate Caffe2DML module. 
 It is designed to fit well into the mllearn framework and hence supports NumPy, Pandas, and PySpark DataFrames.
 
-First, install SystemML and other dependencies for the below demo:
+First, install SystemDS and the other dependencies for the demo below:
 
 ```
 pip install systemml keras tensorflow
@@ -48,7 +48,7 @@ Download the MNIST dataset using [mlxtend package](https://pypi.python.org/pypi/
 ```python
 # pyspark --driver-memory 20g
 
-# Disable Tensorflow from using GPU to avoid unnecessary evictions by SystemML runtime
+# Prevent TensorFlow from using the GPU to avoid unnecessary evictions by the SystemDS runtime
 import os
 os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
 os.environ['CUDA_VISIBLE_DEVICES'] = ''
@@ -95,7 +95,7 @@ scale = 0.00390625
 X_train = X_train*scale
 X_test = X_test*scale
 
-# Train Lenet using SystemML
+# Train LeNet using SystemDS
 from systemml.mllearn import Keras2DML
 sysml_model = Keras2DML(spark, keras_model, weights='weights_dir')
 # sysml_model.setConfigProperty("sysml.native.blas", "auto")
@@ -108,7 +108,7 @@ sysml_model.score(X_test, y_test)
 
 ```python
 # pyspark --driver-memory 20g
-# Disable Tensorflow from using GPU to avoid unnecessary evictions by SystemML runtime
+# Prevent TensorFlow from using the GPU to avoid unnecessary evictions by the SystemDS runtime
 import os
 os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
 os.environ['CUDA_VISIBLE_DEVICES'] = ''
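# (For orientation, a condensed hedged sketch of the Keras2DML flow these
# hunks touch: `spark`, a compiled `keras_model`, and the MNIST arrays
# X_train/y_train/X_test/y_test are assumed in scope; `predict` is the
# standard mllearn call and an assumption here.)
from systemml.mllearn import Keras2DML
sysml_model = Keras2DML(spark, keras_model, weights='weights_dir')
sysml_model.fit(X_train, y_train)           # train with the SystemDS runtime
print(sysml_model.score(X_test, y_test))    # accuracy on held-out data
preds = sysml_model.predict(X_test)         # per-sample predictions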
diff --git a/beginners-guide-python.md b/beginners-guide-python.md
index 53620e5..19186ed 100644
--- a/beginners-guide-python.md
+++ b/beginners-guide-python.md
@@ -29,20 +29,20 @@ limitations under the License.
 
 ## Introduction
 
-SystemML enables flexible, scalable machine learning. This flexibility is achieved through the specification of a high-level declarative machine learning language that comes in two flavors, 
+SystemDS enables flexible, scalable machine learning. This flexibility is achieved through the specification of a high-level declarative machine learning language that comes in two flavors, 
 one with an R-like syntax (DML) and one with a Python-like syntax (PyDML).
 
 Algorithm scripts written in DML and PyDML can be run on Hadoop, on Spark, or in Standalone mode. 
-No script modifications are required to change between modes. SystemML automatically performs advanced optimizations 
+No script modifications are required to change between modes. SystemDS automatically performs advanced optimizations 
 based on data and cluster characteristics, so the need to manually tweak algorithms is largely reduced or eliminated.
 To understand more about DML and PyDML, we recommend that you read [Beginner's Guide to DML and PyDML](https://apache.github.io/systemml/beginners-guide-to-dml-and-pydml.html).
 
-For convenience of Python users, SystemML exposes several language-level APIs that allow Python users to use SystemML
+For convenience of Python users, SystemDS exposes several language-level APIs that allow Python users to use SystemDS
 and its algorithms without the need to know DML or PyDML. We explain these APIs in the sections below with example use cases.
 
 ## Download & Setup
 
-Before you get started on SystemML, make sure that your environment is set up and ready to go.
+Before you get started on SystemDS, make sure that your environment is set up and ready to go.
 
 ### Install Java (need Java 8) and Apache Spark
 
@@ -67,9 +67,9 @@ brew install apache-spark
 </div>
 </div>
 
-### Install SystemML
+### Install SystemDS
 
-To install released SystemML, please use following commands:
+To install the released version of SystemDS, please use the following commands:
 
 <div class="codetabs">
 <div data-lang="Python 2" markdown="1">
@@ -106,8 +106,8 @@ pip3 install target/systemml-1.0.0-SNAPSHOT-python.tar.gz
 </div>
 </div>
 
-### Uninstall SystemML
-To uninstall SystemML, please use following command:
+### Uninstall SystemDS
+To uninstall SystemDS, please use the following command:
 
 <div class="codetabs">
 <div data-lang="Python 2" markdown="1">
@@ -141,7 +141,7 @@ PYSPARK_PYTHON=python3 pyspark
 
 ## Matrix operations
 
-To get started with SystemML, let's try few elementary matrix multiplication operations:
+To get started with SystemDS, let's try a few elementary matrix multiplication operations:
 
 ```python
 import systemml as sml
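# (Hedged continuation sketch of the elementary operations this section
# walks through, mirroring the API examples in devdocs/python_api.html;
# an active SparkContext `sc` is assumed.)
import numpy as np
sml.setSparkContext(sc)
m1 = sml.matrix(np.ones((3, 3)) + 2)     # 3x3 matrix of 3s
m2 = sml.matrix(np.ones((3, 3)) + 3)     # 3x3 matrix of 4s
m2 = m1 * (m2 + m1)                      # element-wise ops build a lazy plan
m4 = 1.0 - m2
print(m4.sum(axis=1).toNumPy())          # evaluation happens here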
@@ -200,14 +200,14 @@ will use `mllearn` API described in the next section.
 
 ---
 
-## Invoke SystemML's algorithms
+## Invoke SystemDS's algorithms
 
-SystemML also exposes a subpackage [mllearn](https://apache.github.io/systemml/python-reference#mllearn-api). This subpackage allows Python users to invoke SystemML algorithms
+SystemDS also exposes a subpackage [mllearn](https://apache.github.io/systemml/python-reference#mllearn-api). This subpackage allows Python users to invoke SystemDS algorithms
 using the Scikit-learn or MLPipeline APIs.
 
 ### Scikit-learn interface
 
-In the below example, we invoke SystemML's [Linear Regression](https://apache.github.io/systemml/algorithms-regression.html#linear-regression)
+In the example below, we invoke SystemDS's [Linear Regression](https://apache.github.io/systemml/algorithms-regression.html#linear-regression)
 algorithm.
  
 ```python
@@ -240,7 +240,7 @@ Residual sum of squares: 6991.17
 
 As expected, by adding an intercept and a regularizer, the residual error drops significantly.
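
For reference, here is a condensed sketch of the linear-regression example this passage summarizes (a hedged illustration: the diabetes split is the usual scikit-learn one, `spark` is assumed in scope, and the constructor parameters follow the mllearn `LinearRegression` signature shown in devdocs/python_api.html):

```python
import numpy as np
from sklearn import datasets
from systemml.mllearn import LinearRegression

diabetes = datasets.load_diabetes()
X_train, X_test = diabetes.data[:-20], diabetes.data[-20:]
y_train, y_test = diabetes.target[:-20], diabetes.target[-20:]

# fit_intercept and C (regularization) are the knobs discussed above
regr = LinearRegression(spark, fit_intercept=True, C=1.0, solver='newton-cg')
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
print('Residual sum of squares: %.2f' % np.mean((y_pred - y_test) ** 2))
```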
 
-Here is another example that where we invoke SystemML's [Logistic Regression](https://apache.github.io/systemml/algorithms-classification.html#multinomial-logistic-regression)
+Here is another example where we invoke SystemDS's [Logistic Regression](https://apache.github.io/systemml/algorithms-classification.html#multinomial-logistic-regression)
 algorithm on the digits dataset.
 
 ```python
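# (The guide's full digits listing is elided from this hunk; below is a
# hedged sketch with a `spark` session assumed in scope. Per the note in
# deep-learning.md, the scikit-learn API needs no 1-based label shift.)
from sklearn import datasets
from systemml.mllearn import LogisticRegression

digits = datasets.load_digits()
X, y = digits.data, digits.target
n_train = int(0.9 * len(X))
logreg = LogisticRegression(spark)
logreg.fit(X[:n_train], y[:n_train])
print('LogisticRegression score: %f' % logreg.score(X[n_train:], y[n_train:]))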
@@ -308,7 +308,7 @@ LogisticRegression score: 0.922222
 
 ### MLPipeline interface
 
-In the below example, we demonstrate how the same `LogisticRegression` class can allow SystemML to fit seamlessly into 
+In the example below, we demonstrate how the same `LogisticRegression` class allows SystemDS to fit seamlessly into
 large data pipelines.
 
 ```python
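# (Hedged sketch of the MLPipeline wiring; the toy rows, Tokenizer/HashingTF
# stages, and column names are illustrative assumptions, with a `spark`
# session in scope. Note the 1-based labels required for DataFrame inputs.)
from pyspark.ml import Pipeline
from pyspark.ml.feature import HashingTF, Tokenizer
from systemml.mllearn import LogisticRegression

training = spark.createDataFrame([
    (0, 'a b c d e spark', 1.0),
    (1, 'b d', 2.0)], ['id', 'text', 'label'])

tokenizer = Tokenizer(inputCol='text', outputCol='words')
hashingTF = HashingTF(inputCol='words', outputCol='features', numFeatures=20)
lr = LogisticRegression(spark)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
model = pipeline.fit(training)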
diff --git a/beginners-guide-to-dml-and-pydml.md b/beginners-guide-to-dml-and-pydml.md
index 442c07b..46dc595 100644
--- a/beginners-guide-to-dml-and-pydml.md
+++ b/beginners-guide-to-dml-and-pydml.md
@@ -30,13 +30,13 @@ limitations under the License.
 
 # Overview
 
-SystemML enables *flexible*, scalable machine learning. This flexibility is achieved
+SystemDS enables *flexible*, scalable machine learning. This flexibility is achieved
 through the specification of a high-level declarative machine learning language
 that comes in two flavors, one with an R-like syntax (DML) and one with
 a Python-like syntax (PyDML).
 
 Algorithm scripts written in DML and PyDML can be run on Spark, on Hadoop, or
-in Standalone mode. SystemML also features an MLContext API that allows SystemML
+in Standalone mode. SystemDS also features an MLContext API that allows SystemDS
 to be accessed via Scala or Python from a Spark Shell, a Jupyter Notebook, or a Zeppelin Notebook.
 
 This Beginner's Guide serves as a starting point for writing DML and PyDML
@@ -50,18 +50,18 @@ DML and PyDML scripts can be invoked in a variety of ways. Suppose that we have
 
 	print('hello ' + $1)
 
-One way to begin working with SystemML is to [download a binary distribution of SystemML](http://systemml.apache.org/download.html)
-and use the `runStandaloneSystemML.sh` and `runStandaloneSystemML.bat` scripts to run SystemML in standalone
+One way to begin working with SystemDS is to [download a binary distribution of SystemDS](http://systemml.apache.org/download.html)
+and use the `runStandaloneSystemDS.sh` and `runStandaloneSystemDS.bat` scripts to run SystemDS in standalone
 mode. The name of the DML or PyDML script is passed as the first argument to these scripts,
 along with a variety of arguments. Note that PyDML invocation can be forced with the addition of a `-python` flag.
 
-	./runStandaloneSystemML.sh hello.dml -args world
-	./runStandaloneSystemML.sh hello.pydml -args world
+	./runStandaloneSystemDS.sh hello.dml -args world
+	./runStandaloneSystemDS.sh hello.pydml -args world
 
 
 # Data Types
 
-SystemML has four value data types. In DML, these are: **double**, **integer**,
+SystemDS has four value data types. In DML, these are: **double**, **integer**,
 **string**, and **boolean**. In PyDML, these are: **float**, **int**,
 **str**, and **bool**. In normal usage, the data type of a variable is implicit
 based on its value. Mathematical operations typically operate on
@@ -239,7 +239,7 @@ the [Other Built-In Functions](dml-language-reference.html#other-built-in-functi
 
 ## Saving a Matrix
 
-A matrix can be saved using the **`write()`** function in DML and the **`save()`** function in PyDML. SystemML supports four
+A matrix can be saved using the **`write()`** function in DML and the **`save()`** function in PyDML. SystemDS supports four
 different formats: **`text`** (`i,j,v`), **`mm`** (`Matrix Market`), **`csv`** (`delimiter-separated values`), and **`binary`**.
 
 <div class="codetabs2">
@@ -290,7 +290,7 @@ is 0-based.*
 	    "cols": 3,
 	    "nnz": 6,
 	    "format": "text",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 </div>
@@ -323,7 +323,7 @@ is 0-based.*
 	    "format": "csv",
 	    "header": false,
 	    "sep": ",",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 </div>
@@ -342,7 +342,7 @@ is 0-based.*
 	    "cols_in_block": 1000,
 	    "nnz": 6,
 	    "format": "binary",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 </div>
@@ -352,7 +352,7 @@ is 0-based.*
 
 ## Loading a Matrix
 
-A matrix can be loaded using the **`read()`** function in DML and the **`load()`** function in PyDML. As with saving, SystemML supports four
+A matrix can be loaded using the **`read()`** function in DML and the **`load()`** function in PyDML. As with saving, SystemDS supports four
 formats: **`text`** (`i,j,v`), **`mm`** (`Matrix Market`), **`csv`** (`delimiter-separated values`), and **`binary`**. To read a file, a corresponding
 metadata file is required, except for the Matrix Market format. A metadata file is not required if a `format` parameter is specified to the **`read()`**
 or **`load()`** functions.
@@ -639,7 +639,7 @@ parfor(i in 0:nrow(A)-1):
 
 # User-Defined Functions
 
-Functions encapsulate useful functionality in SystemML. In addition to built-in functions, users can define their own functions.
+Functions encapsulate useful functionality in SystemDS. In addition to built-in functions, users can define their own functions.
 Functions take 0 or more parameters and return 0 or more values.
 
 <div class="codetabs2">
diff --git a/contributing-to-systemml.md b/contributing-to-systemds.md
similarity index 88%
rename from contributing-to-systemml.md
rename to contributing-to-systemds.md
index bc83661..fa1f80d 100644
--- a/contributing-to-systemml.md
+++ b/contributing-to-systemds.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-displayTitle: Contributing to SystemML
-title: Contributing to SystemML
-description: Contributing to SystemML
+displayTitle: Contributing to SystemDS
+title: Contributing to SystemDS
+description: Contributing to SystemDS
 ---
 <!--
 {% comment %}
@@ -23,7 +23,7 @@ limitations under the License.
 {% endcomment %}
 -->
 
-There are many ways to become involved with SystemML:
+There are many ways to become involved with SystemDS:
 
 * This will become a table of contents (this text will be scraped).
 {:toc}
@@ -33,7 +33,7 @@ There are many ways to become involved with SystemML:
 
 ### Development Mailing List
 
-Perhaps the easiest way to obtain help and contribute to SystemML is to join the SystemML Development
+Perhaps the easiest way to obtain help and contribute to SystemDS is to join the SystemDS Development
 mailing list (dev@systemml.apache.org). You can subscribe to this list by sending an email to
 [dev-subscribe@systemml.apache.org](mailto:dev-subscribe@systemml.apache.org).
 You can unsubscribe from this list by sending an email to [dev-unsubscribe@systemml.apache.org](mailto:dev-unsubscribe@systemml.apache.org). The dev mailing list archive can be found
@@ -60,24 +60,24 @@ To unsubscribe from the issues list, send an email to
 
 ## Issue Tracker
 
-Have you found a bug in SystemML? Have you thought of a way to improve SystemML? Are
-you interested in working on SystemML itself? If so, the SystemML
+Have you found a bug in SystemDS? Have you thought of a way to improve SystemDS? Are
+you interested in working on SystemDS itself? If so, the SystemDS
 [JIRA Issue Tracker](https://issues.apache.org/jira/browse/SYSTEMML) is the place to go.
 
 
-## SystemML on GitHub
+## SystemDS on GitHub
 
-Have you found an issue on the SystemML [JIRA Issue Tracker](https://issues.apache.org/jira/browse/SYSTEMML)
+Have you found an issue on the SystemDS [JIRA Issue Tracker](https://issues.apache.org/jira/browse/SYSTEMML)
 that you are interested in working on?
 If so, add a comment to the issue asking to be assigned the issue. If you don't hear back in a timely
 fashion, please contact us on the dev mailing list and we will be happy to help you.
 
 Once you have an issue to work on, how do you go about doing your work? The first thing you need is a GitHub
-account. Once you have a GitHub account, go to the Apache SystemML GitHub site at
+account. Once you have a GitHub account, go to the Apache SystemDS GitHub site at
 [https://github.com/apache/systemml](https://github.com/apache/systemml) and
-click the Fork button to fork a personal remote copy of the SystemML repository to your GitHub account.
+click the Fork button to fork a personal remote copy of the SystemDS repository to your GitHub account.
 
-The next step is to clone your SystemML fork to your local machine.
+The next step is to clone your SystemDS fork to your local machine.
 
 	$ git clone https://github.com/YOUR_GITHUB_NAME/systemml.git
 
@@ -88,13 +88,13 @@ to set the `push.default` property to `simple`. You only need to execute these c
 	$ git config --global user.email "yourname@yourhost.com"
 	$ git config --global push.default simple
 
-Next, reference the main SystemML repository as a remote repository. By convention, you can
+Next, reference the main SystemDS repository as a remote repository. By convention, you can
 call this `upstream`. You only need to add the remote `upstream` repository once.
 
 	$ git remote add upstream https://github.com/apache/systemml.git
 
-After this, you should have an `origin` repository, which references your personal forked SystemML
-repository on GitHub, and the `upstream` repository, which references the main SystemML repository
+After this, you should have an `origin` repository, which references your personal forked SystemDS
+repository on GitHub, and the `upstream` repository, which references the main SystemDS repository
 on GitHub.
 
 	$ git remote -v
@@ -147,13 +147,13 @@ When ready, push your changes on this branch to your remote GitHub fork:
 
 At this stage, you can go to your GitHub web page and file a Pull Request for the work
 that you did on this branch. A Pull Request is a request for project committers (who have
-write access to Apache SystemML) to review your code and integrate your code into the project.
+write access to Apache SystemDS) to review your code and integrate it into the project.
 Typically, you will see a green button to allow you to file a Pull Request.
 
-Once your Pull Request is opened at [SystemML Pull Requests](https://github.com/apache/systemml/pulls),
+Once your Pull Request is opened at [SystemDS Pull Requests](https://github.com/apache/systemml/pulls),
 typically Jenkins will automatically build the project to see
 if all tests pass when run for your particular branch. These automatic builds
-can be seen [here](https://sparktc.ibmcloud.com/jenkins/job/SystemML-PullRequestBuilder/).
+can be seen [here](https://sparktc.ibmcloud.com/jenkins/job/SystemDS-PullRequestBuilder/).
 
 A conversation typically will proceed with regards to your Pull Request. Project committers and
 potentially others will give you useful feedback and potentially request that some changes be made
@@ -163,7 +163,7 @@ the changes to your remote branch. These updates will automatically appear in th
 
 When your changes are accepted (a committer will write "Looks good to me", "LGTM", or something
 similar), a committer will attempt to incorporate your changes into the
-SystemML project. Typically this is done by squashing all of your commits into a single commit
+SystemDS project. Typically this is done by squashing all of your commits into a single commit
 and then rebasing your changes into the master branch. Rebasing gives a linear commit history
 to the project.
 
@@ -177,7 +177,7 @@ the Pull Request, and the issue can be resolved and closed.
 
 ## Documentation
 
-Documentation is one useful way to become involved with SystemML. SystemML online documentation
+Documentation is one useful way to become involved with SystemDS. SystemDS online documentation
 is generated from markdown using Jekyll. For more information, please see GitHub's
 [Using Jekyll as a static site generator with GitHub Pages](https://help.github.com/articles/using-jekyll-as-a-static-site-generator-with-github-pages/).
 
@@ -209,7 +209,7 @@ branch and perform the `subtree` command again.
 
 ### Java Code Format
 
-Java in SystemML should be formatted using a standard format. The "SystemML Format" at
+Java in SystemDS should be formatted using a standard format. The "SystemDS Format" at
 [`dev/code-style/systemml-style-eclipse.xml`](https://github.com/apache/systemml/blob/master/dev/code-style/systemml-style-eclipse.xml)
 can be imported into Eclipse and
 [`dev/code-style/systemml-style-intellij.xml`](https://github.com/apache/systemml/blob/master/dev/code-style/systemml-style-intellij.xml)
@@ -221,5 +221,5 @@ for this option.
 
 ### DML Code Format
 
-DML in SystemML should be formatted according to a standard format. Indentation in DML
+DML in SystemDS should be formatted according to a standard format. Indentation in DML
 files should be two spaces.
diff --git a/debugger-guide.md b/debugger-guide.md
index 8032910..8069e6b 100644
--- a/debugger-guide.md
+++ b/debugger-guide.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: SystemML Debugger Guide
-description: SystemML Debugger Guide
+title: SystemDS Debugger Guide
+description: SystemDS Debugger Guide
 ---
 <!--
 {% comment %}
@@ -24,14 +24,14 @@ limitations under the License.
 
 ## Overview
 
-SystemML supports DML script-level debugging through a command line interface.  The SystemML debugger provides functionality typically found in a debugging environment like setting breakpoints, controlling program execution, and inspecting variables.  To run a script in debug mode, specify the '-debug' option as shown in below example.
+SystemDS supports DML script-level debugging through a command-line interface.  The SystemDS debugger provides functionality typically found in a debugging environment, such as setting breakpoints, controlling program execution, and inspecting variables.  To run a script in debug mode, specify the '-debug' option as shown in the example below.
 
-    hadoop jar SystemML.jar -f test.dml -debug
+    hadoop jar SystemDS.jar -f test.dml -debug
 
 
 ## Debugger Commands
 
-After starting a SystemML debug session, a list of available commands is automatically displayed.  Debugger commands can be entered at the SystemML debugger prompt (SystemMLdb).
+After starting a SystemDS debug session, a list of available commands is automatically displayed.  Debugger commands can be entered at the SystemDS debugger prompt (SystemDSdb).
 The following sections describe each command along with example usage.
 
   * [Help](#help)
@@ -56,9 +56,9 @@ The following sections describe each command along with example usage.
 
 Type h for help to display a summary of available debugger commands.
 
-    (SystemMLdb) h
+    (SystemDSdb) h
 
-    SystemMLdb commands:
+    SystemDSdb commands:
     h,help                                                 list debugger functions
     r,run                                                  start your DML script
     q,quit                                                 exit debug mode
@@ -79,7 +79,7 @@ Type h for help to display a summary of available debugger commands.
     si,stepi                                               next runtime instruction rather than DML source lines (for advanced
                                                            users)
 
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
@@ -91,7 +91,7 @@ To exit a debug session, simply type q.
 
 This returns control to the terminal or console shell which was used to launch the session.
 
-    (SystemMLdb) q
+    (SystemDSdb) q
     $
 
 
@@ -117,7 +117,7 @@ After initially launching a debug session, the script is loaded and ready to be
 
 Without any options, the list command shows up to the next 10 lines of the script.  For example:
 
-    (SystemMLdb) l
+    (SystemDSdb) l
     line    1: A = rand (rows=10, cols=5);
     line    2: B = rand (rows=5, cols=4);
     line    3: D = sum(A);
@@ -136,10 +136,10 @@ Each line of the script can be stepped through using the s command.
 
 So continuing with the example from the previous section, typing s executes the current line 1:
 
-    (SystemMLdb) s
+    (SystemDSdb) s
     Step reached at .defaultNS::main: (line 2).
     2    B = rand (rows=5, cols=4);
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 As can be seen from the output, the debugger executed line 1 and advanced to the next line in the script.  The current line is automatically displayed.
 
@@ -155,9 +155,9 @@ To execute a group of instructions up to a specific line, breakpoints can be use
 
 Continuing the example from the step command, the current line was 2.  The command below sets a breakpoint at script source line number 4.
 
-    (SystemMLdb) b 4    
+    (SystemDSdb) b 4    
     Breakpoint added at .defaultNS::main, line 4.
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
@@ -170,15 +170,15 @@ Use the d command to remove a breakpoint.
 
 Below is sample output when removing a breakpoint.
 
-    (SystemMLdb) d 4
+    (SystemDSdb) d 4
     Breakpoint updated at .defaultNS::main, line 4.
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 If no breakpoint was set at the specified line number, then an appropriate message is displayed.
 
-    (SystemMLdb) d 4
+    (SystemDSdb) d 4
     Sorry, a breakpoint cannot be deleted at line 4. Please try a different line number.
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
@@ -191,10 +191,10 @@ To see a list of breakpoints, use the i command with the break option.
 
 Below is sample output after setting breakpoints at lines 2 and 4 of the test.dml script.
 
-    (SystemMLdb) i break
+    (SystemDSdb) i break
     Breakpoint  1, at line    2 (enabled)
     Breakpoint  2, at line    4 (enabled)
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 The info command also has a frame option which is discussed in the section related to inspecting script variables.
 
@@ -211,48 +211,48 @@ The continue command resumes script execution from the current line up to the ne
 
 Since the previous section set a breakpoint at line number 4, typing c to continue executes from the current line (2) up to but not including line 4 (i.e., the line with the breakpoint).
 
-    (SystemMLdb) c
+    (SystemDSdb) c
     Resuming DML script execution ...
     Breakpoint reached at .defaultNS::main instID 1: (line 4).
     4    print("Sum(A)=" + D);
-    (SystemMLdb) 
+    (SystemDSdb) 
 
-Note that continue is not a valid command if the SystemML runtime has not been started.
+Note that continue is not a valid command if the SystemDS runtime has not been started.
 
-    (SystemMLdb) c
+    (SystemDSdb) c
     Runtime has not been started. Try "r" to start DML runtime execution.
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
 
 ### Run
 
-There are two ways of starting the SystemML runtime for a debug session - the step command or the run command.  A common scenario is to set breakpoint(s) in the beginning of a debug session, then use r to start the runtime and run until the breakpoint is reached or script completion.
+There are two ways of starting the SystemDS runtime for a debug session: the step command or the run command.  A common scenario is to set breakpoint(s) at the beginning of a debug session, then use r to start the runtime and run until a breakpoint is reached or the script completes.
 
     r,run                                                  start your DML script
 
 Using the same script from the previous example, the r command can be used at the beginning of the session to run the script up to a breakpoint, or to program completion if no breakpoints were set or reached.
 
-    (SystemMLdb) l
+    (SystemDSdb) l
     line    1: A = rand (rows=10, cols=5);
     line    2: B = rand (rows=5, cols=4);
     line    3: D = sum(A);
     line    4: print("Sum(A)=" + D);
     line    5: C = A %*% B;
     line    6: write(C, "output.csv", format="csv");
-    (SystemMLdb) b 4
+    (SystemDSdb) b 4
     Breakpoint added at .defaultNS::main, line 4.
-    (SystemMLdb) r
+    (SystemDSdb) r
     Breakpoint reached at .defaultNS::main instID 1: (line 4).
     4    print("Sum(A)=" + D);
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 Note the run command is not valid if the runtime has already been started.  In that case, use continue or step to execute line(s) of the script.
 
-    (SystemMLdb) r
+    (SystemDSdb) r
     Runtime has already started. Try "s" to go to next line, or "c" to continue running your DML script.
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 ## Debugger Commands for inspecting or modifying script variables
@@ -278,13 +278,13 @@ To display the type of a variable, use the whatis command.
 
 Given the sample test.dml script with current line 4, the metadata of variables A, B, and D can be shown.
 
-    (SystemMLdb) whatis A
+    (SystemDSdb) whatis A
     Metadata of A: matrix[rows = 10, cols = 5, rpb = 1000, cpb = 1000]
-    (SystemMLdb) whatis B
+    (SystemDSdb) whatis B
     Metadata of B: matrix[rows = 5, cols = 4, rpb = 1000, cpb = 1000]
-    (SystemMLdb) whatis D
+    (SystemDSdb) whatis D
     Metadata of D: DataType.SCALAR
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
@@ -298,7 +298,7 @@ To view the contents of a variable, use the p command.
 
 Below is sample print output for the same variables used in the previous section.
 
-    (SystemMLdb) p A
+    (SystemDSdb) p A
     0.6911	0.0533	0.7659	0.9130	0.1196	
     0.8153	0.6145	0.5440	0.2916	0.7330	
     0.0520	0.9484	0.2044	0.5571	0.6952	
@@ -309,29 +309,29 @@ Below is sample print output for the same variables used in previous section.
     0.6778	0.8078	0.5075	0.0085	0.5159	
     0.8835	0.5621	0.7637	0.4362	0.4392	
     0.6108	0.5600	0.6140	0.0163	0.8640	
-    (SystemMLdb) p B
+    (SystemDSdb) p B
     0.4141	0.9905	0.1642	0.7545	
     0.5733	0.1489	0.1204	0.5375	
     0.5202	0.9833	0.3421	0.7099	
     0.5846	0.7585	0.9751	0.1174	
     0.8431	0.5806	0.4122	0.3694	
-    (SystemMLdb) p D
+    (SystemDSdb) p D
     D = 25.28558886582987
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 To display a specific element of a matrix, use [row,column] notation.
 
-    (SystemMLdb) p A[1,1]
+    (SystemDSdb) p A[1,1]
     0.6911
-    (SystemMLdb) p A[10,5]
+    (SystemDSdb) p A[10,5]
     0.8640
-    (SystemMLdb)  
+    (SystemDSdb)  
 
 Specific rows or columns of a matrix can also be displayed.  The below examples show the first row and the fifth column of matrix A.
 
-    (SystemMLdb) p A[1,]
+    (SystemDSdb) p A[1,]
     0.6911	0.0533	0.7659	0.9130	0.1196	
-    (SystemMLdb) p A[,5]
+    (SystemDSdb) p A[,5]
     0.1196	
     0.7330	
     0.6952	
@@ -342,7 +342,7 @@ Specific rows or columns of a matrix can also be displayed.  The below examples
     0.5159	
     0.4392	
     0.8640	
-    (SystemMLdb)
+    (SystemDSdb)
 
 
 
@@ -356,15 +356,15 @@ The set command is used for modifying variable contents.
 
 The following example modifies the first cell in matrix A.
 
-    (SystemMLdb) set A[1,1] 0.3299
+    (SystemDSdb) set A[1,1] 0.3299
     A[1,1] = 0.3299
-    (SystemMLdb)  
+    (SystemDSdb)  
 
 This example updates scalar D.  Note that an equals sign is not needed when setting a variable.
 
-    (SystemMLdb) set D 25.0
+    (SystemDSdb) set D 25.0
     D = 25.0
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
@@ -377,7 +377,7 @@ In addition to being used for displaying breakpoints, the i command is used for
 
 So if our test.dml script was executed up to line 4, the following frame information is shown.
 
-    (SystemMLdb) i frame
+    (SystemDSdb) i frame
     Current frame id: 0
       Current program counter at .defaultNS::main instID -1: (line 4)
       Local variables:
@@ -385,7 +385,7 @@ So if our test.xml script was executed up to line 4, then the following frame in
 	    A                                        Matrix: scratch_space//_p48857_9.30.252.162//_t0/temp1_1, [10 x 5, nnz=50, blocks (1000 x 1000)], binaryblock, dirty
 	    B                                        Matrix: scratch_space//_p48857_9.30.252.162//_t0/temp2_2, [5 x 4, nnz=20, blocks (1000 x 1000)], binaryblock, dirty
 	    D                                        25.28558886582987                       
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 Note that only variables in scope are included (e.g., variable C is not part of the frame since it is not yet in scope).
 
@@ -413,7 +413,7 @@ The li command can be used to display lower-level instructions along with the so
 
 For example:
 
-    (SystemMLdb) li
+    (SystemDSdb) li
     line    1: A = rand (rows=10, cols=5);
 		 id   -1: CP createvar _mVar1 scratch_space//_p1939_9.30.252.162//_t0/temp1 true binaryblock 10 5 1000 1000 50
 		 id   -1: CP rand 10 5 1000 1000 0.0 1.0 1.0 -1 uniform 1.0 4 _mVar1.MATRIX.DOUBLE
@@ -444,7 +444,7 @@ For example:
     line    6: write(C, "output.csv", format="csv");
 		 id   -1: CP write C.MATRIX.DOUBLE output.csv.SCALAR.STRING.true csv.SCALAR.STRING.true false , false
 		 id   -1: CP rmvar C
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 
 
@@ -459,35 +459,35 @@ The si command can be used to step through the lower level instructions of an in
 The first DML source line in test.dml consists of four instructions.
 
 
-    (SystemMLdb) li next 0
+    (SystemDSdb) li next 0
     line    1: A = rand (rows=10, cols=5);
 		 id   -1: CP createvar _mVar1 scratch_space//_p34473_9.30.252.162//_t0/temp1 true binaryblock 10 5 1000 1000 50
 		 id   -1: CP rand 10 5 1000 1000 0.0 1.0 1.0 -1 uniform 1.0 4 _mVar1.MATRIX.DOUBLE
 		 id   -1: CP cpvar _mVar1 A
 		 id   -1: CP rmvar _mVar1
-    (SystemMLdb) 
+    (SystemDSdb) 
 
 Type si to step through each individual instruction.
 
-    (SystemMLdb) si
+    (SystemDSdb) si
     Step instruction reached at .defaultNS::main instID -1: (line 1).
     1    A = rand (rows=10, cols=5);
-    (SystemMLdb) si
+    (SystemDSdb) si
     Step instruction reached at .defaultNS::main instID -1: (line 1).
     1    A = rand (rows=10, cols=5);
-    (SystemMLdb) si
+    (SystemDSdb) si
     Step instruction reached at .defaultNS::main instID -1: (line 1).
     1    A = rand (rows=10, cols=5);
-    (SystemMLdb) si
+    (SystemDSdb) si
     Step instruction reached at .defaultNS::main instID -1: (line 1).
     1    A = rand (rows=10, cols=5);
-    (SystemMLdb)
+    (SystemDSdb)
 
 Typing si again starts executing instructions of the next DML source line.
 
-    (SystemMLdb) si
+    (SystemDSdb) si
     Step instruction reached at .defaultNS::main instID -1: (line 2).
     2    B = rand (rows=5, cols=4);
-    (SystemMLdb)
+    (SystemDSdb)
 
 * * *
diff --git a/deep-learning.md b/deep-learning.md
index 968c959..fe6a4e5 100644
--- a/deep-learning.md
+++ b/deep-learning.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: Deep Learning with SystemML
-description: Deep Learning with SystemML
+title: Deep Learning with SystemDS
+description: Deep Learning with SystemDS
 ---
 <!--
 {% comment %}
@@ -29,10 +29,10 @@ limitations under the License.
 
 # Introduction
 
-There are three different ways to implement a Deep Learning model in SystemML:
+There are three different ways to implement a Deep Learning model in SystemDS:
 1. Using the [DML-bodied NN library](https://github.com/apache/systemml/tree/master/scripts/nn): This library allows you to exploit the full flexibility of the [DML language](http://apache.github.io/systemml/dml-language-reference) to implement your neural network.
-2. Using the experimental [Caffe2DML API](http://apache.github.io/systemml/beginners-guide-caffe2dml.html): This API allows a model expressed in Caffe's proto format to be imported into SystemML. This API **doesnot** require Caffe to be installed on your SystemML.
-3. Using the experimental [Keras2DML API](http://apache.github.io/systemml/beginners-guide-keras2dml.html): This API allows a model expressed in Keras's API to be imported into SystemML. However, this API requires Keras to be installed on your driver.
+2. Using the experimental [Caffe2DML API](http://apache.github.io/systemml/beginners-guide-caffe2dml.html): This API allows a model expressed in Caffe's proto format to be imported into SystemDS. This API **does not** require Caffe to be installed on your system.
+3. Using the experimental [Keras2DML API](http://apache.github.io/systemml/beginners-guide-keras2dml.html): This API allows a model expressed using the Keras API to be imported into SystemDS. However, this API requires Keras to be installed on your driver machine.
 
 |                                                                                                      | NN library                                                                                                 | Caffe2DML                                                                                                     | Keras2DML                                                                       |
 |------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
@@ -43,7 +43,7 @@ There are three different ways to implement a Deep Learning model in SystemML:
 | Can be invoked using spark-shell                                                                     | Yes. Please see [Scala MLContext API](http://apache.github.io/systemml/spark-mlcontext-programming-guide)  | Limited support                                                                                               | No                                                                              |
 | Can be invoked via command-line or JMLC API                                                          | Yes                                                                                                        | No                                                                                                            | No                                                                              |
 | GPU and [native BLAS](http://apache.github.io/systemml/native-backend.html) support                  | Yes                                                                                                        | Yes                                                                                                           | Yes                                                                             |
-| Part of SystemML's [mllearn](http://apache.github.io/systemml/python-reference.html#mllearn-api) API | No                                                                                                         | Yes                                                                                                           | Yes                                                                             |
+| Part of SystemDS's [mllearn](http://apache.github.io/systemml/python-reference.html#mllearn-api) API | No                                                                                                         | Yes                                                                                                           | Yes                                                                             |
 
 ## mllearn API
 
@@ -84,7 +84,7 @@ model.transform(df_test)
 </div>
 </div>
 
-Please note that when training using mllearn API (i.e. `model.fit(X_df)`), SystemML 
+Please note that when training using the mllearn API (i.e. `model.fit(X_df)`), SystemDS 
 expects that labels have been converted to 1-based values.
 This avoids unnecessary decoding overhead for large datasets if the label column has already been decoded.
 For the scikit-learn API, there is no such requirement.
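
A small sketch of that label convention for DataFrame inputs (the column name and the shift-by-one are illustrative assumptions):

```python
from pyspark.sql.functions import col

# Labels encoded as 0..k-1 must be shifted to 1..k before model.fit(X_df)
X_df = X_df.withColumn('label', col('label') + 1)
```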
@@ -184,7 +184,7 @@ lenet.score(X_test, y_test)
 
 <div data-lang="Keras2DML" markdown="1">
 {% highlight python %}
-# Disable Tensorflow from using GPU to avoid unnecessary evictions by SystemML runtime
+# Prevent TensorFlow from using the GPU to avoid unnecessary evictions by the SystemDS runtime
 import os
 os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
 os.environ['CUDA_VISIBLE_DEVICES'] = ''
@@ -245,7 +245,7 @@ Will be added soon ...
 
 <div data-lang="Keras2DML" markdown="1">
 {% highlight python %}
-# Disable Tensorflow from using GPU to avoid unnecessary evictions by SystemML runtime
+# Prevent TensorFlow from using the GPU to avoid unnecessary evictions by the SystemDS runtime
 import os
 os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
 os.environ['CUDA_VISIBLE_DEVICES'] = ''
diff --git a/devdocs/MatrixMultiplicationOperators.txt b/devdocs/MatrixMultiplicationOperators.txt
index b06e69d..cf4673b 100644
--- a/devdocs/MatrixMultiplicationOperators.txt
+++ b/devdocs/MatrixMultiplicationOperators.txt
@@ -3,11 +3,11 @@ NOTE: This information has been moved to docs/engine-dev-guide.md by:
 It should be removed in the future.
 
 #####################################################################
-# TITLE: An Overview of Matrix Multiplication Operators in SystemML #
+# TITLE: An Overview of Matrix Multiplication Operators in SystemDS #
 # DATE MODIFIED: 03/18/2016                                         #
 #####################################################################
 
-In the following, we give an overview of backend-specific physical matrix multiplication operators in SystemML as well as their internally used matrix multiplication block operations.
+In the following, we give an overview of backend-specific physical matrix multiplication operators in SystemDS as well as their internally used matrix multiplication block operations.
 
 A) BASIC MATRIX MULT OPERATORS 
 -------------------------------
diff --git a/devdocs/deep-learning.md b/devdocs/deep-learning.md
index 329c6c8..89938b1 100644
--- a/devdocs/deep-learning.md
+++ b/devdocs/deep-learning.md
@@ -19,7 +19,7 @@ limitations under the License.
 
 # Initial prototype for Deep Learning
 
-## Representing tensor and images in SystemML
+## Representing tensor and images in SystemDS
 
 In this prototype, we represent a tensor as a matrix stored in a row-major format,
 where the first dimension of the tensor and the matrix are exactly the same. For example, a tensor (with all zeros)
@@ -41,7 +41,7 @@ Following operators work out-of-the box when both tensors X and Y have same shap
 * Element-wise addition: `X + Y`
 * Element-wise subtraction: `X - Y`
 
-SystemML does not support implicit broadcast for above tensor operations, however one can write a DML-bodied function to do so.
+SystemDS does not support implicit broadcasting for the above tensor operations; however, one can write a DML-bodied function to do so.
 For example, to perform the above operations with broadcasting on the second dimension, one can use the `rep(Z, n)` function below:
 ``` python
 rep = function(matrix[double] Z, int C) return (matrix[double] ret) {
@@ -52,9 +52,9 @@ rep = function(matrix[double] Z, int C) return (matrix[double] ret) {
 }
 ```
 Using the above `rep(Z, n)` function, we can realize element-wise arithmetic operations with broadcasting. Here are some examples (a NumPy analogue follows the list):
-* X of shape [N, C, H, W] and Y of shape [1, C, H, W]: `X + Y` (Note: SystemML does implicit broadcasting in this case because of the way 
+* X of shape [N, C, H, W] and Y of shape [1, C, H, W]: `X + Y` (Note: SystemDS does implicit broadcasting in this case because of the way 
 it represents the tensor)
-* X of shape [1, C, H, W] and Y of shape [N, C, H, W]: `X + Y` (Note: SystemML does implicit broadcasting in this case because of the way 
+* X of shape [1, C, H, W] and Y of shape [N, C, H, W]: `X + Y` (Note: SystemDS does implicit broadcasting in this case because of the way 
 it represents the tensor)
 * X of shape [N, C, H, W] and Y of shape [N, 1, H, W]: `X + rep(Y, C)`
 * X of shape [N, C, H, W] and Y of shape [1, 1, H, W]: `X + rep(Y, C)`
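
As promised above, a NumPy analogue of the `rep`-based broadcasting in the flattened (N, C*H*W) layout (illustrative only; `np.tile` plays the role of the cbind loop inside `rep`):

```python
import numpy as np

N, C, H, W = 2, 3, 4, 5
X = np.random.rand(N, C * H * W)   # tensor [N, C, H, W], flattened row-major
Y = np.random.rand(N, 1 * H * W)   # tensor [N, 1, H, W], flattened row-major

# rep(Y, C) replicates Y horizontally C times so the column counts match
Z = X + np.tile(Y, (1, C))
print(Z.shape)                     # (2, 60)
```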
@@ -63,7 +63,7 @@ it represents the tensor)
 
 TODO: Map the NumPy tensor calls to DML expressions.
 
-## Representing images in SystemML
+## Representing images in SystemDS
 
 The images are assumed to be stored in NCHW format, where N = batch size, C = number of channels, H = image height, and W = image width. 
 Hence, the images are internally represented as a matrix with dimension (N, C * H * W).
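
To make the layout concrete, here is a small NumPy sketch (an illustration, not SystemDS code) of how an NCHW tensor flattens into the (N, C * H * W) matrix described above:

```python
import numpy as np

N, C, H, W = 2, 3, 4, 5
tensor = np.arange(N * C * H * W).reshape(N, C, H, W)

# Row-major flattening keeps the batch dimension and linearizes the rest,
# so each image becomes one row of length C*H*W.
matrix = tensor.reshape(N, C * H * W)
print(matrix.shape)                # (2, 60)
```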
diff --git a/devdocs/gpu-backend.md b/devdocs/gpu-backend.md
index 63da844..1957bdd 100644
--- a/devdocs/gpu-backend.md
+++ b/devdocs/gpu-backend.md
@@ -28,7 +28,7 @@ Currently, an active instance of the `GPUContext` class is made available global
 of the allocated blocks on the GPU. A count is kept per block for the number of instructions that need it.
 When the count is 0, the block may be evicted on a call to `GPUObject.evict()`.
 
-A `GPUObject` (like RDDObject and BroadcastObject) is stored in CacheableData object. It gets call-backs from SystemML's bufferpool on following methods
+A `GPUObject` (like RDDObject and BroadcastObject) is stored in a CacheableData object. It gets callbacks from SystemDS's buffer pool on the following methods:
 1. void acquireDeviceRead()
 2. void acquireDeviceModifyDense()
 3. void acquireDeviceModifySparse
@@ -37,7 +37,7 @@ A `GPUObject` (like RDDObject and BroadcastObject) is stored in CacheableData ob
 6. void releaseInput()
 7. void releaseOutput()
 
-Sparse matrices on GPU are represented in `CSR` format. In the SystemML runtime, they are represented in `MCSR` or modified `CSR` format.
+Sparse matrices on GPU are represented in `CSR` format. In the SystemDS runtime, they are represented in `MCSR` or modified `CSR` format.
 A conversion cost is incurred when sparse matrices are sent back and forth between host and device memory.
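
For reference, a tiny SciPy sketch of the `CSR` layout mentioned above (illustrative only; the `MCSR` variant used on the host side is not shown):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[0, 2, 0],
                         [3, 0, 4]]))
# CSR stores three arrays: nonzero values, their column indices, and row pointers.
print(A.data)     # [2 3 4]
print(A.indices)  # [1 0 2]
print(A.indptr)   # [0 1 3]
```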
 
 Concrete classes `JCudaContext` and `JCudaObject` (which extend `GPUContext` & `GPUObject` respectively) contain references to `org.jcuda.*`.
@@ -51,7 +51,7 @@ Some functions in `LibMatrixCUDA` need finer control over GPU memory management
 1. Follow the instructions from `https://developer.nvidia.com/cuda-downloads` and install CUDA 8.0.
 2. Follow the instructions from `https://developer.nvidia.com/cudnn` and install CuDNN v5.1.
 
-To use SystemML's GPU backend when using the jar or uber-jar
+To use SystemDS's GPU backend with the jar or uber-jar:
 1. Add JCuda's jar into the classpath.
 2. Use `-gpu` flag.
 
diff --git a/devdocs/python_api.html b/devdocs/python_api.html
index 93ec624..84757c9 100644
--- a/devdocs/python_api.html
+++ b/devdocs/python_api.html
@@ -58,7 +58,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.estimators.LinearRegression">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.estimators.</code><code class="descname">LinearRegression</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>fit_intercept=True</em>, <em>max_iter=100</em>, <em>tol=1e-06</em>, <em>C=1.0</em>, <em>solver='newton-cg'</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#LinearRegression"><span class="viewcode-link" [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLRegressor</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSRegressor</span></code></p>
 <p>Performs linear regression to model the relationship between one numerical response variable and one or more explanatory (feature) variables.</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span>
@@ -87,7 +87,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.estimators.LogisticRegression">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.estimators.</code><code class="descname">LogisticRegression</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>penalty='l2'</em>, <em>fit_intercept=True</em>, <em>max_iter=100</em>, <em>max_inner_iter=0</em>, <em>tol=1e-06</em>, <em>C=1.0</em>, <em>solver='newton-cg'</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimator [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLClassifier</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSClassifier</span></code></p>
 <p>Performs both binomial and multinomial logistic regression.</p>
 <p>Scikit-learn way</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span><span class="p">,</span> <span class="n">neighbors</span>
@@ -145,7 +145,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.estimators.SVM">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.estimators.</code><code class="descname">SVM</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>fit_intercept=True</em>, <em>max_iter=100</em>, <em>tol=1e-06</em>, <em>C=1.0</em>, <em>is_multi_class=False</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#SVM"><span class="viewcode-link">[source]</span></a><a c [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLClassifier</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSClassifier</span></code></p>
 <p>Performs both binary-class and multiclass SVM (Support Vector Machines).</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span><span class="p">,</span> <span class="n">neighbors</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">systemml.mllearn</span> <span class="k">import</span> <span class="n">SVM</span>
@@ -168,7 +168,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.estimators.NaiveBayes">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.estimators.</code><code class="descname">NaiveBayes</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>laplace=1.0</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#NaiveBayes"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#systemml.mllearn.estimators.NaiveBayes" title="Permalink t [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLClassifier</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSClassifier</span></code></p>
 <p>Performs Naive Bayes.</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn.datasets</span> <span class="k">import</span> <span class="n">fetch_20newsgroups</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn.feature_extraction.text</span> <span class="k">import</span> <span class="n">TfidfVectorizer</span>
@@ -195,7 +195,7 @@
 <div class="section" id="module-systemml.mllearn">
 <span id="module-contents"></span><h5>Module contents<a class="headerlink" href="#module-systemml.mllearn" title="Permalink to this headline">¶</a></h5>
 <div class="section" id="systemml-algorithms">
-<h6>SystemML Algorithms<a class="headerlink" href="#systemml-algorithms" title="Permalink to this headline">¶</a></h6>
+<h6>SystemDS Algorithms<a class="headerlink" href="#systemml-algorithms" title="Permalink to this headline">¶</a></h6>
 <table border="1" class="docutils">
 <colgroup>
 <col width="26%" />
@@ -235,7 +235,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.LinearRegression">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.</code><code class="descname">LinearRegression</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>fit_intercept=True</em>, <em>max_iter=100</em>, <em>tol=1e-06</em>, <em>C=1.0</em>, <em>solver='newton-cg'</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#LinearRegression"><span class="viewcode-link">[source]</ [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLRegressor</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSRegressor</span></code></p>
 <p>Performs linear regression to model the relationship between one numerical response variable and one or more explanatory (feature) variables.</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span>
@@ -264,7 +264,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.LogisticRegression">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.</code><code class="descname">LogisticRegression</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>penalty='l2'</em>, <em>fit_intercept=True</em>, <em>max_iter=100</em>, <em>max_inner_iter=0</em>, <em>tol=1e-06</em>, <em>C=1.0</em>, <em>solver='newton-cg'</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#Logi [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLClassifier</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSClassifier</span></code></p>
 <p>Performs both binomial and multinomial logistic regression.</p>
 <p>Scikit-learn way</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span><span class="p">,</span> <span class="n">neighbors</span>
@@ -322,7 +322,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.SVM">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.</code><code class="descname">SVM</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>fit_intercept=True</em>, <em>max_iter=100</em>, <em>tol=1e-06</em>, <em>C=1.0</em>, <em>is_multi_class=False</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#SVM"><span class="viewcode-link">[source]</span></a><a class="heade [...]
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLClassifier</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSClassifier</span></code></p>
 <p>Performs both binary-class and multiclass SVM (Support Vector Machines).</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span><span class="p">,</span> <span class="n">neighbors</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">systemml.mllearn</span> <span class="k">import</span> <span class="n">SVM</span>
@@ -345,7 +345,7 @@
 <dl class="class">
 <dt id="systemml.mllearn.NaiveBayes">
 <em class="property">class </em><code class="descclassname">systemml.mllearn.</code><code class="descname">NaiveBayes</code><span class="sig-paren">(</span><em>sqlCtx</em>, <em>laplace=1.0</em>, <em>transferUsingDF=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mllearn/estimators.html#NaiveBayes"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#systemml.mllearn.NaiveBayes" title="Permalink to this definition">¶</a></dt>
-<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemMLClassifier</span></code></p>
+<dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">systemml.mllearn.estimators.BaseSystemDSClassifier</span></code></p>
 <p>Performs Naive Bayes.</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn.datasets</span> <span class="k">import</span> <span class="n">fetch_20newsgroups</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn.feature_extraction.text</span> <span class="k">import</span> <span class="n">TfidfVectorizer</span>
@@ -596,12 +596,12 @@ and Pandas DataFrame).</p>
 <li>Global statistical built-in functions: exp, log, abs, sqrt, round, floor, ceil, sin, cos, tan, asin, acos, atan, sign, solve</li>
 </ol>
 <p>Note: an evaluated matrix contains a data field computed by the eval method, as a DataFrame or NumPy array.</p>
-<div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemML</span> <span class="k">as</span> <span class="nn">sml</span>
+<div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemDS</span> <span class="k">as</span> <span class="nn">sml</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">sml</span><span class="o">.</span><span class="n">setSparkContext</span><span class="p">(</span><span class="n">sc</span><span class="p">)</span>
 </pre></div>
 </div>
-<p>Welcome to Apache SystemML!</p>
+<p>Welcome to Apache SystemDS!</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="n">m1</span> <span class="o">=</span> <span class="n">sml</span><span class="o">.</span><span class="n">matrix</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> <span class="o">+</span> <sp [...]
 <span class="gp">&gt;&gt;&gt; </span><span class="n">m2</span> <span class="o">=</span> <span class="n">sml</span><span class="o">.</span><span class="n">matrix</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> <span class="o">+</span> <span class="mi">3</span><span class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">m2</span> <span class="o">=</span> <span class="n">m1</span> <span class="o">*</span> <span class="p">(</span><span class="n">m2</span> <span class="o">+</span> <span class="n">m1</span><span class="p">)</span>
@@ -837,7 +837,7 @@ outputDF: back the data of matrix as PySpark DataFrame</p>
 <dd><p>Computes the least squares solution for the system of linear equations A %*% x = b</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span>
-<span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemML</span> <span class="k">as</span> <span class="nn">sml</span>
+<span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemDS</span> <span class="k">as</span> <span class="nn">sml</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">pyspark.sql</span> <span class="k">import</span> <span class="n">SQLContext</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">diabetes</span> <span class="o">=</span> <span class="n">datasets</span><span class="o">.</span><span class="n">load_diabetes</span><span class="p">()</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">diabetes_X</span> <span class="o">=</span> <span class="n">diabetes</span><span class="o">.</span><span class="n">data</span><span class="p">[:,</span> <span class="n">np</span><span class="o">.</span><span class="n">newaxis</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
@@ -969,7 +969,7 @@ outputDF: back the data of matrix as PySpark DataFrame</p>
 <dt id="systemml.mlcontext.MLContext">
 <em class="property">class </em><code class="descclassname">systemml.mlcontext.</code><code class="descname">MLContext</code><span class="sig-paren">(</span><em>sc</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mlcontext.html#MLContext"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#systemml.mlcontext.MLContext" title="Permalink to this definition">¶</a></dt>
 <dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">object</span></code></p>
-<p>Wrapper around the new SystemML MLContext.</p>
+<p>Wrapper around the new SystemDS MLContext.</p>
 <dl class="docutils">
 <dt>sc: SparkContext</dt>
 <dd>SparkContext</dd>
@@ -1132,7 +1132,7 @@ are double, string, dataframe, rdd, and list of such object.</dd>
 <dt id="systemml.MLContext">
 <em class="property">class </em><code class="descclassname">systemml.</code><code class="descname">MLContext</code><span class="sig-paren">(</span><em>sc</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/systemml/mlcontext.html#MLContext"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#systemml.MLContext" title="Permalink to this definition">¶</a></dt>
 <dd><p>Bases: <code class="xref py py-class docutils literal"><span class="pre">object</span></code></p>
-<p>Wrapper around the new SystemML MLContext.</p>
+<p>Wrapper around the new SystemDS MLContext.</p>
 <dl class="docutils">
 <dt>sc: SparkContext</dt>
 <dd>SparkContext</dd>
@@ -1271,12 +1271,12 @@ and Pandas DataFrame).</p>
 <li>Global statistical built-in functions: exp, log, abs, sqrt, round, floor, ceil, sin, cos, tan, asin, acos, atan, sign, solve</li>
 </ol>
 <p>Note: an evaluated matrix contains a data field computed by the eval method, as a DataFrame or NumPy array.</p>
-<div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemML</span> <span class="k">as</span> <span class="nn">sml</span>
+<div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemDS</span> <span class="k">as</span> <span class="nn">sml</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">sml</span><span class="o">.</span><span class="n">setSparkContext</span><span class="p">(</span><span class="n">sc</span><span class="p">)</span>
 </pre></div>
 </div>
-<p>Welcome to Apache SystemML!</p>
+<p>Welcome to Apache SystemDS!</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="n">m1</span> <span class="o">=</span> <span class="n">sml</span><span class="o">.</span><span class="n">matrix</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> <span class="o">+</span> <sp [...]
 <span class="gp">&gt;&gt;&gt; </span><span class="n">m2</span> <span class="o">=</span> <span class="n">sml</span><span class="o">.</span><span class="n">matrix</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> <span class="o">+</span> <span class="mi">3</span><span class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">m2</span> <span class="o">=</span> <span class="n">m1</span> <span class="o">*</span> <span class="p">(</span><span class="n">m2</span> <span class="o">+</span> <span class="n">m1</span><span class="p">)</span>
@@ -1512,7 +1512,7 @@ outputDF: back the data of matrix as PySpark DataFrame</p>
 <dd><p>Computes the least squares solution for the system of linear equations A %*% x = b</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">sklearn</span> <span class="k">import</span> <span class="n">datasets</span>
-<span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemML</span> <span class="k">as</span> <span class="nn">sml</span>
+<span class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span class="nn">SystemDS</span> <span class="k">as</span> <span class="nn">sml</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">pyspark.sql</span> <span class="k">import</span> <span class="n">SQLContext</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">diabetes</span> <span class="o">=</span> <span class="n">datasets</span><span class="o">.</span><span class="n">load_diabetes</span><span class="p">()</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">diabetes_X</span> <span class="o">=</span> <span class="n">diabetes</span><span class="o">.</span><span class="n">data</span><span class="p">[:,</span> <span class="n">np</span><span class="o">.</span><span class="n">newaxis</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
diff --git a/developer-tools-systemml.md b/developer-tools-systemds.md
similarity index 90%
rename from developer-tools-systemml.md
rename to developer-tools-systemds.md
index f37c5b5..9561c39 100644
--- a/developer-tools-systemml.md
+++ b/developer-tools-systemds.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-displayTitle: SystemML Developer Tools
-title: SystemML Developer Tools
-description: SystemML Developer Tools
+displayTitle: SystemDS Developer Tools
+title: SystemDS Developer Tools
+description: SystemDS Developer Tools
 ---
 <!--
 {% comment %}
@@ -23,7 +23,7 @@ limitations under the License.
 {% endcomment %}
 -->
 
-Useful Tools for Developing SystemML:
+Useful Tools for Developing SystemDS:
 
 * This will become a table of contents (this text will be scraped).
 {:toc}
@@ -32,7 +32,7 @@ Useful Tools for Developing SystemML:
 
 IntelliJ can be used since it provides great support for mixed Java and Scala projects as described [here](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-IntelliJ).
 
-### Import SystemML project to IntelliJ
+### Import SystemDS project to IntelliJ
 
  1. Download IntelliJ and install the Scala plug-in for IntelliJ.
  2. Go to "File -> Import Project", locate the systemml source directory, and select "Maven Project".
@@ -40,9 +40,9 @@ IntelliJ can be used since it provides great support for mixed Java and Scala pr
 
 ## Eclipse
 
-Eclipse [Luna SR2](https://eclipse.org/downloads/packages/release/luna/sr2) can be used for an integrated development environment with SystemML code.  Maven integration is required which is included in the [Eclipse IDE for Java Developers](https://eclipse.org/downloads/packages/eclipse-ide-java-developers/lunasr2) package.
+Eclipse [Luna SR2](https://eclipse.org/downloads/packages/release/luna/sr2) can be used for an integrated development environment with SystemDS code.  Maven integration is required, which is included in the [Eclipse IDE for Java Developers](https://eclipse.org/downloads/packages/eclipse-ide-java-developers/lunasr2) package.
 
-To get started in Eclipse, import SystemML's pom.xml file as an existing Maven project.  After import is completed, the resulting Eclipse installation should include two maven connectors.
+To get started in Eclipse, import SystemDS's pom.xml file as an existing Maven project.  After the import is completed, the resulting Eclipse installation should include two Maven connectors.
 
 ![About Eclipse](img/developer-tools/about-eclipse.png "About Eclipse")
 
@@ -70,7 +70,7 @@ Note the corresponding Eclipse project needs to include the Scala nature.  Typic
 
 ### Eclipse Java Only (How to skip Scala)
 
-Since the core SystemML code is written in Java, developers may prefer not to use Eclipse in a mixed Java/Scala environment.  To configure Eclipse to skip the Scala code of SystemML and avoid installing any Scala-related components, Maven lifecycle mappings can be created.  The simplest way to create these mappings is to use Eclipse's quick fix option to resolve the following pom.xml errors which occur if Maven Integration for Scala is not present.
+Since the core SystemDS code is written in Java, developers may prefer not to use Eclipse in a mixed Java/Scala environment.  To configure Eclipse to skip the Scala code of SystemDS and avoid installing any Scala-related components, Maven lifecycle mappings can be created.  The simplest way to create these mappings is to use Eclipse's quick fix option to resolve the following pom.xml errors which occur if Maven Integration for Scala is not present.
 
 ![Scala pom errors](img/developer-tools/pom-scala-errors.png "Scala pom errors")
 
@@ -80,11 +80,11 @@ The lifecycle mappings are stored in a workspace metadata file as specified in E
 
 ## Troubleshooting
 
-Please see below tips for resolving some compilation issues that might occur after importing the SystemML project.
+Please see the tips below for resolving some compilation issues that might occur after importing the SystemDS project.
 
 ### Invalid cross-compiled libraries error
 
-Since Scala IDE bundles the latest versions (2.10.5 and 2.11.6 at this point), you need to add one in Eclipse Preferences -> Scala -> Installations by pointing to the <code>lib</code> directory of your Scala 2.10.4 distribution. Once this is done, select SystemML project, right-click, choose Scala -> Set Scala Installation and point to the 2.10.4 installation. This should clear all errors about invalid cross-compiled libraries. A clean build should succeed now.
+Since Scala IDE bundles the latest versions (2.10.5 and 2.11.6 at this point), you need to add one in Eclipse Preferences -> Scala -> Installations by pointing to the <code>lib</code> directory of your Scala 2.10.4 distribution. Once this is done, select the SystemDS project, right-click, choose Scala -> Set Scala Installation and point to the 2.10.4 installation. This should clear all errors about invalid cross-compiled libraries. A clean build should succeed now.
 
 ### Incompatible Scala version error
 
diff --git a/dml-language-reference.md b/dml-language-reference.md
index f64b6ea..15b330b 100644
--- a/dml-language-reference.md
+++ b/dml-language-reference.md
@@ -67,16 +67,16 @@ limitations under the License.
 
 ## Introduction
 
-SystemML compiles scripts written in Declarative Machine Learning (or DML for short) into mixed driver and distributed jobs. DML’s syntax closely follows R, thereby minimizing the learning curve to use SystemML. Before getting into detail, let’s start with a simple Hello World program in DML. Assuming that Spark is installed on your machine or cluster, place `SystemML.jar` into your directory. Now, create a text file `hello.dml` containing following code:
+SystemDS compiles scripts written in Declarative Machine Learning (or DML for short) into mixed driver and distributed jobs. DML’s syntax closely follows R, thereby minimizing the learning curve to use SystemDS. Before getting into detail, let’s start with a simple Hello World program in DML. Assuming that Spark is installed on your machine or cluster, place `SystemDS.jar` into your directory. Now, create a text file `hello.dml` containing the following code:
 
     print("Hello World");
 
 To run this program on your machine, use the following command:
 
-    spark-submit SystemML.jar -f hello.dml
+    spark-submit SystemDS.jar -f hello.dml
 
 The option `-f` in the above command refers to the path to the DML script. A detailed list of the
-available options can be found running `spark-submit SystemML.jar -help`.
+available options can be found by running `spark-submit SystemDS.jar -help`.
 
 
 ## Variables
@@ -105,7 +105,7 @@ As seen in above example, there is no formal declaration of a variable. A variab
 
 ### Data Types
 
-Three data types (frame, matrix and scalar) and four value types (double, integer, string, and boolean) are supported. Matrices are 2-dimensional, and support the double value type (i.e., the cells in a matrix are of type double). The frame data type denotes the tabular data, potentially containing columns of value type numeric, string, and boolean.  Frame functions are described in [Frames](dml-language-reference.html#frames) and  [Data Pre-Processing Built-In Functions](dml-language-re [...]
+Three data types (frame, matrix and scalar) and four value types (double, integer, string, and boolean) are supported. Matrices are 2-dimensional, and support the double value type (i.e., the cells in a matrix are of type double). The frame data type denotes the tabular data, potentially containing columns of value type numeric, string, and boolean.  Frame functions are described in [Frames](dml-language-reference.html#frames) and  [Data Pre-Processing Built-In Functions](dml-language-re [...]
 
     # Spoiler alert: matrix() is a built-in function to
     # create matrix, which will be discussed later
@@ -143,7 +143,7 @@ Now that we have familiarized ourselves with variables and data type, let’s un
 
 ### Operators
 
-SystemML follows same associativity and precedence order as R as described in below table. The dimensions of the input matrices need to match the operator semantics, otherwise an exception will be raised at compile time. When one of the operands is a matrix and the other operand is a scalar value, the operation is performed cell-wise on the matrix using the scalar operand.
+SystemDS follows the same associativity and precedence order as R, as described in the table below. The dimensions of the input matrices need to match the operator semantics; otherwise, an exception will be raised at compile time. When one of the operands is a matrix and the other operand is a scalar value, the operation is performed cell-wise on the matrix using the scalar operand.
 
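 To make the matrix-scalar semantics concrete, here is a minimal DML sketch (the values are illustrative):
 
     X = matrix("1 2 3 4", rows=2, cols=2)
     Y = X + 10            # the scalar 10 is added cell-wise to every entry of X
     print(toString(Y))
 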
 **Table 1**: Operators
 
@@ -399,7 +399,7 @@ log             | INFO
 profile         | 0
 
 
-Of particular note is the `check` parameter. SystemML's `parfor` statement by default (`check = 1`) performs dependency analysis in an
+Of particular note is the `check` parameter. SystemDS's `parfor` statement by default (`check = 1`) performs dependency analysis in an
 attempt to guarantee result correctness for parallel execution. For example, the following `parfor` statement is **incorrect** because
 the iterations do not act independently, so they are not parallelizable. The iterations incorrectly try to increment the same `sum` variable.
 
@@ -409,7 +409,7 @@ the iterations do not act independently, so they are not parallelizable. The ite
 	}
 	print(sum)
 
-SystemML's `parfor` dependency analysis can occasionally result in false positives, as in the following example. This example creates a 2x30
+SystemDS's `parfor` dependency analysis can occasionally result in false positives, as in the following example. This example creates a 2x30
 matrix. It then utilizes a `parfor` loop to write 10 2x3 matrices into the 2x30 matrix. This `parfor` statement is parallelizable and correct,
 but the dependency analysis generates a false positive dependency error for the variable `ms`.
 
@@ -439,7 +439,7 @@ three ways:
 
 ### User-Defined Function (UDF)
 
-The UDF function declaration statement provides the function signature, which defines the formal parameters used to call the function and return values for the function. The function definition specifies the function implementation, and can either be a sequence of statements or external packages / libraries. If the UDF is implemented in a SystemML script, then UDF declaration and definition occur together.
+The UDF function declaration statement provides the function signature, which defines the formal parameters used to call the function and return values for the function. The function definition specifies the function implementation, and can either be a sequence of statements or external packages / libraries. If the UDF is implemented in a SystemDS script, then UDF declaration and definition occur together.
 
 The syntax for the UDF function declaration is given as follows. The function definition is stored as a list of statements in the function body. The explanation of the parameters is given below. Any statement can be placed inside a UDF definition except UDF function declaration statements. The variables specified in the return clause will be returned, and no explicit return statement within the function body is required.
 
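 As an illustration, a minimal DML UDF (the names are hypothetical):
 
     scale = function(Matrix[Double] X, Double s) return (Matrix[Double] Y) {
         Y = X * s
     }
     Z = scale(matrix(1, rows=2, cols=2), 3.0)
 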
@@ -579,13 +579,13 @@ In above script, `ifdef(\$nbrRows, 10)` function is a short-hand for "`ifdef(\$n
 
 Let’s assume that the above script is invoked using the following command-line values:
 
-    spark-submit SystemML.jar -f test.dml -nvargs fname=test.mtx nbrRows=5 nbrCols=5
+    spark-submit SystemDS.jar -f test.dml -nvargs fname=test.mtx nbrRows=5 nbrCols=5
 
 In this case, the script will create a random matrix M with 5 rows and 5 columns and write it to the file "test.mtx" in csv format. After that, it will print the message "Done creating and writing random matrix in test.mtx" on the standard output.
 
 If, however, the above script is invoked from the command line with the `nbrRows` argument omitted:
 
-    spark-submit SystemML.jar -f test.dml -nvargs fname=test.mtx nbrCols=5
+    spark-submit SystemDS.jar -f test.dml -nvargs fname=test.mtx nbrCols=5
 
 Then, the script will instead create a random matrix M with 10 rows (i.e., the default value provided in the script) and 5 columns.
 
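 For reference, a minimal sketch of such a script (a reconstruction, not necessarily the exact `test.dml` used above):
 
     fname = $fname
     nbrRows = ifdef($nbrRows, 10)   # fall back to 10 rows if $nbrRows is not supplied
     nbrCols = ifdef($nbrCols, 10)   # hypothetical default for the column count
     M = rand(rows=nbrRows, cols=nbrCols)
     write(M, fname, format="csv")
     print("Done creating and writing random matrix in " + fname)
 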
@@ -860,7 +860,7 @@ trace() | Return the sum of the cells of the main diagonal square matrix | Input
 The `read` and `write` functions support the reading and writing of matrices and scalars from/to the file system
 (local or HDFS). Typically, associated with each data file is a JSON-formatted metadata file (MTD) that stores
 metadata information about the content of the data file, such as the number of rows and columns.
-For data files written by SystemML, an MTD file will automatically be generated. The name of the
+For data files written by SystemDS, an MTD file will automatically be generated. The name of the
 MTD file associated with `<filename>` must be `<filename.mtd>`. In general, it is highly recommended
 that users provide MTD files for their own data as well.
 
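 As a small sketch (the paths are hypothetical), reading and writing with automatically paired MTD files:
 
     M = read("data/M.csv", format="csv")   # metadata may also be picked up from data/M.csv.mtd
     write(M, "out/M", format="binary")     # SystemDS generates out/M.mtd alongside the data
 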
@@ -869,7 +869,7 @@ that users provide MTD files for their own data as well.
 
 #### File formats and MTD files
 
-SystemML supports 4 file formats:
+SystemDS supports 4 file formats:
 
   * CSV (delimited)
   * Matrix Market (coordinate)
@@ -879,10 +879,10 @@ SystemML supports 4 file formats:
 The CSV format is a standard text-based format where columns are separated by delimiter characters, typically commas, and
 rows are represented on separate lines.
 
-SystemML supports the Matrix Market coordinate format, which is a text-based, space-separated format used to
+SystemDS supports the Matrix Market coordinate format, which is a text-based, space-separated format used to
 represent sparse matrices. Additional information about the Matrix Market format can be found at
 [http://math.nist.gov/MatrixMarket/formats.html#MMformat](http://math.nist.gov/MatrixMarket/formats.html#MMformat).
-SystemML does not currently support the Matrix Market array format for dense matrices. In the Matrix Market
+SystemDS does not currently support the Matrix Market array format for dense matrices. In the Matrix Market
 coordinate format, metadata (the number of rows, the number of columns, and the number of non-zero values) are
 included in the data file. Rows and columns index from 1. Matrix Market data must be in a single file, whereas the
 (i,j,v) text format can span multiple part files on HDFS. Therefore, for scalability reasons, the use of the (i,j,v) text and
@@ -893,7 +893,7 @@ of rowId, columnId, and cellValue, with the rowId and columnId indices being 1-b
 coordinate format, except metadata is stored in a separate file rather than in the data file itself, and the (i,j,v) text format
 can span multiple part files.
 
-The binary format can only be read and written by SystemML.
+The binary format can only be read and written by SystemDS.
 
 Let's look at a matrix and examples of its data represented in the supported formats with corresponding metadata. In the table below, we have
 a matrix consisting of 4 rows and 3 columns.
@@ -944,7 +944,7 @@ Below, we have examples of this matrix in the CSV, Matrix Market, IJV, and Binar
 	    "format": "csv",
 	    "header": false,
 	    "sep": ",",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 </div>
@@ -977,7 +977,7 @@ Below, we have examples of this matrix in the CSV, Matrix Market, IJV, and Binar
 	    "cols": 3,
 	    "nnz": 6,
 	    "format": "text",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 </div>
@@ -996,7 +996,7 @@ Below, we have examples of this matrix in the CSV, Matrix Market, IJV, and Binar
 	    "cols_in_block": 1000,
 	    "nnz": 6,
 	    "format": "binary",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 </div>
@@ -1010,7 +1010,7 @@ that contains the scalar value 2.0.
 	    "data_type": "scalar",
 	    "value_type": "double",
 	    "format": "text",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 
@@ -1030,7 +1030,7 @@ Parameter Name | Description | Optional | Permissible values | Data type valid f
 `nnz` | Number of non-zero values | Yes | any integer &gt; `0` | `matrix`
 `format` | Data file format | Yes. Default value is `text` | `csv`, `mm`, `text`, `binary` | `matrix`, `scalar`. Formats `csv` and `mm` are applicable only to matrices
 `description` | Description of the data | Yes | Any valid JSON string or object | `matrix`, `scalar`
-`author` | User that created the metadata file, defaults to `SystemML` | N/A | N/A | N/A
+`author` | User that created the metadata file, defaults to `SystemDS` | N/A | N/A | N/A
 `created` | Date/time when metadata file was written | N/A | N/A | N/A
 
 
@@ -1069,11 +1069,11 @@ The user has the option of specifying each parameter value in the MTD file, the
 **However, parameter values specified in both the `read` invocation and the MTD file must have the same value. Also, if a scalar value is being read,
 then `format` cannot be specified.**
 
-The `read` invocation in SystemML is parameterized as follows during compilation.
+The `read` invocation in SystemDS is parameterized as follows during compilation.
 
   1. Default values are assigned to parameters.
   2. Parameters provided in `read()` either fill in values or override defaults.
-  3. SystemML will look for the MTD file at compile time in the specified location (at the same path as the data file, where the filename of the MTD file is the same name as the data file with the extension `.mtd`).
+  3. SystemDS will look for the MTD file at compile time in the specified location (at the same path as the data file, with the same name as the data file plus the extension `.mtd`).
   4. If any non-optional parameter is not specified, or if conflicting values are detected, then an exception is thrown.
 
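 For example, a `read` that fills in the metadata explicitly (the file name is hypothetical):
 
     V = read("in/v.ijv", rows=10, cols=8, format="text")   # must agree with in/v.ijv.mtd, if present
 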
 
@@ -1113,7 +1113,7 @@ Additionally, `readMM()` and `read.csv()` are supported and can be used instead
 
 The `write` method is used to persist `scalar` and `matrix` data to files in the local file system or HDFS. The syntax of `write` is shown below.
 The parameters are described in Table 13. Note that the set of supported parameters for `write` is NOT the same as for `read`.
-SystemML writes an MTD file for the written data.
+SystemDS writes an MTD file for the written data.
 
     write(identifier, "outputfile", [additional parameters])
 
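 For instance, a call of the following form (the variable name is hypothetical) produces metadata like that shown next:
 
     write(V, "out/file.ijv", format="text")
 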
@@ -1143,7 +1143,7 @@ Example content of `out/file.ijv.mtd`:
         "cols": 8,
         "nnz": 4,
         "format": "text",
-        "author": "SystemML",
+        "author": "SystemDS",
         "created": "2017-01-01 00:00:01 PST"
     }
 
@@ -1162,7 +1162,7 @@ Example content of `out/file.mtd`:
         "rows_in_block": 1000,
         "cols_in_block": 1000,
         "format": "binary",
-        "author": "SystemML",
+        "author": "SystemDS",
         "created": "2017-01-01 00:00:01 PST"
     }
 
@@ -1181,7 +1181,7 @@ Example content of `n.csv.mtd`:
         "format": "csv",
         "header": true,
         "sep": ";",
-        "author": "SystemML",
+        "author": "SystemDS",
         "created": "2017-01-01 00:00:01 PST"
     }
 
@@ -1195,7 +1195,7 @@ Example content of `out/scalar_i.mtd`:
         "data_type": "scalar",
         "value_type": "int",
         "format": "text",
-        "author": "SystemML",
+        "author": "SystemDS",
         "created": "2017-01-01 00:00:01 PST"
     }
 
@@ -1224,7 +1224,7 @@ This will generate the following `mymatrix.csv.mtd` metadata file:
 	    "header": false,
 	    "sep": ",",
 	    "description": "my matrix",
-	    "author": "SystemML",
+	    "author": "SystemDS",
 	    "created": "2017-01-01 00:00:01 PST"
 	}
 
@@ -1510,7 +1510,7 @@ Note that the metadata generated during the training phase (located at `/user/ml
 
 ### Deep Learning Built-In Functions
 
-SystemML represent a tensor as a matrix stored in a row-major format,
+SystemDS represents a tensor as a matrix stored in row-major format,
 where the first dimension of the tensor and the matrix are exactly the same. For example, a tensor (with all zeros)
 of shape [3, 2, 4, 5] can be instantiated by the following DML statement:
 ```sh
@@ -1548,7 +1548,7 @@ Examples:
 | bias_multiply        |                             | `ones = matrix(1, rows=1, cols=height*width); output = input * matrix(bias %*% ones, rows=1, cols=numChannels*height*width)`                                |
 
 ### Parameter Server Built-in Function
-Apart from data-parallel operations and task-parallel parfor loops, SystemML also supports a **data-parallel Parameter Server** via a built-in function **paramserv**. Currently both local multi-threaded and spark distributed backend are supported to execute the **paramserv** function. So far we only support a single parameter server with N workers as well as synchronous and asynchronous model updates per batch or epoch. For example, in order to train a model in local backend with update  [...]
+Apart from data-parallel operations and task-parallel parfor loops, SystemDS also supports a **data-parallel Parameter Server** via the built-in function **paramserv**. Currently, both the local multi-threaded and the Spark distributed backends are supported for executing the **paramserv** function. So far, we support only a single parameter server with N workers, as well as synchronous and asynchronous model updates per batch or epoch. For example, in order to train a model in the local backend with update  [...]
 
 
     resultModel=paramserv(model=initModel, features=X, labels=Y, 
diff --git a/engine-dev-guide.md b/engine-dev-guide.md
index 557f864..64aa3c4 100644
--- a/engine-dev-guide.md
+++ b/engine-dev-guide.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-displayTitle: SystemML Engine Developer Guide
-title: SystemML Engine Developer Guide
-description: SystemML Engine Developer Guide
+displayTitle: SystemDS Engine Developer Guide
+title: SystemDS Engine Developer Guide
+description: SystemDS Engine Developer Guide
 ---
 <!--
 {% comment %}
@@ -25,24 +25,24 @@ limitations under the License.
 * This will become a table of contents (this text will be scraped).
 {:toc}
 
-## Building SystemML
+## Building SystemDS
 
-SystemML is built using [Apache Maven](http://maven.apache.org/).
-SystemML will build on Linux, MacOS, or Windows, and requires Maven 3 and Java 7 (or higher).
-To build SystemML, run:
+SystemDS is built using [Apache Maven](http://maven.apache.org/).
+SystemDS will build on Linux, MacOS, or Windows, and requires Maven 3 and Java 7 (or higher).
+To build SystemDS, run:
 
     mvn clean package
 
-To build the SystemML distributions, run:
+To build the SystemDS distributions, run:
 
     mvn clean package -P distribution
 
 
 * * *
 
-## Testing SystemML
+## Testing SystemDS
 
-SystemML features a comprehensive set of integration tests. To perform these tests, run:
+SystemDS features a comprehensive set of integration tests. To perform these tests, run:
 
     mvn verify
 
@@ -57,9 +57,9 @@ If required, please install the following packages in R:
 
 ## Development Environment
 
-SystemML itself is written in Java and is managed using Maven. As a result, SystemML can readily be
+SystemDS itself is written in Java and is managed using Maven. As a result, SystemDS can readily be
 imported into a standard development environment such as Eclipse and IntelliJ IDEA.
-The `DMLScript` class serves as the main entrypoint to SystemML. Executing
+The `DMLScript` class serves as the main entrypoint to SystemDS. Executing
 `DMLScript` with no arguments displays usage information. A script file can be specified using the `-f` argument.
 
 In Eclipse, a Debug Configuration can be created with `DMLScript` as the Main class and any arguments specified as
@@ -69,7 +69,7 @@ Suppose that we have a `hello.dml` script containing the following:
 
 	print('hello ' + $1)
 
-This SystemML script can be debugged in Eclipse using a Debug Configuration such as the following:
+This SystemDS script can be debugged in Eclipse using a Debug Configuration such as the following:
 
 <div class="codetabs2">
 
@@ -90,13 +90,13 @@ This SystemML script can be debugged in Eclipse using a Debug Configuration such
 
 When working with the Python MLContext API (see `src/main/python/systemml/mlcontext.py`) during development,
 it can be useful to install the Python MLContext API in editable mode (`-e`). This allows Python updates
-to take effect without requiring the SystemML python artifact to be built and installed.
+to take effect without requiring the SystemDS Python artifact to be built and installed.
 
 {% highlight bash %}
 mvn clean
 pip3 install -e src/main/python
 mvn clean package
-PYSPARK_PYTHON=python3 pyspark --driver-class-path target/SystemML.jar
+PYSPARK_PYTHON=python3 pyspark --driver-class-path target/SystemDS.jar
 {% endhighlight %}
 
 <div class="codetabs">
@@ -132,7 +132,7 @@ SparkSession available as 'spark'.
 >>> from systemml import MLContext, dml
 >>> ml = MLContext(sc)
 
-Welcome to Apache SystemML!
+Welcome to Apache SystemDS!
 
 >>> script = dml("print('hello world')")
 >>> ml.execute(script)
@@ -148,7 +148,7 @@ MLResults
 
 ## Matrix Multiplication Operators
 
-In the following, we give an overview of backend-specific physical matrix multiplication operators in SystemML as well as their internally used matrix multiplication block operations.
+In the following, we give an overview of backend-specific physical matrix multiplication operators in SystemDS as well as their internally used matrix multiplication block operations.
 
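 All of these physical operators sit behind DML's ordinary multiplication expression; a minimal sketch:
 
     A = rand(rows=1000, cols=100)
     B = rand(rows=100, cols=10)
     C = A %*% B   # the compiler picks a backend-specific physical operator
 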
 ### Basic Matrix Multiplication Operators
 
diff --git a/gpu.md b/gpu.md
index c5cdb56..91e3bdd 100644
--- a/gpu.md
+++ b/gpu.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: Using SystemML with GPU
-description: Using SystemML with GPU
+title: Using SystemDS with GPU
+description: Using SystemDS with GPU
 ---
 <!--
 {% comment %}
@@ -29,7 +29,7 @@ limitations under the License.
 
 # User Guide
 
-To use SystemML on GPUs, please ensure that [CUDA 9](https://developer.nvidia.com/cuda-90-download-archive) and
+To use SystemDS on GPUs, please ensure that [CUDA 9](https://developer.nvidia.com/cuda-90-download-archive) and
 [CuDNN 7](https://developer.nvidia.com/cudnn) are installed on your system.
 
 ```
@@ -43,9 +43,9 @@ $ cat /usr/local/cuda/include/cudnn.h | grep "CUDNN_MAJOR\|CUDNN_MINOR"
 
 Depending on the API, the GPU backend can be enabled in different ways:
 
-1. When invoking SystemML from command-line, the GPU backend can be enabled by providing the command-line `-gpu` flag.
-2. When invoking SystemML using the (Python or Scala) MLContext and MLLearn (includes Caffe2DML and Keras2DML) APIs, please use the `setGPU(enable)` method.
-3. When invoking SystemML using the JMLC API, please set the `useGpu` parameter in `org.apache.sysml.api.jmlc.Connection` class's `prepareScript` method.
+1. When invoking SystemDS from command-line, the GPU backend can be enabled by providing the command-line `-gpu` flag.
+2. When invoking SystemDS using the (Python or Scala) MLContext and MLLearn (includes Caffe2DML and Keras2DML) APIs, please use the `setGPU(enable)` method.
+3. When invoking SystemDS using the JMLC API, please set the `useGpu` parameter in `org.apache.sysml.api.jmlc.Connection` class's `prepareScript` method.
 
 Python users do not need to explicitly provide the jar during their invocation. 
 For all other APIs, please remember to include the `systemml-*-extra.jar` in the classpath as described below.
@@ -55,18 +55,18 @@ For all other APIs, please remember to include the `systemml-*-extra.jar` in the
 To enable the GPU backend via the command line, please provide `systemml-1.*-extra.jar` in the classpath and the `-gpu` flag.
 
 ```
-spark-submit --jars systemml-*-extra.jar SystemML.jar -f myDML.dml -gpu
+spark-submit --jars systemml-*-extra.jar SystemDS.jar -f myDML.dml -gpu
 ``` 
 
 To skip memory-checking and force all GPU-enabled operations on the GPU, please provide the `force` option to the `-gpu` flag.
 
 ```
-spark-submit --jars systemml-*-extra.jar SystemML.jar -f myDML.dml -gpu force
+spark-submit --jars systemml-*-extra.jar SystemDS.jar -f myDML.dml -gpu force
 ``` 
 
 ## Python users
 
-Please install SystemML using pip:
+Please install SystemDS using pip:
 - For released version: `pip install systemml`
 - For bleeding edge version: 
 ```
@@ -77,7 +77,7 @@ pip install target/systemml-*-SNAPSHOT-python.tar.gz
 ```
 
 Then you can use the `setGPU(True)` method of [MLContext](http://apache.github.io/systemml/spark-mlcontext-programming-guide.html) and 
-[MLLearn](http://apache.github.io/systemml/beginners-guide-python.html#invoke-systemmls-algorithms) APIs to enable the GPU usage.
+[MLLearn](http://apache.github.io/systemml/beginners-guide-python.html#invoke-systemmls-algorithms) APIs to enable GPU usage.
 
 ```python
 from systemml.mllearn import Caffe2DML
@@ -98,16 +98,16 @@ To enable the GPU backend via command-line, please provide `systemml-*-extra.jar
 the `setGPU(True)` method of the [MLContext](http://apache.github.io/systemml/spark-mlcontext-programming-guide.html) API to enable GPU usage.
 
 ```
-spark-shell --jars systemml-*-extra.jar,SystemML.jar
+spark-shell --jars systemml-*-extra.jar,SystemDS.jar
 ``` 
 
 # Advanced Configuration
 
 ## Using single precision
 
-By default, SystemML uses double precision to store its matrices in the GPU memory.
+By default, SystemDS uses double precision to store its matrices in the GPU memory.
 To use single precision, the user needs to set the configuration property 'sysml.floating.point.precision'
-to 'single'. However, with exception of BLAS operations, SystemML always performs all CPU operations
+to 'single'. However, with the exception of BLAS operations, SystemDS always performs all CPU operations
 in double precision.
 
 ## Training very deep networks
@@ -117,12 +117,12 @@ To train very deep network with double precision, no additional configurations a
 But to train a very deep network with single precision, the user can speed up eviction by
 using a shadow buffer. The fraction of driver memory to be allocated to the shadow buffer can
 be set using the configuration property 'sysml.gpu.eviction.shadow.bufferSize'.
-In the current version, the shadow buffer is currently not guarded by SystemML
+In the current version, the shadow buffer is not guarded by SystemDS
 and can potentially lead to OOM if the network is deep as well as wide.
 
 ### Unified memory allocator
 
-By default, SystemML uses CUDA's memory allocator and performs on-demand eviction
+By default, SystemDS uses CUDA's memory allocator and performs on-demand eviction
 using the eviction policy set by the configuration property 'sysml.gpu.eviction.policy'.
 To use CUDA's unified memory allocator that performs page-level eviction instead,
 please set the configuration property 'sysml.gpu.memory.allocator' to 'unified_memory'.
@@ -155,9 +155,9 @@ $ cat /usr/local/cuda/include/cudnn.h | grep "CUDNN_MAJOR\|CUDNN_MINOR"
 ```
 
 
-### How do I verify the CUDA and CuDNN version that SystemML depends on?
+### How do I verify the CUDA and CuDNN version that SystemDS depends on?
 
-- Check the `jcuda.version` property in SystemML's `pom.xml` file.
+- Check the `jcuda.version` property in SystemDS's `pom.xml` file.
 - Then find the CUDA dependency in [JCuda's documentation](http://www.jcuda.org/downloads/downloads.html).
 - For your reference, here are the corresponding CUDA and CuDNN versions for a given JCuda version:
 
@@ -185,7 +185,7 @@ $ ./bin/x86_64/linux/release/deviceQuery
 $ ./bin/x86_64/linux/release/bandwidthTest 
 $ ./bin/x86_64/linux/release/matrixMulCUBLAS 
 ```
-- Test CUDA and CuDNN with SystemML
+- Test CUDA and CuDNN with SystemDS
 ```
 $ git clone https://github.com/apache/systemml.git
 $ cd systemml
diff --git a/hadoop-batch-mode.md b/hadoop-batch-mode.md
index 9b29d29..e534bb6 100644
--- a/hadoop-batch-mode.md
+++ b/hadoop-batch-mode.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: Invoking SystemML in Hadoop Batch Mode
-description: Invoking SystemML in Hadoop Batch Mode
+title: Invoking SystemDS in Hadoop Batch Mode
+description: Invoking SystemDS in Hadoop Batch Mode
 ---
 <!--
 {% comment %}
@@ -30,14 +30,14 @@ limitations under the License.
 
 # Overview
 
-Given that a primary purpose of SystemML is to perform machine learning on large distributed data sets,
-two of the most important ways to invoke SystemML are Hadoop Batch and Spark Batch modes.
-Here, we will look at SystemML's Hadoop Batch mode in more depth.
+Given that a primary purpose of SystemDS is to perform machine learning on large distributed data sets,
+two of the most important ways to invoke SystemDS are Hadoop Batch and Spark Batch modes.
+Here, we will look at SystemDS's Hadoop Batch mode in more depth.
 
-We will look at running SystemML with Standalone Hadoop, Pseudo-Distributed Hadoop, and Distributed Hadoop.
-We will first run SystemML on a single machine with Hadoop running in Standalone mode. Next, we'll run SystemML on HDFS
+We will look at running SystemDS with Standalone Hadoop, Pseudo-Distributed Hadoop, and Distributed Hadoop.
+We will first run SystemDS on a single machine with Hadoop running in Standalone mode. Next, we'll run SystemDS on HDFS
 in Hadoop's Pseudo-Distributed mode on a single machine, followed by Pseudo-Distributed mode with YARN.
-After that, we'll set up a 4-node Hadoop cluster and run SystemML on Distributed Hadoop with YARN.
+After that, we'll set up a 4-node Hadoop cluster and run SystemDS on Distributed Hadoop with YARN.
 
 Note that this tutorial does not address security. For security considerations with regards to Hadoop, please
 refer to the Hadoop documentation.
@@ -47,41 +47,41 @@ refer to the Hadoop documentation.
 
 # Hadoop Batch Mode Invocation Syntax
 
-SystemML can be invoked in Hadoop Batch mode using the following syntax:
+SystemDS can be invoked in Hadoop Batch mode using the following syntax:
 
-    hadoop jar SystemML.jar [-? | -help | -f <filename>] (-config <config_filename>) ([-args | -nvargs] <args-list>)
+    hadoop jar SystemDS.jar [-? | -help | -f <filename>] (-config <config_filename>) ([-args | -nvargs] <args-list>)
 
-The `SystemML.jar` file is specified to Hadoop using the `jar` option.
-The DML script to invoke is specified after the `-f` argument. Configuration settings can be passed to SystemML
+The `SystemDS.jar` file is specified to Hadoop using the `jar` option.
+The DML script to invoke is specified after the `-f` argument. Configuration settings can be passed to SystemDS
 using the optional `-config` argument. DML scripts can optionally take named arguments (`-nvargs`) or positional
 arguments (`-args`). Named arguments are preferred over positional arguments. Positional arguments are considered
-to be deprecated. All the primary algorithm scripts included with SystemML use named arguments.
+to be deprecated. All the primary algorithm scripts included with SystemDS use named arguments.
 
 
 **Example #1: DML Invocation with Named Arguments**
 
-    hadoop jar systemml/SystemML.jar -f systemml/algorithms/Kmeans.dml -nvargs X=X.mtx k=5
+    hadoop jar systemml/SystemDS.jar -f systemml/algorithms/Kmeans.dml -nvargs X=X.mtx k=5
 
 
 **Example #2: DML Invocation with Positional Arguments**
 
-	hadoop jar systemml/SystemML.jar -f example/test/LinearRegression.dml -args "v" "y" 0.00000001 "w"
+	hadoop jar systemml/SystemDS.jar -f example/test/LinearRegression.dml -args "v" "y" 0.00000001 "w"
 
-In a clustered environment, it is *highly* recommended that SystemML configuration settings are specified
-in a `SystemML-config.xml` file. By default, SystemML will look for this file in the current working
-directory (`./SystemML-config.xml`). This location can be overridden by the `-config ` argument.
+In a clustered environment, it is *highly* recommended that SystemDS configuration settings are specified
+in a `SystemDS-config.xml` file. By default, SystemDS will look for this file in the current working
+directory (`./SystemDS-config.xml`). This location can be overridden by the `-config` argument.
 
 **Example #3: DML Invocation with Configuration File Explicitly Specified and Named Arguments**
 
-	hadoop jar systemml/SystemML.jar -f systemml/algorithms/Kmeans.dml -config /conf/SystemML-config.xml -nvargs X=X.mtx k=5
+	hadoop jar systemml/SystemDS.jar -f systemml/algorithms/Kmeans.dml -config /conf/SystemDS-config.xml -nvargs X=X.mtx k=5
 
-For recommended SystemML configuration settings in a clustered environment, please see
+For recommended SystemDS configuration settings in a clustered environment, please see
 [Recommended Hadoop Cluster Configuration Settings](hadoop-batch-mode.html#recommended-hadoop-cluster-configuration-settings).
 
 
 * * *
 
-# SystemML with Standalone Hadoop
+# SystemDS with Standalone Hadoop
 
 In Standalone mode, Hadoop runs on a single machine as a single Java process.
 
@@ -132,13 +132,13 @@ To verify that Java and Hadoop were on the path, I used the `java -version` and
 	From source with checksum f9ebb94bf5bf9bec892825ede28baca
 	This command was run using /home/hadoop/hadoop-2.6.2/share/hadoop/common/hadoop-common-2.6.2.jar
 
-Next, I downloaded a SystemML release from the [downloads](http://systemml.apache.org/download.html) page.
+Next, I downloaded a SystemDS release from the [downloads](http://systemml.apache.org/download.html) page.
 Following this, I unpacked it.
 
 	[hadoop@host1 ~]$ tar -xvzf systemml-{{site.SYSTEMML_VERSION}}.tar.gz
 
 
-**Alternatively**, we could have built the SystemML distributed release using [Apache Maven](http://maven.apache.org) and unpacked it.
+**Alternatively**, we could have built the SystemDS distributed release using [Apache Maven](http://maven.apache.org) and unpacked it.
 
 	[hadoop@host1 ~]$ git clone https://github.com/apache/systemml.git
 	[hadoop@host1 ~]$ cd systemml
@@ -146,31 +146,31 @@ Following this, I unpacked it.
 	[hadoop@host1 systemml]$ tar -xvzf target/systemml-{{site.SYSTEMML_VERSION}}.tar.gz -C ..
 	[hadoop@host1 ~]$ cd ..
 
-I downloaded the `genLinearRegressionData.dml` script that is used in the SystemML README example.
+I downloaded the `genLinearRegressionData.dml` script that is used in the SystemDS README example.
 
 	[hadoop@host1 ~]$ wget https://raw.githubusercontent.com/apache/systemml/master/scripts/datagen/genLinearRegressionData.dml
 
 Next, I invoked the `genLinearRegressionData.dml` DML script in Hadoop Batch mode.
-Hadoop was executed with the `SystemML.jar` file specified by the hadoop `jar` option.
+Hadoop was executed with the `SystemDS.jar` file specified by the hadoop `jar` option.
 The `genLinearRegressionData.dml` was specified using the `-f` option. Named input
 arguments to the DML script were specified following the `-nvargs` option.
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
 	15/11/11 15:56:21 INFO api.DMLScript: BEGIN DML run 11/11/2015 15:56:21
 	15/11/11 15:56:21 INFO api.DMLScript: HADOOP_HOME: /home/hadoop/hadoop-2.6.2
-	15/11/11 15:56:21 WARN conf.DMLConfig: No default SystemML config file (./SystemML-config.xml) found
+	15/11/11 15:56:21 WARN conf.DMLConfig: No default SystemDS config file (./SystemDS-config.xml) found
 	15/11/11 15:56:21 WARN conf.DMLConfig: Using default settings in DMLConfig
 	15/11/11 15:56:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
 	15/11/11 15:56:22 WARN hops.OptimizerUtils: Auto-disable multi-threaded text read for 'text' and 'csv' due to thread contention on JRE < 1.8 (java.version=1.7.0_79).
-	15/11/11 15:56:22 INFO api.DMLScript: SystemML Statistics:
+	15/11/11 15:56:22 INFO api.DMLScript: SystemDS Statistics:
 	Total execution time:		0.288 sec.
 	Number of executed MR Jobs:	0.
 
 	15/11/11 15:56:22 INFO api.DMLScript: END DML run 11/11/2015 15:56:22
 
-In the console output, we see a warning that no default SystemML config file was found in the current working directory.
-In a distributed environment on a large data set, it is highly advisable to specify configuration settings in a SystemML config file for
-optimal performance. The location of the SystemML config file can be explicitly specified using the `-config ` argument.
+In the console output, we see a warning that no default SystemDS config file was found in the current working directory.
+In a distributed environment on a large data set, it is highly advisable to specify configuration settings in a SystemDS config file for
+optimal performance. The location of the SystemDS config file can be explicitly specified using the `-config` argument.
 
 The OptimizerUtils warning occurs because parallel multi-threaded text reads in Java versions less than 1.8 result
 in thread contention issues, so only a single thread reads matrix data in text formats.
@@ -200,9 +200,9 @@ To clean things up, I'll delete the files that were generated.
 
 * * *
 
-# SystemML with Pseudo-Distributed Hadoop
+# SystemDS with Pseudo-Distributed Hadoop
 
-Next, we'll look at running SystemML with Hadoop in Pseudo-Distributed mode. In Pseudo-Distributed mode, each Hadoop daemon
+Next, we'll look at running SystemDS with Hadoop in Pseudo-Distributed mode. In Pseudo-Distributed mode, each Hadoop daemon
 (such as NameNode and DataNode) runs in a separate Java process on a single machine.
 
 In the previous section about Hadoop Standalone mode, we set up the `JAVA_HOME` and `HADOOP_HOME` environment variables
@@ -330,14 +330,14 @@ If we look at our HDFS file system, we see that it currently doesn't contain any
 
 Let's go ahead and execute the `genLinearRegressionData.dml` script in Hadoop Pseudo-Distributed mode.
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
 	15/11/11 18:16:33 INFO api.DMLScript: BEGIN DML run 11/11/2015 18:16:33
 	15/11/11 18:16:33 INFO api.DMLScript: HADOOP_HOME: /home/hadoop/hadoop-2.6.2
-	15/11/11 18:16:33 WARN conf.DMLConfig: No default SystemML config file (./SystemML-config.xml) found
+	15/11/11 18:16:33 WARN conf.DMLConfig: No default SystemDS config file (./SystemDS-config.xml) found
 	15/11/11 18:16:33 WARN conf.DMLConfig: Using default settings in DMLConfig
 	15/11/11 18:16:33 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
 	15/11/11 18:16:33 WARN hops.OptimizerUtils: Auto-disable multi-threaded text read for 'text' and 'csv' due to thread contention on JRE < 1.8 (java.version=1.7.0_79).
-	15/11/11 18:16:35 INFO api.DMLScript: SystemML Statistics:
+	15/11/11 18:16:35 INFO api.DMLScript: SystemDS Statistics:
 	Total execution time:		1.484 sec.
 	Number of executed MR Jobs:	0.
 
@@ -384,7 +384,7 @@ I'll stop HDFS using the `stop-dfs.sh` script and then verify that the Java proc
 
 * * *
 
-# SystemML with Pseudo-Distributed Hadoop and YARN
+# SystemDS with Pseudo-Distributed Hadoop and YARN
 
 To add YARN to Pseudo-Distributed Hadoop on the single machine, we need to take our setup from the
 previous example and update two configuration
@@ -453,20 +453,20 @@ We can now view YARN information via the web interface on port 8088 (http://host
 I'll execute the `genLinearRegressionData.dml` example that we've previously considered.
 
 	[hadoop@host1 hadoop]$ cd ~
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
 	15/11/12 11:57:04 INFO api.DMLScript: BEGIN DML run 11/12/2015 11:57:04
 	15/11/12 11:57:04 INFO api.DMLScript: HADOOP_HOME: /home/hadoop/hadoop-2.6.2
-	15/11/12 11:57:04 WARN conf.DMLConfig: No default SystemML config file (./SystemML-config.xml) found
+	15/11/12 11:57:04 WARN conf.DMLConfig: No default SystemDS config file (./SystemDS-config.xml) found
 	15/11/12 11:57:04 WARN conf.DMLConfig: Using default settings in DMLConfig
 	15/11/12 11:57:05 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
 	15/11/12 11:57:06 WARN hops.OptimizerUtils: Auto-disable multi-threaded text read for 'text' and 'csv' due to thread contention on JRE < 1.8 (java.version=1.7.0_79).
-	15/11/12 11:57:07 INFO api.DMLScript: SystemML Statistics:
+	15/11/12 11:57:07 INFO api.DMLScript: SystemDS Statistics:
 	Total execution time:		1.265 sec.
 	Number of executed MR Jobs:	0.
 
 	15/11/12 11:57:07 INFO api.DMLScript: END DML run 11/12/2015 11:57:07
 
-If we examine the HDFS file system, we see the files generated by the execution of the DML script by SystemML on Hadoop.
+If we examine the HDFS file system, we see the files generated by the execution of the DML script by SystemDS on Hadoop.
 
 	[hadoop@host1 ~]$ hdfs dfs -ls
 	Found 5 items
@@ -510,9 +510,9 @@ the next example.
 
 * * *
 
-# SystemML with Distributed Hadoop and YARN
+# SystemDS with Distributed Hadoop and YARN
 
-In our previous example, we ran SystemML on Hadoop in Pseudo-Distributed mode with YARN on a single machine.
+In our previous example, we ran SystemDS on Hadoop in Pseudo-Distributed mode with YARN on a single machine.
 This example will look at Distributed Hadoop with YARN on a 4-node cluster. Each server is running
 Red Hat Enterprise Linux Server, release 6.6.
 
@@ -737,19 +737,19 @@ If we look at the Hadoop (on port 50070) and YARN (on port 8088) web interfaces,
 
 * * *
 
-## SystemML with Distributed Hadoop and YARN: Linear Regression Example
+## SystemDS with Distributed Hadoop and YARN: Linear Regression Example
 
-Let's go ahead and run the SystemML example from the GitHub README.
+Let's go ahead and run the SystemDS example from the GitHub README.
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f genLinearRegressionData.dml -nvargs numSamples=1000 numFeatures=50 maxFeatureValue=5 maxWeight=5 addNoise=FALSE b=0 sparsity=0.7 output=linRegData.csv format=csv perc=0.5
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/utils/sample.dml -nvargs X=linRegData.csv sv=perc.csv O=linRegDataParts ofmt=csv
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/utils/sample.dml -nvargs X=linRegData.csv sv=perc.csv O=linRegDataParts ofmt=csv
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/utils/splitXY.dml -nvargs X=linRegDataParts/1 y=51 OX=linRegData.train.data.csv OY=linRegData.train.labels.csv ofmt=csv
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/utils/splitXY.dml -nvargs X=linRegDataParts/1 y=51 OX=linRegData.train.data.csv OY=linRegData.train.labels.csv ofmt=csv
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/utils/splitXY.dml -nvargs X=linRegDataParts/2 y=51 OX=linRegData.test.data.csv OY=linRegData.test.labels.csv ofmt=csv
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/utils/splitXY.dml -nvargs X=linRegDataParts/2 y=51 OX=linRegData.test.data.csv OY=linRegData.test.labels.csv ofmt=csv
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/LinearRegDS.dml -nvargs X=linRegData.train.data.csv Y=linRegData.train.labels.csv B=betas.csv fmt=csv
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/LinearRegDS.dml -nvargs X=linRegData.train.data.csv Y=linRegData.train.labels.csv B=betas.csv fmt=csv
 	...
 	BEGIN LINEAR REGRESSION SCRIPT
 	Reading X and Y...
@@ -768,11 +768,11 @@ Let's go ahead and run the SystemML example from the GitHub README.
 	ADJUSTED_R2_VS_0,1.0
 	Writing the output matrix...
 	END LINEAR REGRESSION SCRIPT
-	15/11/17 15:50:34 INFO api.DMLScript: SystemML Statistics:
+	15/11/17 15:50:34 INFO api.DMLScript: SystemDS Statistics:
 	Total execution time:		0.480 sec.
 	...
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/GLM-predict.dml -nvargs X=linRegData.test.data.csv Y=linRegData.test.labels.csv B=betas.csv fmt=csv
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/GLM-predict.dml -nvargs X=linRegData.test.data.csv Y=linRegData.test.labels.csv B=betas.csv fmt=csv
 	...
 	LOGLHOOD_Z,,FALSE,NaN
 	LOGLHOOD_Z_PVAL,,FALSE,NaN
@@ -799,12 +799,12 @@ Let's go ahead and run the SystemML example from the GitHub README.
 	ADJUSTED_R2,1,,1.0
 	R2_NOBIAS,1,,1.0
 	ADJUSTED_R2_NOBIAS,1,,1.0
-	15/11/17 15:51:17 INFO api.DMLScript: SystemML Statistics:
+	15/11/17 15:51:17 INFO api.DMLScript: SystemDS Statistics:
 	Total execution time:		0.269 sec.
 	...
 
 
-If we look at HDFS, we can see the files that were generated by the SystemML DML script executions.
+If we look at HDFS, we can see the files that were generated by the SystemDS DML script executions.
 
 	[hadoop@host1 ~]$ hdfs dfs -ls
 	Found 16 items
@@ -836,16 +836,16 @@ Before the next example, I'll delete the files created in HDFS by this example.
 
 * * *
 
-## SystemML with Distributed Hadoop and YARN: K-Means Clustering Example
+## SystemDS with Distributed Hadoop and YARN: K-Means Clustering Example
 
-Our previous example showed SystemML running in Hadoop Batch mode on a 4-node cluster with YARN.
+Our previous example showed SystemDS running in Hadoop Batch mode on a 4-node cluster with YARN.
 However, the size of the data used was trivial. In this example, we'll generate a slightly larger set
 of data and then analyze that data with the `Kmeans.dml` and `Kmeans-predict.dml` scripts.
-Information about the SystemML K-means clustering algorithm can be found in the
-[K-Means Clustering](algorithms-clustering.html#k-means-clustering) section of the [SystemML
+Information about the SystemDS K-means clustering algorithm can be found in the
+[K-Means Clustering](algorithms-clustering.html#k-means-clustering) section of the [SystemDS
 Algorithms Reference](algorithms-reference.html).
 
-I'm going to modify my `SystemML-config.xml` file.
+I'm going to modify my `SystemDS-config.xml` file.
I'll update the `numreducers` property to 6, which is twice my number of data nodes.
 The `numreducers` property specifies the number of reduce tasks per MR job.
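
A minimal sketch of the corresponding entry (assuming the XML element form shown elsewhere in this documentation):

	<!-- default number of reduce tasks per MR job, default: 2 x number of nodes -->
	<numreducers>6</numreducers>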
 
@@ -856,10 +856,10 @@ To begin, I'll download the `genRandData4Kmeans.dml` script that I'll use to gen
 	[hadoop@host1 ~]$ wget https://raw.githubusercontent.com/apache/systemml/master/scripts/datagen/genRandData4Kmeans.dml
 
 A description of the named arguments that can be passed in to this script can be found in the comment section at the top of the
-`genRandData4Kmeans.dml` file. For data, I'll generate a matrix `X.mtx` consisting of 1 million rows and 100 features. I'll explicitly reference my `SystemML-config.xml` file, since I'm
-executing SystemML in Hadoop from my home directory rather than from the SystemML project root directory.
+`genRandData4Kmeans.dml` file. For data, I'll generate a matrix `X.mtx` consisting of 1 million rows and 100 features. I'll explicitly reference my `SystemDS-config.xml` file, since I'm
+executing SystemDS in Hadoop from my home directory rather than from the SystemDS project root directory.
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f genRandData4Kmeans.dml -config systemml-{{site.SYSTEMML_VERSION}}/SystemML-config.xml -nvargs nr=1000000 nf=100 nc=10 dc=10.0 dr=1.0 fbf=100.0 cbf=100.0 X=X.mtx C=C.mtx Y=Y.mtx YbyC=YbyC.mtx
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f genRandData4Kmeans.dml -config systemml-{{site.SYSTEMML_VERSION}}/SystemDS-config.xml -nvargs nr=1000000 nf=100 nc=10 dc=10.0 dr=1.0 fbf=100.0 cbf=100.0 X=X.mtx C=C.mtx Y=Y.mtx YbyC=YbyC.mtx
 
 After the data generation has finished, I'll check HDFS for the amount of space used. The 1M-row matrix `X.mtx`
 requires about 2.8GB of space.
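
One way to check the space used is shown below (an illustrative command; the exact invocation in the original walkthrough may differ):

	[hadoop@host1 ~]$ hdfs dfs -du -h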
@@ -895,7 +895,7 @@ Here we can see the `X.mtx` data files.
 
 Next, I'll run the `Kmeans.dml` algorithm on the 1M-row matrix `X.mtx`.
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/Kmeans.dml -config /systemml-{{site.SYSTEMML_VERSION}}/SystemML-config.xml -nvargs X=X.mtx k=5 C=Centroids.mtx
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/Kmeans.dml -config systemml-{{site.SYSTEMML_VERSION}}/SystemDS-config.xml -nvargs X=X.mtx k=5 C=Centroids.mtx
 
 We can see the `Centroids.mtx` data file has been written to HDFS.
 
@@ -916,7 +916,7 @@ We can see the `Centroids.mtx` data file has been written to HDFS.
Now that we have trained our model, we can test it with
the `Kmeans-predict.dml` script.
 
-	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemML.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/Kmeans-predict.dml -config systemml-{{site.SYSTEMML_VERSION}}/SystemML-config.xml -nvargs X=X.mtx C=Centroids.mtx prY=PredY.mtx O=stats.txt
+	[hadoop@host1 ~]$ hadoop jar systemml-{{site.SYSTEMML_VERSION}}/SystemDS.jar -f systemml-{{site.SYSTEMML_VERSION}}/algorithms/Kmeans-predict.dml -config systemml-{{site.SYSTEMML_VERSION}}/SystemDS-config.xml -nvargs X=X.mtx C=Centroids.mtx prY=PredY.mtx O=stats.txt
 
 In the file system, we can see that the `PredY.mtx` matrix was created.
 The `stats.txt` file lists statistics about the results.
@@ -950,7 +950,7 @@ see in the resulting metadata file.
 	    ,"cols": 1
 	    ,"nnz": 1000000
 	    ,"format": "text"
-	    ,"description": { "author": "SystemML" }
+	    ,"description": { "author": "SystemDS" }
 	}
 
 The statistics generated from testing the method are displayed below.
@@ -970,7 +970,7 @@ The statistics generated from testing the method are displayed below.
 
 # Recommended Hadoop Cluster Configuration Settings
 
-Below are some recommended Hadoop configuration file settings that may be of assistance when running SystemML on Hadoop
+Below are some recommended Hadoop configuration file settings that may be of assistance when running SystemDS on Hadoop
 in a clustered environment.
 
 <table>
diff --git a/index.md b/index.md
index 3169b15..d65b54a 100644
--- a/index.md
+++ b/index.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-displayTitle: SystemML Documentation
-title: SystemML Documentation
-description: SystemML Documentation
+displayTitle: SystemDS Documentation
+title: SystemDS Documentation
+description: SystemDS Documentation
 ---
 <!--
 {% comment %}
@@ -23,24 +23,24 @@ limitations under the License.
 {% endcomment %}
 -->
 
-SystemML is a flexible, scalable machine learning system.
-SystemML's distinguishing characteristics are:
+SystemDS is a flexible, scalable machine learning system.
+SystemDS's distinguishing characteristics are:
 
   1. **Algorithm customizability via R-like and Python-like languages**.
   2. **Multiple execution modes**, including Spark MLContext, Spark Batch, Hadoop Batch, Standalone, and JMLC.
   3. **Automatic optimization** based on data and cluster characteristics to ensure both efficiency and scalability.
 
-The [SystemML GitHub README](https://github.com/apache/systemml) describes
-building, testing, and running SystemML. Please read [Contributing to SystemML](contributing-to-systemml)
-to find out how to help make SystemML even better!
+The [SystemDS GitHub README](https://github.com/apache/systemml) describes
+building, testing, and running SystemDS. Please read [Contributing to SystemDS](contributing-to-systemds)
+to find out how to help make SystemDS even better!
 
-To download SystemML, visit the [downloads](http://systemml.apache.org/download) page.
+To download SystemDS, visit the [downloads](http://systemml.apache.org/download) page.
 
-This version of SystemML supports: Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop 2.6+, and Spark 2.1+.
+This version of SystemDS supports: Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop 2.6+, and Spark 2.1+.
 
 ## Quick tour of the documentation
 
-* If you are new to SystemML, please refer to the [installation guide](http://systemml.apache.org/install-systemml.html) and try out our [sample notebooks](http://systemml.apache.org/get-started.html#sample-notebook)
+* If you are new to SystemDS, please refer to the [installation guide](http://systemml.apache.org/install-systemml.html) and try out our [sample notebooks](http://systemml.apache.org/get-started.html#sample-notebook)
 * If you want to invoke one of our [pre-implemented algorithms](algorithms-reference):
   * In Python, consider using 
     * the convenient [mllearn API](http://apache.github.io/systemml/python-reference.html#mllearn-api). The usage is described in our [beginner's guide](http://apache.github.io/systemml/beginners-guide-python.html#invoke-systemmls-algorithms)  
@@ -53,21 +53,21 @@ This version of SystemML supports: Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop
   * Specifying your network in [Keras](https://keras.io/) format and invoking it with [Keras2DML](beginners-guide-keras2dml) API
   * Or specifying your network in [Caffe](http://caffe.berkeleyvision.org/) format and invoking it with [Caffe2DML](beginners-guide-caffe2dml) API
   * Or using DML-bodied [NN library](https://github.com/apache/systemml/tree/master/scripts/nn). The usage is described in our [sample notebook](https://github.com/apache/systemml/blob/master/samples/jupyter-notebooks/Deep%20Learning%20Image%20Classification.ipynb)
-* Since training a deep neural network is often compute-bound, you may want to enable SystemML's
+* Since training a deep neural network is often compute-bound, you may want to enable SystemDS's
   * [native BLAS](native-backend)
   * Or [GPU backend](gpu)
 * If you want to implement a custom machine learning algorithm and you are familiar with:
   * R syntax, consider implementing your algorithm in [DML](dml-language-reference) (recommended)
   * Python syntax, you can implement your algorithm in [PyDML](beginners-guide-to-dml-and-pydml) or using the [matrix class](http://apache.github.io/systemml/python-reference.html#matrix-class)
-* If you want to try out SystemML on your laptop, consider
-  * using the above mentioned APIs with Apache Spark (recommended). Please refer to our [installation guide](http://systemml.apache.org/install-systemml.html) for instructions on how to setup SystemML on your laptop
-  * Or running SystemML in the [standalone mode](standalone-guide) with Java
+* If you want to try out SystemDS on your laptop, consider
+  * using the above-mentioned APIs with Apache Spark (recommended). Please refer to our [installation guide](http://systemml.apache.org/install-systemml.html) for instructions on how to set up SystemDS on your laptop
+  * Or running SystemDS in the [standalone mode](standalone-guide) with Java
 
-## Running SystemML
+## Running SystemDS
 
 * [Beginner's Guide For Python Users](beginners-guide-python) - Beginner's Guide for Python users.
 * [Spark MLContext](spark-mlcontext-programming-guide) - Spark MLContext is a programmatic API
-for running SystemML from Spark via Scala, Python, or Java.
+for running SystemDS from Spark via Scala, Python, or Java.
   * [Spark Shell Example (Scala)](spark-mlcontext-programming-guide#spark-shell-example)
   * [Jupyter Notebook Example (PySpark)](spark-mlcontext-programming-guide#jupyter-pyspark-notebook-example---poisson-nonnegative-matrix-factorization)
 * [Spark Batch](spark-batch-mode) - Algorithms are automatically optimized to run across Spark clusters.
@@ -75,7 +75,7 @@ for running SystemML from Spark via Scala, Python, or Java.
 * [Standalone](standalone-guide) - Standalone mode allows data scientists to rapidly prototype algorithms on a single
 machine in R-like and Python-like declarative languages.
 * [JMLC](jmlc) - Java Machine Learning Connector.
-* [Deep Learning with SystemML](deep-learning)
+* [Deep Learning with SystemDS](deep-learning)
   * Keras2DML API for Deep Learning ([beginner's guide](beginners-guide-keras2dml), [reference guide](reference-guide-keras2dml)) - Converts a Keras model to DML.
   * Caffe2DML API for Deep Learning ([beginner's guide](beginners-guide-caffe2dml), [reference guide](reference-guide-caffe2dml)) - Converts a Caffe specification to DML.
 
@@ -92,19 +92,19 @@ An introduction to the basics of DML and PyDML.
 ## ML Algorithms
 
 * [Algorithms Reference](algorithms-reference) - The Algorithms Reference describes the
-machine learning algorithms included with SystemML in detail.
+machine learning algorithms included with SystemDS in detail.
 
 ## Tools
 
-* [Debugger Guide](debugger-guide) - SystemML supports DML script-level debugging through a
+* [Debugger Guide](debugger-guide) - SystemDS supports DML script-level debugging through a
 command-line interface.
-* [IDE Guide](developer-tools-systemml) - Useful IDE Guide for Developing SystemML.
+* [IDE Guide](developer-tools-systemds) - Useful IDE Guide for Developing SystemDS.
 
 ## Other
 
-* [Contributing to SystemML](contributing-to-systemml) - Describes ways to contribute to SystemML.
-* [Engine Developer Guide](engine-dev-guide) - Guide for internal SystemML engine development.
-* [Troubleshooting Guide](troubleshooting-guide) - Troubleshoot various issues related to SystemML.
-* [Release Process](release-process) - Description of the SystemML release process.
-* [Using Native BLAS](native-backend) in SystemML.
-* [Using GPU backend](gpu) in SystemML.
+* [Contributing to SystemDS](contributing-to-systemds) - Describes ways to contribute to SystemDS.
+* [Engine Developer Guide](engine-dev-guide) - Guide for internal SystemDS engine development.
+* [Troubleshooting Guide](troubleshooting-guide) - Troubleshoot various issues related to SystemDS.
+* [Release Process](release-process) - Description of the SystemDS release process.
+* [Using Native BLAS](native-backend) in SystemDS.
+* [Using GPU backend](gpu) in SystemDS.
diff --git a/jmlc.md b/jmlc.md
index e0d72ea..bf894da 100644
--- a/jmlc.md
+++ b/jmlc.md
@@ -25,27 +25,27 @@ limitations under the License.
 
 # Overview
 
-The `Java Machine Learning Connector (JMLC)` API is a programmatic interface for interacting with SystemML
-in an embedded fashion. To use JMLC, the small footprint "in-memory" SystemML jar file needs to be included on the
-classpath of the Java application, since JMLC invokes SystemML in an existing Java Virtual Machine. Because
-of this, JMLC allows access to SystemML's optimizations and fast linear algebra, but the bulk performance
-gain from running SystemML on a large Spark or Hadoop cluster is not available. However, this embeddable nature
-allows SystemML to be part of a production pipeline for tasks such as scoring.
+The `Java Machine Learning Connector (JMLC)` API is a programmatic interface for interacting with SystemDS
+in an embedded fashion. To use JMLC, the small footprint "in-memory" SystemDS jar file needs to be included on the
+classpath of the Java application, since JMLC invokes SystemDS in an existing Java Virtual Machine. Because
+of this, JMLC allows access to SystemDS's optimizations and fast linear algebra, but the bulk performance
+gain from running SystemDS on a large Spark or Hadoop cluster is not available. However, this embeddable nature
+allows SystemDS to be part of a production pipeline for tasks such as scoring.
 
 The primary purpose of JMLC is as a scoring API, where your scoring function is expressed using
-SystemML's DML (Declarative Machine Learning) language. Scoring occurs on a single machine in a single
+SystemDS's DML (Declarative Machine Learning) language. Scoring occurs on a single machine in a single
 JVM on a relatively small amount of input data which produces a relatively small amount of output data.
 For consistency, it is important to be able to express a scoring function in the same DML language used for
 training a model, since different implementations of linear algebra (for instance MATLAB and R) can deliver
 slightly different results.
 
-In addition to scoring, embedded SystemML can be used for tasks such as unsupervised learning (for
+In addition to scoring, embedded SystemDS can be used for tasks such as unsupervised learning (for
 example, clustering) in the context of a larger application running on a single machine.
 
Performance penalties include startup costs, so JMLC has facilities to perform some startup tasks once,
such as script precompilation. For the same reason, it tends to be best practice to do batch scoring, such
 as scoring 1000 records at a time. For large amounts of data, it is recommended to run DML in one
-of SystemML's distributed modes, such as Spark batch mode or Hadoop batch mode, to take advantage of SystemML's
+of SystemDS's distributed modes, such as Spark batch mode or Hadoop batch mode, to take advantage of SystemDS's
 distributed computing capabilities. JMLC offers embeddability at the cost of performance, so its use is
 dependent on the nature of the business use case being addressed.
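
As a rough sketch of that batching pattern (hedged: `numRecords` and the `fetchRecords` helper are hypothetical, and `script` is a `PreparedScript` set up as in the examples below):

{% highlight java %}
// score records in batches of 1000 to amortize JMLC startup costs
for (int start = 0; start < numRecords; start += 1000) {
    int end = Math.min(start + 1000, numRecords);
    double[][] batch = fetchRecords(start, end); // hypothetical data-access helper
    script.setMatrix("X", batch);
    double[][] scores = script.executeScript().getMatrix("predicted_y");
    // ... consume the batch of scores ...
}
{% endhighlight %}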
 
@@ -54,7 +54,7 @@ dependent on the nature of the business use case being addressed.
 JMLC can be configured to gather runtime statistics, as in the MLContext API, by calling Connection's `setStatistics()`
 method with a value of `true`. JMLC can also be configured to gather statistics on the memory used by matrices and
 frames in the DML script. To enable collection of memory statistics, call PreparedScript's `gatherMemStats()` method
-with a value of `true`. When finegrained statistics are enabled in `SystemML.conf`, JMLC will also report the variables
+with a value of `true`. When fine-grained statistics are enabled in `SystemDS.conf`, JMLC will also report the variables
 in the DML script which used the most memory. An example showing how to enable statistics in JMLC is presented in the
 section below.
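
As a quick illustration ahead of that example, here is a minimal sketch using the method names stated above:

{% highlight java %}
Connection conn = new Connection();
conn.setStatistics(true);  // gather runtime statistics, as in the MLContext API
String dml = conn.readScript("scoring-example.dml");
PreparedScript script = conn.prepareScript(dml,
    new String[] { "W", "X" }, new String[] { "predicted_y" }, false);
script.gatherMemStats(true);  // also track memory used by matrices and frames
{% endhighlight %}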
 
@@ -62,12 +62,12 @@ section below.
 
 # Examples
 
-JMLC is patterned loosely after JDBC. To interact with SystemML via JMLC, we can begin by creating a `Connection`
+JMLC is patterned loosely after JDBC. To interact with SystemDS via JMLC, we can begin by creating a `Connection`
 object. We can then prepare (precompile) a DML script by calling the `Connection`'s `prepareScript` method,
 which returns a `PreparedScript` object. We can then call the `executeScript` method on the `PreparedScript`
 object to invoke this script.
 
-Here, we see a "hello world" example, which invokes SystemML via JMLC and prints "hello world" to the console.
+Here, we see a "hello world" example, which invokes SystemDS via JMLC and prints "hello world" to the console.
 
 {% highlight java %}
 Connection conn = new Connection();
@@ -98,12 +98,12 @@ write(predicted_y, "./tmp", format="text");
 {% endhighlight %}
 
 
-In the Java below, we initialize SystemML by obtaining a `Connection` object. Next, we read in the above DML script
+In the Java below, we initialize SystemDS by obtaining a `Connection` object. Next, we read in the above DML script
 (`"scoring-example.dml"`) as a `String`. We precompile this script by calling the `prepareScript` method on the
 `Connection` object with the names of the inputs (`"W"` and `"X"`) and outputs (`"predicted_y"`) to register.
 
 Following this, we set matrix `"W"` and we set a matrix of input data `"X"`. We execute the script and read
-the resulting `"predicted_y"` matrix. We repeat this process. When done, we close the SystemML `Connection`.
+the resulting `"predicted_y"` matrix. We repeat this process. When done, we close the SystemDS `Connection`.
 
 
 #### Java
@@ -120,14 +120,14 @@ the resulting `"predicted_y"` matrix. We repeat this process. When done, we clos
  
     public static void main(String[] args) throws Exception {
  
-        // obtain connection to SystemML
+        // obtain connection to SystemDS
         Connection conn = new Connection();
  
         // read in and precompile DML script, registering inputs and outputs
         String dml = conn.readScript("scoring-example.dml");
         PreparedScript script = conn.prepareScript(dml, new String[] { "W", "X" }, new String[] { "predicted_y" }, false);
 
-        // obtain the runtime plan generated by SystemML
+        // obtain the runtime plan generated by SystemDS
         String plan = script.explain();
         System.out.println(plan);
 
@@ -206,5 +206,5 @@ the resulting `"predicted_y"` matrix. We repeat this process. When done, we clos
 
 ---
 
-For additional information regarding programmatic access to SystemML, please see the
+For additional information regarding programmatic access to SystemDS, please see the
 [Spark MLContext Programming Guide](spark-mlcontext-programming-guide.html).
diff --git a/lang-ref/README_HADOOP_CONFIG.txt b/lang-ref/README_HADOOP_CONFIG.txt
index e34d4f3..e96a535 100644
--- a/lang-ref/README_HADOOP_CONFIG.txt
+++ b/lang-ref/README_HADOOP_CONFIG.txt
@@ -1,11 +1,11 @@
 Usage
 -----
-The machine learning algorithms described in SystemML_Algorithms_Reference.pdf can be invoked
+The machine learning algorithms described in SystemDS_Algorithms_Reference.pdf can be invoked
from the hadoop command line using the algorithm-specific parameters described there.
 
Generic command line arguments are provided by the help command below.
 
-   hadoop jar SystemML.jar -? or -help 
+   hadoop jar SystemDS.jar -? or -help 
 
 
 Recommended configurations
@@ -53,26 +53,26 @@ behavior, we recommend to disable THP with
 4) JVM Reuse:
 Performance benefits from JVM reuse because data sets that fit into the mapper memory budget are 
 reused across tasks per slot. However, Hadoop 1.0.3 JVM Reuse is incompatible with security (when 
-using the LinuxTaskController). The workaround is to use the DefaultTaskController. SystemML provides 
-a configuration property in SystemML-config.xml to enable JVM reuse on a per job level without
+using the LinuxTaskController). The workaround is to use the DefaultTaskController. SystemDS provides 
+a configuration property in SystemDS-config.xml to enable JVM reuse on a per job level without
 changing the global cluster configuration.
    
    <jvmreuse>false</jvmreuse> 
    
 5) Number of Reducers:
-The number of reducers can have significant impact on performance. SystemML provides a configuration
+The number of reducers can have significant impact on performance. SystemDS provides a configuration
 property to set the default number of reducers per job without changing the global cluster configuration.
In general, we recommend a setting of twice the number of nodes. Smaller numbers create fewer intermediate
files; larger numbers increase the degree of parallelism for compute and parallel write. In
-SystemML-config.xml, set:
+SystemDS-config.xml, set:
    
    <!-- default number of reduce tasks per MR job, default: 2 x number of nodes -->
    <numreducers>12</numreducers> 
 
-6) SystemML temporary directories:
-SystemML uses temporary directories in two different locations: (1) on local file system for temping from 
+6) SystemDS temporary directories:
+SystemDS uses temporary directories in two different locations: (1) on the local file system for temporary files from 
 the client process, and (2) on HDFS for intermediate results between different MR jobs and between MR jobs 
-and in-memory operations. Locations of these directories can be configured in SystemML-config.xml with the
+and in-memory operations. Locations of these directories can be configured in SystemDS-config.xml with the
 following properties:
 
    <!-- local fs tmp working directory-->
diff --git a/native-backend.md b/native-backend.md
index 0f01fa4..2ab2c11 100644
--- a/native-backend.md
+++ b/native-backend.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: Using SystemML with Native BLAS support
-description: Using SystemML with Native BLAS support
+title: Using SystemDS with Native BLAS support
+description: Using SystemDS with Native BLAS support
 ---
 <!--
 {% comment %}
@@ -29,24 +29,24 @@ limitations under the License.
 
 # User Guide
 
-By default, SystemML implements all its matrix operations in Java.
+By default, SystemDS implements all its matrix operations in Java.
 This simplifies deployment especially in a distributed environment.
 
 In some cases (such as deep learning), the user might want to use native BLAS
-rather than SystemML's internal Java library for performing single-node
+rather than SystemDS's internal Java library for performing single-node
operations such as matrix multiplication, convolution, etc.
 
-To allow SystemML to use native BLAS rather than internal Java library,
+To allow SystemDS to use native BLAS rather than the internal Java library,
 please set the configuration property `sysml.native.blas` to `auto`.
 Other possible options are: `mkl`, `openblas` and `none`.
 The first two options will only attempt to use the respective BLAS libraries.
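
A minimal sketch of the corresponding `SystemDS-config.xml` entry (assuming the property is written as a plain XML element, like the other configuration properties):

```
<sysml.native.blas>auto</sysml.native.blas>
```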
 
-By default, SystemML will first attempt to use Intel MKL (if installed)
+By default, SystemDS will first attempt to use Intel MKL (if installed)
 and then OpenBLAS (if installed).
-If both Intel MKL and OpenBLAS are not available, SystemML
+If both Intel MKL and OpenBLAS are not available, SystemDS
 falls back to its internal Java library.
 
-The current version of SystemML only supports BLAS on **Linux** machines.
+The current version of SystemDS only supports BLAS on **Linux** machines.
 
 ## Step 1: Install BLAS
 
@@ -63,7 +63,7 @@ with license key. Since we use MKL DNN primitives, we depend on Intel MKL versio
 ### Option 2: Install OpenBLAS  
 
 The default OpenBLAS (via yum/apt-get) uses its internal threading rather than OpenMP, 
-which can lead to performance degradation when using SystemML. So, instead we recommend that you
+which can lead to performance degradation when using SystemDS. We therefore recommend that you
compile OpenBLAS from source instead of installing it with `yum` or `apt-get`.
 
 The steps to install OpenBLAS v0.2.20:
@@ -100,7 +100,7 @@ sudo ln -s /lib64/libgomp.so.1 /lib64/libgomp.so
 
 2. Alternatively, you can add the location of the native libraries (i.e. BLAS and other dependencies) 
 to the environment variable `LD_LIBRARY_PATH` (on Linux). 
-If you want to use SystemML with Spark, please add the following line to `spark-env.sh` 
+If you want to use SystemDS with Spark, please add the following line to `spark-env.sh` 
 (or to the bash profile).
 
 	export LD_LIBRARY_PATH=/path/to/blas-n-other-dependencies
@@ -115,7 +115,7 @@ mlCtx.setConfigProperty("sysml.native.blas.directory", "/path/to/blas-n-other-de
 
 ## Step 3: Set configuration property to enable native BLAS
 
-The configuration property `sysml.native.blas` can be either set in the file `SystemML-config.xml`
+The configuration property `sysml.native.blas` can be either set in the file `SystemDS-config.xml`
or using the `setConfigProperty` method of the `MLContext` or `mllearn` classes. For example:
 
 ```python 
@@ -146,7 +146,7 @@ Make sure that this path is accessible to Java as per instructions provided in t
 By default, OpenBLAS libraries will be installed in the location `/opt/OpenBLAS/lib/`.
 Make sure that this path is accessible to Java as per instructions provided in the above section.
 
-- Using OpenBLAS without OpenMP can lead to performance degradation when using SystemML.
+- Using OpenBLAS without OpenMP can lead to performance degradation when using SystemDS.
  
You can check whether the OpenBLAS on your system is compiled with OpenMP using the following commands:
If you don't see any output after the second command, then the OpenBLAS installed on your system is using its internal threading.
@@ -162,7 +162,7 @@ In this case, we highly recommend that you reinstall OpenBLAS using the above co
We noticed that the double-precision MKL DNN primitives for the convolution instruction
are considerably slower than the corresponding single-precision MKL DNN primitives
as of MKL 2017 Update 1. We anticipate that this performance bug will be fixed in future MKL versions.
-Until then or until SystemML supports single-precision matrices, we recommend that you use OpenBLAS when using script with `conv2d`.
+Until then, or until SystemDS supports single-precision matrices, we recommend that you use OpenBLAS when using scripts with `conv2d`.
 
Here is the end-to-end runtime performance, in seconds, of 10 `conv2d` operations
on 64 randomly generated images of size 256 x 256 with sparsity 0.9
@@ -245,12 +245,12 @@ The current set of dependencies other than MKL and OpenBLAS, are as follows:
 If CMake cannot detect your OpenBLAS installation, set the `OpenBLAS_HOME` environment variable to the OpenBLAS Home.
 
 
-## Debugging SystemML's native code
+## Debugging SystemDS's native code
 
-To debug issues in SystemML's native code, please use the following flags:
+To debug issues in SystemDS's native code, please use the following flags:
 
 ```
-$SPARK_HOME/bin/spark-submit --conf 'spark.driver.extraJavaOptions=-XX:OnError="gdb - %p"' SystemML.jar -f test_conv2d.dml -stats 10 -explain -nvargs stride=$stride pad=$pad out=out_cp.csv N=$N C=$C H=$H W=$W K=$K R=$R S=$S
+$SPARK_HOME/bin/spark-submit --conf 'spark.driver.extraJavaOptions=-XX:OnError="gdb - %p"' SystemDS.jar -f test_conv2d.dml -stats 10 -explain -nvargs stride=$stride pad=$pad out=out_cp.csv N=$N C=$C H=$H W=$W K=$K R=$R S=$S
 ```
 
 When it fails, it will start a native debugger.
\ No newline at end of file
diff --git a/python-performance-test.md b/python-performance-test.md
index b47b7c9..726501e 100644
--- a/python-performance-test.md
+++ b/python-performance-test.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-title: SystemML Performance Testing
-description: Description of SystemML performance testing.
-displayTitle: SystemML Performance Testing
+title: SystemDS Performance Testing
+description: Description of SystemDS performance testing.
+displayTitle: SystemDS Performance Testing
 ---
 <!--
 {% comment %}
@@ -118,7 +118,7 @@ Default setting for our performance test below:
 
 ## Examples
 
-Some examples of SystemML performance test with arguments shown below:
+Some examples of the SystemDS performance test with arguments are shown below:
 
 `./scripts/perftest/python/run_perftest.py --family binomial clustering multinomial regression1 regression2 stats1 stats2
 `
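
For instance, the same script can be restricted to a single family using the `--family` flag shown above:

`./scripts/perftest/python/run_perftest.py --family clustering
`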
diff --git a/python-reference.md b/python-reference.md
index 7d6af46..43a67c0 100644
--- a/python-reference.md
+++ b/python-reference.md
@@ -29,21 +29,21 @@ limitations under the License.
 
 ## Introduction
 
-SystemML enables flexible, scalable machine learning. This flexibility is achieved through the specification of a high-level declarative machine learning language that comes in two flavors, 
+SystemDS enables flexible, scalable machine learning. This flexibility is achieved through the specification of a high-level declarative machine learning language that comes in two flavors, 
 one with an R-like syntax (DML) and one with a Python-like syntax (PyDML).
 
 Algorithm scripts written in DML and PyDML can be run on Hadoop, on Spark, or in Standalone mode. 
-No script modifications are required to change between modes. SystemML automatically performs advanced optimizations 
+No script modifications are required to change between modes. SystemDS automatically performs advanced optimizations 
+based on data and cluster characteristics, so the need to manually tweak algorithms is largely reduced or eliminated.
 To understand more about DML and PyDML, we recommend that you read [Beginner's Guide to DML and PyDML](https://apache.github.io/systemml/beginners-guide-to-dml-and-pydml.html).
 
-For convenience of Python users, SystemML exposes several language-level APIs that allow Python users to use SystemML
+For the convenience of Python users, SystemDS exposes several language-level APIs that allow Python users to use SystemDS
and its algorithms without the need to know DML or PyDML. We explain these APIs in the sections below.
 
 ## matrix class
 
 The matrix class is an **experimental** feature that is often referred to as Python DSL.
-It allows the user to perform linear algebra operations in SystemML using a NumPy-like interface.
+It allows the user to perform linear algebra operations in SystemDS using a NumPy-like interface.
It implements basic matrix operators and matrix functions, as well as converters to common Python
 types (for example: Numpy arrays, PySpark DataFrame and Pandas
 DataFrame).
@@ -87,7 +87,7 @@ To disable lazy evaluation, please us `set_lazy` method:
 >>> import numpy as np
 >>> m1 = sml.matrix(np.ones((3,3)) + 2)
 
-Welcome to Apache SystemML!
+Welcome to Apache SystemDS!
 
 >>> m2 = sml.matrix(np.ones((3,3)) + 3)
 >>> np.add(m1, m2) + m1
@@ -113,7 +113,7 @@ Please see below [troubleshooting steps](http://apache.github.io/systemml/python
 ### Dealing with the loops
 
It is important to note that this API does not push down loops, which means the
-SystemML engine essentially gets an unrolled DML script.
+SystemDS engine essentially gets an unrolled DML script.
 This can lead to two issues:
 
 1. Since matrix is backed by lazy evaluation and uses a recursive Depth First Search (DFS),
@@ -129,7 +129,7 @@ The unrolling of the for loop can be demonstrated by the below example:
 >>> import numpy as np
 >>> m1 = sml.matrix(np.ones((3,3)) + 2)
 
-Welcome to Apache SystemML!
+Welcome to Apache SystemDS!
 
 >>> m2 = sml.matrix(np.ones((3,3)) + 3)
 >>> m3 = m1
@@ -159,7 +159,7 @@ We can reduce the impact of this unrolling by eagerly evaluating the variables i
 >>> import numpy as np
 >>> m1 = sml.matrix(np.ones((3,3)) + 2)
 
-Welcome to Apache SystemML!
+Welcome to Apache SystemDS!
 
 >>> m2 = sml.matrix(np.ones((3,3)) + 3)
 >>> m3 = m1
@@ -243,7 +243,7 @@ Residual sum of squares: 25282.12
 
For all the above functions, we always return a two-dimensional matrix, in particular for aggregation functions with an axis argument. 
 For example: Assuming m1 is a matrix of (3, n), NumPy returns a 1d vector of dimension (3,) for operation m1.sum(axis=1)
-whereas SystemML returns a 2d matrix of dimension (3, 1).
+whereas SystemDS returns a 2d matrix of dimension (3, 1).
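
A small NumPy-side sketch of the shape difference described above:

```python
import numpy as np

a = np.ones((3, 4))
print(a.sum(axis=1).shape)  # (3,) -- NumPy collapses the aggregated axis
# the SystemDS matrix API instead returns a 2d result of shape (3, 1)
```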
 
Note: an evaluated matrix contains a `data` field, computed by the `eval`
method, as a DataFrame or NumPy array.
@@ -335,8 +335,8 @@ save(mVar3, " ")
 
 ## MLContext API
 
-The Spark MLContext API offers a programmatic interface for interacting with SystemML from Spark using languages such as Scala, Java, and Python. 
-As a result, it offers a convenient way to interact with SystemML from the Spark Shell and from Notebooks such as Jupyter and Zeppelin.
+The Spark MLContext API offers a programmatic interface for interacting with SystemDS from Spark using languages such as Scala, Java, and Python. 
+As a result, it offers a convenient way to interact with SystemDS from the Spark Shell and from Notebooks such as Jupyter and Zeppelin.
 
 ### Usage
 
@@ -407,7 +407,7 @@ model.transform(df_test)
 </div>
 </div>
 
-Please note that when training using mllearn API (i.e. `model.fit(X_df)`), SystemML 
+Please note that when training using the mllearn API (i.e. `model.fit(X_df)`), SystemDS 
expects that labels have been converted to 1-based values.
This avoids unnecessary decoding overhead for large datasets if the label column has already been decoded.
For the scikit-learn API, there is no such requirement.
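
A minimal sketch of the required label shift (plain NumPy; assumes 0-based class indices as produced by typical encoders):

```python
import numpy as np

y_zero_based = np.array([0, 1, 2, 1, 0])
y_one_based = y_zero_based + 1  # mllearn expects labels in {1, ..., K}
```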
@@ -429,7 +429,7 @@ These parameters are also specified in the usage section of the [Algorithms Refe
 | is_multi_class | Specifies whether to use binary-class or multi-class classifier (default: False) | - | - | X | - |
 | laplace | Laplace smoothing specified by the user to avoid creation of 0 probabilities (default: 1.0) | - | - | - | X |
 
-In the below example, we invoke SystemML's [Logistic Regression](https://apache.github.io/systemml/algorithms-classification.html#multinomial-logistic-regression)
+In the below example, we invoke SystemDS's [Logistic Regression](https://apache.github.io/systemml/algorithms-classification.html#multinomial-logistic-regression)
algorithm on the digits dataset.
 
 ```python
@@ -497,7 +497,7 @@ LogisticRegression score: 0.922222
 
 #### MLPipeline interface
 
-In the below example, we demonstrate how the same `LogisticRegression` class can allow SystemML to fit seamlessly into 
+In the below example, we demonstrate how the same `LogisticRegression` class can allow SystemDS to fit seamlessly into 
 large data pipelines.
 
 ```python
@@ -549,10 +549,10 @@ Output:
 
 ## Troubleshooting Python APIs
 
-#### Unable to load SystemML.jar into current pyspark session.
+#### Unable to load SystemDS.jar into current pyspark session.
 
-While using SystemML's Python package through pyspark or notebook (SparkContext is not previously created in the session), the
-below method is not required. However, if the user wishes to use SystemML through spark-submit and has not previously invoked 
+While using SystemDS's Python package through pyspark or a notebook (where a SparkContext has already been created in the session), the
+below method is not required. However, if the user wishes to use SystemDS through spark-submit and has not previously invoked 
 
  `systemml.defmatrix.setSparkContext`(*sc*)
 :   Before using the matrix, the user needs to invoke this function if SparkContext is not previously created in the session.
@@ -573,16 +573,16 @@ m4 = 1.0 - m2
 m4.sum(axis=1).toNumPy()
 ```
 
-If SystemML was not installed via pip, you may have to download SystemML.jar and provide it to pyspark via `--driver-class-path` and `--jars`. 
+If SystemDS was not installed via pip, you may have to download SystemDS.jar and provide it to pyspark via `--driver-class-path` and `--jars`. 
 
 #### matrix API is running slow when set_lazy(False) or when eval() is called often.
 
 This is a known issue. The matrix API is slow in this scenario due to slow Py4J conversion from Java MatrixObject or Java RDD to Python NumPy or DataFrame.
-To resolve this for now, we recommend writing the matrix to FileSystemML and using `load` function.
+To resolve this for now, we recommend writing the matrix to the file system and using the `load` function.
 
 #### maximum recursion depth exceeded
 
-SystemML matrix is backed by lazy evaluation and uses a recursive Depth First Search (DFS).
+SystemDS matrix is backed by lazy evaluation and uses a recursive Depth First Search (DFS).
Python can throw `RuntimeError: maximum recursion depth exceeded` when the recursion depth of the DFS exceeds the limit 
 set by Python. There are two ways to address it:
 
diff --git a/reference-guide-caffe2dml.md b/reference-guide-caffe2dml.md
index 993d587..f52d0c7 100644
--- a/reference-guide-caffe2dml.md
+++ b/reference-guide-caffe2dml.md
@@ -586,12 +586,12 @@ layer {
 #### What is the purpose of Caffe2DML API ?
 
Most deep learning experts are more likely to be familiar with Caffe's specification
-rather than DML language. For these users, the Caffe2DML API reduces the learning curve to using SystemML.
+rather than the DML language. For these users, the Caffe2DML API reduces the learning curve for using SystemDS.
 Instead of requiring the users to write a DML script for training, fine-tuning and testing the model,
Caffe2DML takes as input a network and solver specified in the Caffe specification
 and automatically generates the corresponding DML.
 
-#### With Caffe2DML, does SystemML now require Caffe to be installed ?
+#### With Caffe2DML, does SystemDS now require Caffe to be installed ?
 
Absolutely not. We only support Caffe's API for the convenience of the user, as stated above.
Since Caffe's API is specified in the protobuf format, we are able to generate the Java parser files
@@ -602,8 +602,8 @@ Dml.g4      ---> antlr  ---> DmlLexer.java, DmlListener.java, DmlParser.java ---
 caffe.proto ---> protoc ---> target/generated-sources/caffe/Caffe.java       ---> parse caffe_network.proto, caffe_solver.proto 
 ```
 
-Again, the SystemML engine doesnot invoke (or depend on) Caffe for any of its runtime operators.
-Since the grammar files for the respective APIs (i.e. `caffe.proto`) are used by SystemML, 
+Again, the SystemDS engine does not invoke (or depend on) Caffe for any of its runtime operators.
+Since the grammar files for the respective APIs (i.e. `caffe.proto`) are used by SystemDS, 
 we include their licenses in our jar files.
 
#### How can I speed up the training with Caffe2DML ?
@@ -632,9 +632,9 @@ To be consistent with other mllearn algorithms, we recommend that you use follow
 the `solver_mode` in solver file.
 
 ```python
-# The below method tells SystemML optimizer to use a GPU-enabled instruction if the operands fit in the GPU memory 
+# The below method tells SystemDS optimizer to use a GPU-enabled instruction if the operands fit in the GPU memory 
 caffe2dmlObject.setGPU(True)
-# The below method tells SystemML optimizer to always use a GPU-enabled instruction irrespective of the memory requirement
+# The below method tells SystemDS optimizer to always use a GPU-enabled instruction irrespective of the memory requirement
 caffe2dmlObject.setForceGPU(True)
 ```
 
@@ -731,7 +731,7 @@ test_interval: 500
 #### How to pass a single jpeg image to Caffe2DML for prediction ?
 
 To convert a jpeg into NumPy matrix, you can use the [pillow package](https://pillow.readthedocs.io/) and 
-SystemML's  `convertImageToNumPyArr` utility function. The below pyspark code demonstrates the usage:
+SystemDS's `convertImageToNumPyArr` utility function. The pyspark code below demonstrates the usage:
  
 ```python
 from PIL import Image
@@ -1147,7 +1147,7 @@ To simplify the DML generation in `getTrainingScript` and `getPredictionScript m
This interface generates DML strings for common operations such as control structures (if, for, while) as well as built-in functions (read, write), etc. 
 Also, this interface helps in "code reading" of the Caffe2DML class.
 
-Here is an analogy for SystemML developers to think of various moving components of Caffe2DML:
+Here is an analogy for SystemDS developers to think of various moving components of Caffe2DML:
 - Like `Dml.g4` in the `org.apache.sysml.parser.dml` package, `caffe.proto` in the `src/main/proto/caffe` directory
 is used to generate classes to parse the input files.
 
@@ -1169,11 +1169,11 @@ X = matrix("1.2 3.5 0.999 7.123", rows=2, cols=2)
 
 - Just like we convert the AST generated by antlr into our DMLProgram representation, we convert
Caffe's abstractions into the mapping classes given below for layer, solver, and learning rate.
-These mapping classes maps the corresponding Caffe abstraction to the SystemML-NN library.
+These mapping classes map the corresponding Caffe abstractions to the SystemDS-NN library.
 This greatly simplifies adding new layers into Caffe2DML:
 ```
 trait CaffeLayer {
-  // Any layer that wants to reuse SystemML-NN has to override following methods that help in generating the DML for the given layer:
+  // Any layer that wants to reuse SystemDS-NN has to override the following methods, which help in generating the DML for the given layer:
   def sourceFileName:String;
   def init(dmlScript:StringBuilder):Unit;
   def forward(dmlScript:StringBuilder, isPrediction:Boolean):Unit;
diff --git a/reference-guide-keras2dml.md b/reference-guide-keras2dml.md
index d04ff51..6e8d1ae 100644
--- a/reference-guide-keras2dml.md
+++ b/reference-guide-keras2dml.md
@@ -35,7 +35,7 @@ We follow the Keras specification very closely during DML generation and compare
 
- The following layers are not supported but will be supported in the near future: `Reshape, Permute, RepeatVector, ActivityRegularization, Masking, SpatialDropout1D, SpatialDropout2D, SeparableConv1D, SeparableConv2D, DepthwiseConv2D, Cropping1D, Cropping2D, GRU and Embedding`.
- The following layers are not supported, but their 2D variants exist (consider using them instead): `UpSampling1D, ZeroPadding1D, MaxPooling1D, AveragePooling1D and Conv1D`.
-- Specialized `CuDNNGRU and CuDNNLSTM` layers are not required in SystemML. Instead use `LSTM` layer. 
+- Specialized `CuDNNGRU and CuDNNLSTM` layers are not required in SystemDS. Instead, use the `LSTM` layer. 
 - We do not have immediate plans to support the following layers: `Lambda, SpatialDropout3D, Conv3D, Conv3DTranspose, Cropping3D, UpSampling3D, ZeroPadding3D, MaxPooling3D, AveragePooling3D and ConvLSTM2D*`.
 
 # Frequently asked questions
@@ -127,9 +127,9 @@ algorithm using the parameters `train_algo` and `test_algo` (valid values are: `
 
 Here are high-level guidelines to train very deep models on GPU with Keras2DML (and Caffe2DML):
 
-1. If there exists at least one layer/operator that does not fit on the device, please allow SystemML's optimizer to perform operator placement based on the memory estimates `sysml_model.setGPU(True)`.
+1. If there exists at least one layer/operator that does not fit on the device, please allow SystemDS's optimizer to perform operator placement based on the memory estimates `sysml_model.setGPU(True)`.
 2. If each individual layer/operator fits on the device but not the entire network with a batch size of 1, then 
-- Rely on SystemML's GPU Memory Manager to perform automatic eviction (recommended): `sysml_model.setGPU(True) # Optional: .setForceGPU(True)`
+- Rely on SystemDS's GPU Memory Manager to perform automatic eviction (recommended): `sysml_model.setGPU(True) # Optional: .setForceGPU(True)`
 - Or enable Nvidia's Unified Memory:  `sysml_model.setConfigProperty('sysml.gpu.memory.allocator', 'unified_memory')`
 3. If the entire neural network does not fit in the GPU memory with the user-specified `batch_size`, but fits in the GPU memory with `local_batch_size` such that `1 << local_batch_size < batch_size`, then
 - Use either of the above two options.
diff --git a/release-creation-process.md b/release-creation-process.md
index bf05000..b28d25b 100644
--- a/release-creation-process.md
+++ b/release-creation-process.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-title: SystemML Release Creation Process
-description: Description of the SystemML release build process.
-displayTitle: SystemML Release Creation Process
+title: SystemDS Release Creation Process
+description: Description of the SystemDS release build process.
+displayTitle: SystemDS Release Creation Process
 ---
 <!--
 {% comment %}
@@ -92,7 +92,7 @@ Step 3: Close the release candidate build on Nexus site.
 
Visit the [NexusRepository](https://repository.apache.org/#stagingRepositories) site.
 
-	Find out SystemML under (Staging Repositories) link. It should be in Open State (status). Close it (button on top left to middle) with proper comment. Once it completes copying, URL will be updated with maven location to be sent in mail.
+	Find SystemDS under the (Staging Repositories) link. It should be in the Open state (status). Close it (button on the top, left to middle) with a proper comment. Once it completes copying, the URL will be updated with the Maven location to be sent in the mail.
 
 Step 4: Send mail for voting (dev PMC dev@systemml.apache.org).
 
@@ -120,8 +120,8 @@ Step 7: If release has been approved, then make it available for general use for
 	RELEASE_STAGING_LOCATION="https://dist.apache.org/repos/dist/dev/systemml/"
 	RELEASE_STAGING_LOCATION2="https://dist.apache.org/repos/dist/release/systemml/"
 
-	e.g. for SystemML 0.15 rc2 build
-	svn move -m "Move SystemML 0.15 from dev to release" $RELEASE_STAGING_LOCATION/0.15.0-rc2  $RELEASE_STAGING_LOCATION2/0.15.0
+	e.g. for SystemDS 0.15 rc2 build
+	svn move -m "Move SystemDS 0.15 from dev to release" $RELEASE_STAGING_LOCATION/0.15.0-rc2  $RELEASE_STAGING_LOCATION2/0.15.0
 
 
 	7.b. Move Nexus data from dev to release.
@@ -141,4 +141,4 @@ Step 7: If release has been approved, then make it available for general use for
 	7.e. Send ANNOUNCE NOTE.
 	To:  dev@systemml.apache.org  announce@apache.org
 	Subject e.g.
-	[ANNOUNCE] Apache SystemML 0.15.0 released.
+	[ANNOUNCE] Apache SystemDS 0.15.0 released.
diff --git a/release-process.md b/release-process.md
index c50a27e..5f82a45 100644
--- a/release-process.md
+++ b/release-process.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-title: SystemML Release Process
-description: Description of the SystemML release process and validation.
-displayTitle: SystemML Release Process
+title: SystemDS Release Process
+description: Description of the SystemDS release process and validation.
+displayTitle: SystemDS Release Process
 ---
 <!--
 {% comment %}
@@ -93,7 +93,7 @@ gpg --list-keys
 gpg --list-secret-keys
 ```
 
-**Clone SystemML Repository**
+**Clone SystemDS Repository**
 
 Since the artifacts will be deployed publicly, you should ensure that the project is completely clean.
 The deploy command should not be run on a copy of the project that you develop on. It should be a completely
@@ -139,7 +139,7 @@ Verify that the snapshot is now available at
 
 # Release Candidate Build and Deployment
 
-For detailed information, please see [SystemML Release Creation Process](release-creation-process.html).
+For detailed information, please see [SystemDS Release Creation Process](release-creation-process.html).
 
 # Release Candidate Checklist
 
@@ -158,7 +158,7 @@ checksums (such as .asc and .md5).
 The release candidate should build on Windows, OS X, and Linux. To do this cleanly,
 the following procedure can be performed.
 
-Clone the Apache SystemML GitHub repository
+Clone the Apache SystemDS GitHub repository
 to an empty location. Next, check out the release tag. Following
 this, build the distributions using Maven. This should be performed
 with an empty local Maven repository.
@@ -199,7 +199,7 @@ this OS X example.
 	tar -xvzf systemml-1.0.0-bin.tgz
 	cd systemml-1.0.0-bin
 	echo "print('hello world');" > hello.dml
-	./runStandaloneSystemML.sh hello.dml
+	./runStandaloneSystemDS.sh hello.dml
 	cd ..
 
 	# verify standalone zip works
@@ -207,7 +207,7 @@ this OS X example.
 	unzip systemml-1.0.0-bin.zip
 	cd systemml-1.0.0-bin
 	echo "print('hello world');" > hello.dml
-	./runStandaloneSystemML.sh hello.dml
+	./runStandaloneSystemDS.sh hello.dml
 	cd ..
 
 	# verify src works
@@ -216,7 +216,7 @@ this OS X example.
 	mvn clean package -P distribution
 	cd target/
 	java -cp "./lib/*:systemml-1.0.0.jar" org.apache.sysml.api.DMLScript -s "print('hello world');"
-	java -cp "./lib/*:SystemML.jar" org.apache.sysml.api.DMLScript -s "print('hello world');"
+	java -cp "./lib/*:SystemDS.jar" org.apache.sysml.api.DMLScript -s "print('hello world');"
 	cd ../..
 
 	# verify spark batch mode
@@ -261,19 +261,19 @@ Install Keras and Tensorflow:
 	python3 -m pip install --user keras=='2.1.5'
 	python3 -m pip install --user tensorflow=='1.11.0'
 
-Compile SystemML distribution:
+Compile SystemDS distribution:
 
 	mvn package -P distribution
 	cd src/main/python/tests/
 
 For Spark 2.*, the Python tests at (`src/main/python/tests`) can be executed in the following manner:
 
-	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_matrix_agg_fn.py
-	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_matrix_binary_op.py
-	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mlcontext.py
-	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mllearn_df.py
-	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mllearn_numpy.py
-	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemML.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_nn_numpy.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemDS.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_matrix_agg_fn.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemDS.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_matrix_binary_op.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemDS.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mlcontext.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemDS.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mllearn_df.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemDS.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_mllearn_numpy.py
+	PYSPARK_PYTHON=python3 spark-submit --driver-class-path ../../../../target/SystemDS.jar,../../../../target/systemml-*-SNAPSHOT-extra.jar test_nn_numpy.py
 
 
 ## Check LICENSE and NOTICE Files
@@ -309,7 +309,7 @@ the tests should pass.
 
 <a href="#release-candidate-checklist">Up to Checklist</a>
 
-The standalone tgz and zip artifacts contain `runStandaloneSystemML.sh` and `runStandaloneSystemML.bat`
+The standalone tgz and zip artifacts contain `runStandaloneSystemDS.sh` and `runStandaloneSystemDS.bat`
 files. Verify that one or more algorithms can be run on a single node using these
 standalone distributions.
 
@@ -322,7 +322,7 @@ demonstrating the execution of an algorithm (on OS X).
 	echo '{"rows": 306, "cols": 4, "format": "csv"}' > data/haberman.data.mtd
 	echo '1,1,1,2' > data/types.csv
 	echo '{"rows": 1, "cols": 4, "format": "csv"}' > data/types.csv.mtd
-	./runStandaloneSystemML.sh scripts/algorithms/Univar-Stats.dml -nvargs X=data/haberman.data TYPES=data/types.csv STATS=data/univarOut.mtx CONSOLE_OUTPUT=TRUE
+	./runStandaloneSystemDS.sh scripts/algorithms/Univar-Stats.dml -nvargs X=data/haberman.data TYPES=data/types.csv STATS=data/univarOut.mtx CONSOLE_OUTPUT=TRUE
 	cd ..
 
 
@@ -330,7 +330,7 @@ demonstrating the execution of an algorithm (on OS X).
 
 <a href="#release-candidate-checklist">Up to Checklist</a>
 
-Verify that SystemML runs algorithms on Spark locally.
+Verify that SystemDS runs algorithms on Spark locally.
 
 Here is an example of running the `Univar-Stats.dml` algorithm on random generated data.
 
@@ -347,7 +347,7 @@ Here is an example of running the `Univar-Stats.dml` algorithm on random generat
 
 <a href="#release-candidate-checklist">Up to Checklist</a>
 
-Verify that SystemML runs algorithms on Hadoop locally.
+Verify that SystemDS runs algorithms on Hadoop locally.
 
 Based on the "Single-Node Spark" setup above, the `Univar-Stats.dml` algorithm could be run as follows:
 
@@ -359,7 +359,7 @@ Based on the "Single-Node Spark" setup above, the `Univar-Stats.dml` algorithm c
 
 <a href="#release-candidate-checklist">Up to Checklist</a>
 
-Verify that SystemML can be executed from Jupyter and Zeppelin notebooks.
+Verify that SystemDS can be executed from Jupyter and Zeppelin notebooks.
 For examples, see the [Spark MLContext Programming Guide](http://apache.github.io/systemml/spark-mlcontext-programming-guide.html).
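+
+A quick smoke test is to start Jupyter through PySpark with the SystemDS jar on the driver
+classpath. This is only a sketch: it assumes `SystemDS.jar` sits in the current directory
+and that Jupyter is installed for the driver Python.
+
+	# launch PySpark inside a Jupyter notebook server (hypothetical jar location)
+	PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" \
+	  pyspark --master local[*] --jars SystemDS.jar --driver-class-path SystemDS.jar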
 
 
@@ -370,7 +370,7 @@ For examples, see the [Spark MLContext Programming Guide](http://apache.github.i
 Verify that the performance suite executes on Spark and Hadoop. Testing should
 include 80MB, 800MB, 8GB, and 80GB data sizes.
 
-For more information, please see [SystemML Performance Testing](python-performance-test.html).
+For more information, please see [SystemDS Performance Testing](python-performance-test.html).
 
 
 # Run NN Unit Tests for GPU
@@ -398,7 +398,7 @@ file and remove all the `@Ignore` annotations from all the tests. Then run the N
 
 # Voting
 
-Following a successful release candidate vote by SystemML PMC members on the SystemML mailing list, the release candidate
+Following a successful release candidate vote by SystemDS PMC members on the SystemDS mailing list, the release candidate
 has been approved.
 
 
diff --git a/spark-batch-mode.md b/spark-batch-mode.md
index 349f17c..df75cf0 100644
--- a/spark-batch-mode.md
+++ b/spark-batch-mode.md
@@ -1,7 +1,7 @@
 ---
 layout: global
-title: Invoking SystemML in Spark Batch Mode
-description: Invoking SystemML in Spark Batch Mode
+title: Invoking SystemDS in Spark Batch Mode
+description: Invoking SystemDS in Spark Batch Mode
 ---
 <!--
 {% comment %}
@@ -30,55 +30,55 @@ limitations under the License.
 
 # Overview
 
-Given that a primary purpose of SystemML is to perform machine learning on large distributed data
-sets, one of the most important ways to invoke SystemML is Spark Batch. Here, we will look at this
+Given that a primary purpose of SystemDS is to perform machine learning on large distributed data
+sets, one of the most important ways to invoke SystemDS is Spark Batch. Here, we will look at this
 mode in more depth.
 
-**NOTE:** For a programmatic API to run and interact with SystemML via Scala or Python, please see the
+**NOTE:** For a programmatic API to run and interact with SystemDS via Scala or Python, please see the
 [Spark MLContext Programming Guide](spark-mlcontext-programming-guide).
 
 ---
 
 # Spark Batch Mode Invocation Syntax
 
-SystemML can be invoked in Spark Batch mode using the following syntax:
+SystemDS can be invoked in Spark Batch mode using the following syntax:
 
-    spark-submit SystemML.jar [-? | -help | -f <filename>] (-config <config_filename>) ([-args | -nvargs] <args-list>)
+    spark-submit SystemDS.jar [-? | -help | -f <filename>] (-config <config_filename>) ([-args | -nvargs] <args-list>)
 
-The DML script to invoke is specified after the `-f` argument. Configuration settings can be passed to SystemML
+The DML script to invoke is specified after the `-f` argument. Configuration settings can be passed to SystemDS
 using the optional `-config` argument. DML scripts can optionally take named arguments (`-nvargs`) or positional
 arguments (`-args`). Named arguments are preferred over positional arguments. Positional arguments are considered
-to be deprecated. All the primary algorithm scripts included with SystemML use named arguments.
+to be deprecated. All the primary algorithm scripts included with SystemDS use named arguments.
 
 
 **Example #1: DML Invocation with Named Arguments**
 
-    spark-submit SystemML.jar -f scripts/algorithms/Kmeans.dml -nvargs X=X.mtx k=5
+    spark-submit SystemDS.jar -f scripts/algorithms/Kmeans.dml -nvargs X=X.mtx k=5
 
 
 **Example #2: DML Invocation with Positional Arguments**
 
-	spark-submit SystemML.jar -f src/test/scripts/applications/linear_regression/LinearRegression.dml -args "v" "y" 0.00000001 "w"
+	spark-submit SystemDS.jar -f src/test/scripts/applications/linear_regression/LinearRegression.dml -args "v" "y" 0.00000001 "w"
 
 # Execution modes
 
-SystemML works seamlessly with all Spark execution modes, including *local* (`--master local[*]`),
+SystemDS works seamlessly with all Spark execution modes, including *local* (`--master local[*]`),
 *yarn client* (`--master yarn --deploy-mode client`), *yarn cluster* (`--master yarn --deploy-mode cluster`), *etc*.  More
 information on Spark cluster execution modes can be found on the
 [official Spark cluster deployment documentation](https://spark.apache.org/docs/latest/cluster-overview.html).
 *Note* that Spark can easily be run on a laptop in local mode using the `--master local[*]` option described
-above, which SystemML supports.
+above, which SystemDS supports.
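+
+As an illustration, the named-argument invocation from Example #1 above could be submitted
+to a YARN cluster in client mode as follows (a sketch that assumes a working YARN setup and
+that `SystemDS.jar` and the script are reachable from the submitting node):
+
+    # same K-means invocation, now distributed over YARN in client mode
+    spark-submit --master yarn --deploy-mode client SystemDS.jar -f scripts/algorithms/Kmeans.dml -nvargs X=X.mtx k=5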
 
 # Recommended Spark Configuration Settings
 
-For best performance, we recommend setting the following configuration value when running SystemML with Spark:
+For best performance, we recommend setting the following configuration value when running SystemDS with Spark:
 `--conf spark.driver.maxResultSize=0`.
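+
+For example, reusing the K-means invocation from Example #1 above:
+
+    # unbounded serialized-result size for collects back to the driver
+    spark-submit --conf spark.driver.maxResultSize=0 SystemDS.jar -f scripts/algorithms/Kmeans.dml -nvargs X=X.mtx k=5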
 
 # Examples
 
 Please see the MNIST examples in the included
-[SystemML-NN](https://github.com/apache/systemml/tree/master/scripts/nn)
-library for examples of Spark Batch mode execution with SystemML to train MNIST classifiers:
+[SystemDS-NN](https://github.com/apache/systemml/tree/master/scripts/nn)
+library for examples of Spark Batch mode execution with SystemDS to train MNIST classifiers:
 
   * [MNIST Softmax Classifier](https://github.com/apache/systemml/blob/master/scripts/nn/examples/mnist_softmax-train.dml)
   * [MNIST LeNet ConvNet](https://github.com/apache/systemml/blob/master/scripts/nn/examples/mnist_lenet-train.dml)
diff --git a/spark-mlcontext-programming-guide.md b/spark-mlcontext-programming-guide.md
index 63e48be..2662c30 100644
--- a/spark-mlcontext-programming-guide.md
+++ b/spark-mlcontext-programming-guide.md
@@ -30,27 +30,27 @@ limitations under the License.
 
 # Overview
 
-The Spark `MLContext` API offers a programmatic interface for interacting with SystemML from Spark using languages
-such as Scala, Java, and Python. As a result, it offers a convenient way to interact with SystemML from the Spark
+The Spark `MLContext` API offers a programmatic interface for interacting with SystemDS from Spark using languages
+such as Scala, Java, and Python. As a result, it provides a convenient way to interact with SystemDS from the Spark
 Shell and from notebooks such as Jupyter and Zeppelin.
 
 # Spark Shell Example
 
-## Start Spark Shell with SystemML
+## Start Spark Shell with SystemDS
 
-To use SystemML with Spark Shell, the SystemML jar can be referenced using Spark Shell's `--jars` option.
+To use SystemDS with Spark Shell, the SystemDS jar can be referenced using Spark Shell's `--jars` option.
 
 <div class="codetabs">
 
 <div data-lang="Spark Shell" markdown="1">
 {% highlight bash %}
-spark-shell --executor-memory 4G --driver-memory 4G --jars SystemML.jar
+spark-shell --executor-memory 4G --driver-memory 4G --jars SystemDS.jar
 {% endhighlight %}
 </div>
 
 <div data-lang="PySpark Shell" markdown="1">
 {% highlight bash %}
-pyspark --executor-memory 4G --driver-memory 4G --jars SystemML.jar --driver-class-path SystemML.jar
+pyspark --executor-memory 4G --driver-memory 4G --jars SystemDS.jar --driver-class-path SystemDS.jar
 {% endhighlight %}
 </div>
 
@@ -61,7 +61,7 @@ pyspark --executor-memory 4G --driver-memory 4G --jars SystemML.jar --driver-cla
 All primary classes that a user interacts with are located in the `org.apache.sysml.api.mlcontext` package.
 For convenience, we can additionally add a static import of `ScriptFactory` to shorten the syntax for creating `Script` objects.
 An `MLContext` object can be created by passing its constructor a reference to the `SparkSession` (`spark`) or `SparkContext` (`sc`).
-If successful, you should see a "`Welcome to Apache SystemML!`" message.
+If successful, you should see a "`Welcome to Apache SystemDS!`" message.
 
 <div class="codetabs">
 
@@ -83,7 +83,7 @@ import org.apache.sysml.api.mlcontext.ScriptFactory._
 
 scala> val ml = new MLContext(spark)
 
-Welcome to Apache SystemML!
+Welcome to Apache SystemDS!
 
 ml: org.apache.sysml.api.mlcontext.MLContext = org.apache.sysml.api.mlcontext.MLContext@12139db0
 
@@ -103,7 +103,7 @@ ml = MLContext(spark)
 >>> from systemml import MLContext, dml, dmlFromResource, dmlFromFile, dmlFromUrl
 >>> ml = MLContext(spark)
 
-Welcome to Apache SystemML!
+Welcome to Apache SystemDS!
 Version 1.0.0-SNAPSHOT
 {% endhighlight %}
 </div>
@@ -160,7 +160,7 @@ ml.execute(helloScript)
 >>> helloScript = dml("print('hello world')")
 >>> ml.execute(helloScript)
 hello world
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.001 sec.
 Number of executed Spark inst:  0.
 
@@ -174,9 +174,9 @@ MLResults
 
 ## LeNet on MNIST Example
 
-SystemML features the DML-based [`nn` library for deep learning](https://github.com/apache/systemml/tree/master/scripts/nn).
+SystemDS features the DML-based [`nn` library for deep learning](https://github.com/apache/systemml/tree/master/scripts/nn).
 
-At project build time, SystemML automatically generates wrapper classes for DML scripts
+At project build time, SystemDS automatically generates wrapper classes for DML scripts
 to enable convenient access to scripts and execution of functions.
 In the example below, we obtain a reference (`clf`) to the LeNet on MNIST example.
 We generate dummy data, train a convolutional net using the LeNet architecture,
@@ -208,7 +208,7 @@ Outputs:
 None
 
 scala> val dummy = clf.generate_dummy_data
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:		0.144 sec.
 Number of executed Spark inst:	0.
 
@@ -220,7 +220,7 @@ Hin (long): 28
 Win (long): 28
 
 scala> val dummyVal = clf.generate_dummy_data
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:		0.147 sec.
 Number of executed Spark inst:	0.
 
@@ -255,7 +255,7 @@ Starting optimization
 17/06/05 15:52:19 WARN TaskSetManager: Stage 27 contains a task of very large size (508 KB). The maximum recommended task size is 100 KB.
 17/06/05 15:52:19 WARN TaskSetManager: Stage 29 contains a task of very large size (508 KB). The maximum recommended task size is 100 KB.
 17/06/05 15:52:20 WARN TaskSetManager: Stage 31 contains a task of very large size (508 KB). The maximum recommended task size is 100 KB.
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:		11.261 sec.
 Number of executed Spark inst:	32.
 
@@ -266,14 +266,14 @@ W2 (Matrix): MatrixObject: scratch_space//_p64701_192.168.1.103//_t0/temp2196_15
 b2 (Matrix): MatrixObject: scratch_space//_p64701_192.168.1.103//_t0/temp2200_1603, [64 x 1, nnz=64, blocks (1000 x 1000)], binaryblock, dirty
 W3 (Matrix): MatrixObject: scratch_space//_p64701_192.168.1.103//_t0/temp2186_1589, [3136 x 512, nnz=1605632, blocks (1000 x 1000)], binaryblock, ...
 scala> val probs = clf.predict(dummy.X, dummy.C, dummy.Hin, dummy.Win, params.W1, params.b1, params.W2, params.b2, params.W3, params.b3, params.W4, params.b4)
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:		2.148 sec.
 Number of executed Spark inst:	48.
 
 probs: org.apache.sysml.api.mlcontext.Matrix = MatrixObject: scratch_space//_p64701_192.168.1.103//_t0/temp2505_1865, [1024 x 10, nnz=10240, blocks (1000 x 1000)], binaryblock, dirty
 
 scala> val perf = clf.eval(probs, dummy.Y)
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:		0.007 sec.
 Number of executed Spark inst:	48.
 
@@ -449,7 +449,7 @@ min, max, mean = ml.execute(minMaxMeanScript).get("minOut", "maxOut", "meanOut")
 ... """
 >>> minMaxMeanScript = dml(minMaxMean).input("Xin", df).output("minOut", "maxOut", "meanOut")
 >>> min, max, mean = ml.execute(minMaxMeanScript).get("minOut", "maxOut", "meanOut")
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.570 sec.
 Number of executed Spark inst:  0.
 {% endhighlight %}
@@ -613,7 +613,7 @@ message = sumResults.get("message")
 s1 = sumResults.get("s1")
 s2 = sumResults.get("s2")
 message = sumResults.get("message")
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.933 sec.
 Number of executed Spark inst:  4.
 
@@ -728,7 +728,7 @@ s1, s2, message = sumResults.get("s1", "s2", "message")
 {% highlight python %}
 >>> sumScript = dmlFromFile("sums.dml").input(m1=rdd1).input(m2=rdd2).output("s1").output("s2").output("message")
 >>> sumResults = ml.execute(sumScript)
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           1.057 sec.
 Number of executed Spark inst:  4.
 
@@ -747,7 +747,7 @@ u's2 is greater'
 
 ## Matrix Output
 
-Let's look at an example of reading a matrix out of SystemML. We'll create a DML script
+Let's look at an example of reading a matrix out of SystemDS. We'll create a DML script
 that defines a 2x2 matrix `m`. We'll set the variable `n` to be the sum of the cells in the matrix.
 
 We create a script object using String `s`, and we set `m` and `n` as the outputs. We execute the script, and in
@@ -855,7 +855,7 @@ x.toNumPy()
 ... """
 >>> scr = dml(s).output("m", "n");
 >>> res = ml.execute(scr)
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.000 sec.
 Number of executed Spark inst:  0.
 
@@ -884,7 +884,7 @@ array([[ 11.,  22.],
 ## Univariate Statistics on Haberman Data
 
 Our next example will involve Haberman's Survival Data Set in CSV format from the Center for Machine Learning
-and Intelligent Systems. We will run the SystemML Univariate Statistics ("Univar-Stats.dml") script on this
+and Intelligent Systems. We will run the SystemDS Univariate Statistics ("Univar-Stats.dml") script on this
 data.
 
 We'll pull the data from a URL and convert it to an RDD, `habermanRDD`. Next, we'll create metadata, `habermanMetadata`,
@@ -1098,7 +1098,7 @@ Feature [4]: Categorical (Nominal)
  (15) Num of categories   | 2
  (16) Mode                | 1
  (17) Num of modes        | 1
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.733 sec.
 Number of executed Spark inst:  4.
 
@@ -1291,7 +1291,7 @@ Feature [4]: Categorical (Nominal)
  (15) Num of categories   | 2
  (16) Mode                | 1
  (17) Num of modes        | 1
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:		0.211 sec.
 Number of executed Spark inst:	8.
 
@@ -1310,7 +1310,7 @@ If we examine the
 [`Univar-Stats.dml`](https://raw.githubusercontent.com/apache/systemml/master/scripts/algorithms/Univar-Stats.dml)
 file, we see in the comments that it can take 4 input
 parameters, `$X`, `$TYPES`, `$CONSOLE_OUTPUT`, and `$STATS`. Input parameters are typically useful when
-executing SystemML in Standalone mode, Spark batch mode, or Hadoop batch mode. For example, `$X` specifies
+executing SystemDS in Standalone mode, Spark batch mode, or Hadoop batch mode. For example, `$X` specifies
 the location in the file system where the input data matrix is located, `$TYPES` specifies the location in the file system
 where the input types matrix is located, `$CONSOLE_OUTPUT` specifies whether or not labeled statistics should be
 output to the console, and `$STATS` specifies the location in the file system where the output matrix should be written.
@@ -1404,7 +1404,7 @@ baseStats.toNumPy().flatten()[0:9]
 {% highlight python %}
 >>> uni = dmlFromUrl(scriptUrl).input(A=habermanRDD, K=typesRDD).output("baseStats")
 >>> baseStats = ml.execute(uni).get("baseStats")
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.690 sec.
 Number of executed Spark inst:  4.
 
@@ -1420,7 +1420,7 @@ array([ 30.,  58.,   0.,   0.,  83.,  69.,  52.,   0.,  53.])
 
 The `info` method on a Script object can provide useful information about a DML or PyDML script, such as
 the inputs, output, symbol table, script string, and the script execution string that is passed to the internals of
-SystemML.
+SystemDS.
 
 <div class="codetabs">
 
@@ -1540,7 +1540,7 @@ print(minMaxMeanScript.info())
 min, max, mean = ml.execute(minMaxMeanScript).get("minOut", "maxOut", "meanOut")
 print(minMaxMeanScript.info())
 >>> min, max, mean = ml.execute(minMaxMeanScript).get("minOut", "maxOut", "meanOut")
 
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.521 sec.
 Number of executed Spark inst:  0.
 
@@ -1723,7 +1723,7 @@ Outputs:
 
 
 scala> val (min, max, mean) = ml.execute(minMaxMeanScript).getTuple[Double, Double, Double]("minOut", "maxOut", "meanOut")
-SystemML Statistics:
+SystemDS Statistics:
 Total elapsed time:		0.000 sec.
 Total compilation time:		0.000 sec.
 Total execution time:		0.000 sec.
@@ -1778,7 +1778,7 @@ MLContext
 ... """
 >>> minMaxMeanScript = dml(minMaxMean).input(Xin=df).output("minOut", "maxOut", "meanOut")
 >>> min, max, mean = ml.execute(minMaxMeanScript).get("minOut", "maxOut", "meanOut")
-SystemML Statistics:
+SystemDS Statistics:
 Total elapsed time:             0.608 sec.
 Total compilation time:         0.000 sec.
 Total execution time:           0.608 sec.
@@ -1811,7 +1811,7 @@ Heavy hitter instructions:
 
 ## GPU
 
-If the driver node has a GPU, SystemML may be able to utilize it, subject to memory constraints and what instructions are used in the dml script
+If the driver node has a GPU, SystemDS may be able to utilize it, subject to memory constraints and the instructions used in the DML script.
 
 <div class="codetabs">
 
@@ -1860,7 +1860,7 @@ scala> ml.execute(matMultScript)
 247.368 239.882 234.353 237.087 252.337 248.801 246.627 249.077 244.305 245.621
 252.827 257.352 239.546 246.529 258.916 255.612 260.480 254.805 252.695 257.531
 
-SystemML Statistics:
+SystemDS Statistics:
 Total elapsed time:		0.000 sec.
 Total compilation time:		0.000 sec.
 Total execution time:		0.000 sec.
@@ -1934,7 +1934,7 @@ MLContext
 252.990 244.238 248.096 241.145 242.065 253.795 245.352 246.056 251.132 253.063
 253.216 249.008 247.910 246.579 242.657 251.078 245.954 244.681 241.878 248.555
 
-SystemML Statistics:
+SystemDS Statistics:
 Total elapsed time:             0.042 sec.
 Total compilation time:         0.000 sec.
 Total execution time:           0.042 sec.
 Note that GPU instructions appear in the statistics prefixed with "gpu".
 
 ## Explain
 
-A DML or PyDML script is converted into a SystemML program during script execution. Information
+A DML or PyDML script is converted into a SystemDS program during script execution. Information
 about this program can be displayed by calling MLContext's `setExplain` method with a value
 of `true`.
 
@@ -2090,7 +2090,7 @@ PROGRAM ( size CP/SP = 7/0 )
 ------CP assignvar _Var3.SCALAR.DOUBLE.false meanOut.SCALAR.DOUBLE
 ------CP rmvar _Var1 _Var2 _Var3
 
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.952 sec.
 Number of executed Spark inst:  0.
 
@@ -2164,7 +2164,7 @@ PROGRAM ( size CP/SP = 7/0 )
 ------CP assignvar _Var6.SCALAR.DOUBLE.false meanOut.SCALAR.DOUBLE
 ------CP rmvar _Var4 _Var5 _Var6
 
-SystemML Statistics:
+SystemDS Statistics:
 Total execution time:           0.022 sec.
 Number of executed Spark inst:  0.
 
@@ -2252,7 +2252,7 @@ val s4 = ScriptFactory.dmlFromFile("uni.dml")
 
 **Script from InputStream:**
 
-The SystemML jar file contains all the primary algorithm scripts. We can read one of these scripts as an InputStream
+The SystemDS jar file contains all the primary algorithm scripts. We can read one of these scripts as an InputStream
 and use this to create a Script object.
 
 {% highlight scala %}
@@ -2263,7 +2263,7 @@ val s5 = ScriptFactory.dmlFromInputStream(inputStream)
 
 **Script from Resource:**
 
-As mentioned, the SystemML jar file contains all the primary algorithm script files. For convenience, we can
+As mentioned, the SystemDS jar file contains all the primary algorithm script files. For convenience, we can
 read these script files or other script files on the classpath using ScriptFactory's `dmlFromResource` and `pydmlFromResource`
 methods.
 
@@ -2276,7 +2276,7 @@ val s6 = ScriptFactory.dmlFromResource("/scripts/algorithms/Univar-Stats.dml");
 
 A Script is executed by a ScriptExecutor. If no ScriptExecutor is specified, a default ScriptExecutor will
 be created to execute a Script. Script execution consists of several steps, as detailed in
 [SystemML's Optimizer: Plan Generation for Large-Scale Machine Learning Programs](http://sites.computer.org/debull/A14sept/p52.pdf).
 Additional information can be found in the Javadocs for ScriptExecutor.
 
 Advanced users may find it useful to be able to specify their own execution or to override ScriptExecutor methods by
@@ -2330,15 +2330,15 @@ None
 
 ## MatrixMetadata
 
-When supplying matrix data to Apache SystemML using the MLContext API, matrix metadata can be
+When supplying matrix data to Apache SystemDS using the MLContext API, matrix metadata can be
 supplied using a `MatrixMetadata` object. Supplying characteristics about a matrix can significantly
 improve performance. For some types of input matrices, supplying metadata is mandatory.
 Metadata typically consists of, at a minimum, the number of rows and columns in
 a matrix. The number of non-zeros can also be supplied.
 
 Additionally, the number of rows and columns per block can be supplied, although in typical usage
-it's probably fine to use the default values used by SystemML (1,000 rows and 1,000 columns per block).
-SystemML handles a matrix internally by splitting the matrix into chunks, or *blocks*.
+it's probably fine to use the default values used by SystemDS (1,000 rows and 1,000 columns per block).
+SystemDS handles a matrix internally by splitting the matrix into chunks, or *blocks*.
 The number of rows and columns per block refers to the size of these matrix blocks.
 
 
@@ -2476,14 +2476,14 @@ res22: org.apache.sysml.api.mlcontext.MLResults =
 
 ## Matrix Data Conversions and Performance
 
-Internally, Apache SystemML uses a binary-block matrix representation, where a matrix is
+Internally, Apache SystemDS uses a binary-block matrix representation, where a matrix is
 represented as a grouping of blocks. Each block is equal in size to the other blocks in the matrix and
 consists of a number of rows and columns. The default block size is 1,000 rows by 1,000
 columns.
 
-Conversion of a large set of data to a SystemML matrix representation can potentially be time-consuming.
+Conversion of a large set of data to a SystemDS matrix representation can potentially be time-consuming.
 Therefore, if you use a set of data multiple times, one way to improve performance is
-to convert it to a SystemML matrix representation and then use this representation rather than performing
+to convert it to a SystemDS matrix representation and then use this representation rather than performing
 the data conversion each time.
 
 If you have an input DataFrame, it can be converted to a Matrix, and this Matrix
@@ -2531,7 +2531,7 @@ val minMaxMeanScript = dml(minMaxMean).in("Xin", matrix).out("minOut", "maxOut",
 {% endhighlight %}
 
 When a matrix is returned as an output, it is returned as a Matrix object, which is a wrapper around
-a SystemML MatrixObject. As a result, an output Matrix is already in a SystemML representation,
+a SystemDS MatrixObject. As a result, an output Matrix is already in a SystemDS representation,
 meaning that it can be passed as an input with no data conversion penalty.
 
 As an example, here we read in matrix `x` as an RDD in CSV format. We create a Script that adds one to all
@@ -2598,7 +2598,7 @@ scala> for (i <- 1 to 5) {
 
 ## Project Information
 
-SystemML project information such as version and build time can be obtained through the
+SystemDS project information such as version and build time can be obtained through the
 MLContext API. The project version can be obtained by `ml.version`. The build time can
 be obtained by `ml.buildTime`. The contents of the project manifest can be displayed
 using `ml.info`. Individual properties can be obtained using the `ml.info.property`
@@ -2677,13 +2677,13 @@ Version: 1.0.0-SNAPSHOT
 
 # Jupyter (PySpark) Notebook Example - Poisson Nonnegative Matrix Factorization
 
-Similar to the Scala API, SystemML also provides a Python MLContext API.  Before usage, you'll need
+Similar to the Scala API, SystemDS also provides a Python MLContext API.  Before usage, you'll need
 **[to install it first](beginners-guide-python#download--setup)**.
 
-Here, we'll explore the use of SystemML via PySpark in a [Jupyter notebook](http://jupyter.org/).
+Here, we'll explore the use of SystemDS via PySpark in a [Jupyter notebook](http://jupyter.org/).
 This Jupyter notebook example can be nicely viewed in a rendered state
-[on GitHub](https://github.com/apache/systemml/blob/master/samples/jupyter-notebooks/SystemML-PySpark-Recommendation-Demo.ipynb),
-and can be [downloaded here](https://raw.githubusercontent.com/apache/systemml/master/samples/jupyter-notebooks/SystemML-PySpark-Recommendation-Demo.ipynb) to a directory of your choice.
+[on GitHub](https://github.com/apache/systemml/blob/master/samples/jupyter-notebooks/SystemDS-PySpark-Recommendation-Demo.ipynb),
+and can be [downloaded here](https://raw.githubusercontent.com/apache/systemml/master/samples/jupyter-notebooks/SystemDS-PySpark-Recommendation-Demo.ipynb) to a directory of your choice.
 
 From the directory with the downloaded notebook, start Jupyter with PySpark:
 
@@ -2704,7 +2704,7 @@ This will open Jupyter in a browser:
 
 ![Jupyter Notebook](img/spark-mlcontext-programming-guide/jupyter1.png "Jupyter Notebook")
 
-We can then open up the `SystemML-PySpark-Recommendation-Demo` notebook.
+We can then open up the `SystemDS-PySpark-Recommendation-Demo` notebook.
 
 ## Set up the notebook and download the data
 
@@ -2747,17 +2747,17 @@ numProducts = max(max_prod_i, max_prod_j) + 1 # 0-based indexing
 print("Total number of products: {}".format(numProducts))
 {% endhighlight %}
 
-## Create a SystemML MLContext object
+## Create a SystemDS MLContext object
 
 {% highlight python %}
-# Create SystemML MLContext
+# Create SystemDS MLContext
 ml = MLContext(sc)
 {% endhighlight %}
 
 ## Define a kernel for Poisson nonnegative matrix factorization (PNMF) in DML
 
 {% highlight python %}
-# Define PNMF kernel in SystemML's DSL using the R-like syntax for PNMF
+# Define PNMF kernel in SystemDS's DSL using the R-like syntax for PNMF
 pnmf = """
 # data & args
 X = X+1 # change product IDs to be 1-based, rather than 0-based
@@ -2791,7 +2791,7 @@ while(i <= max_iter) {
 ## Execute the algorithm
 
 {% highlight python %}
-# Run the PNMF script on SystemML with Spark
+# Run the PNMF script on SystemDS with Spark
 script = dml(pnmf).input(X=X_train, max_iter=100, rank=10).output("W", "H", "losses")
 W, H, losses = ml.execute(script).get("W", "H", "losses")
 {% endhighlight %}
@@ -2814,5 +2814,5 @@ plt.title('PNMF Training Loss')
 
 # Recommended Spark Configuration Settings
 
-For best performance, we recommend setting the following configuration value when running SystemML with Spark:
+For best performance, we recommend setting the following configuration value when running SystemDS with Spark:
 `--conf spark.driver.maxResultSize=0`.
diff --git a/standalone-guide.md b/standalone-guide.md
index 7116f25..2a1b5ab 100644
--- a/standalone-guide.md
+++ b/standalone-guide.md
@@ -1,8 +1,8 @@
 ---
 layout: global
-title: SystemML Standalone Guide
-description: SystemML Standalone Guide
-displayTitle: SystemML Standalone Guide
+title: SystemDS Standalone Guide
+description: SystemDS Standalone Guide
+displayTitle: SystemDS Standalone Guide
 ---
 <!--
 {% comment %}
@@ -28,13 +28,13 @@ limitations under the License.
 
 <br/>
 
-This tutorial provides a quick introduction to using SystemML by
-running existing SystemML algorithms in standalone mode.
+This tutorial provides a quick introduction to using SystemDS by
+running existing SystemDS algorithms in standalone mode.
 
 
-# What is SystemML
+# What is SystemDS
 
-SystemML enables large-scale machine learning (ML) via a high-level declarative
+SystemDS enables large-scale machine learning (ML) via a high-level declarative
 language with R-like syntax called [DML](dml-language-reference.html) and
 Python-like syntax called PyDML. DML and PyDML allow data scientists to
 express their ML algorithms with full flexibility but without the need to fine-tune
@@ -44,25 +44,25 @@ and cluster characteristics using rule-based and cost-based optimization techniq
 The compiler automatically generates hybrid runtime execution plans ranging
 from in-memory, single node execution to distributed computation for Hadoop
 or Spark Batch execution.
-SystemML features a suite of algorithms for Descriptive Statistics, Classification,
+SystemDS features a suite of algorithms for Descriptive Statistics, Classification,
 Clustering, Regression, Matrix Factorization, and Survival Analysis. Detailed descriptions of these
 algorithms can be found in the [Algorithms Reference](algorithms-reference.html).
 
-# Download SystemML
+# Download SystemDS
 
-Apache SystemML releases are available from the [Downloads](http://systemml.apache.org/download.html) page.
+Apache SystemDS releases are available from the [Downloads](http://systemml.apache.org/download.html) page.
 
-SystemML can also be downloaded from GitHub and built with Maven.
-The SystemML project is available on GitHub at [https://github.com/apache/systemml](https://github.com/apache/systemml).
-Instructions to build SystemML can be found in the <a href="engine-dev-guide.html">Engine Developer Guide</a>.
+SystemDS can also be downloaded from GitHub and built with Maven.
+The SystemDS project is available on GitHub at [https://github.com/apache/systemml](https://github.com/apache/systemml).
+Instructions to build SystemDS can be found in the <a href="engine-dev-guide.html">Engine Developer Guide</a>.
 
 # Standalone vs Distributed Execution Mode
 
-SystemML's standalone mode is designed to allow data scientists to rapidly prototype algorithms
+SystemDS's standalone mode is designed to allow data scientists to rapidly prototype algorithms
 on a single machine. In standalone mode, all operations occur on a single node in a non-Hadoop
 environment. Standalone mode is not appropriate for large datasets.
 
-For large-scale production environments, SystemML algorithm execution can be
+For large-scale production environments, SystemDS algorithm execution can be
 distributed across multi-node clusters using [Apache Hadoop](https://hadoop.apache.org/)
 or [Apache Spark](http://spark.apache.org/).
 We will make use of standalone mode throughout this tutorial.
@@ -113,7 +113,7 @@ the data along with its metadata file `types.csv.mtd`.
 
 To run the `Univar-Stats.dml` algorithm, issue the following command (we set the optional argument `CONSOLE_OUTPUT` to `TRUE` to print the statistics to the console):
 
-    $ ./runStandaloneSystemML.sh scripts/algorithms/Univar-Stats.dml -nvargs X=data/haberman.data TYPES=data/types.csv STATS=data/univarOut.mtx CONSOLE_OUTPUT=TRUE
+    $ ./runStandaloneSystemDS.sh scripts/algorithms/Univar-Stats.dml -nvargs X=data/haberman.data TYPES=data/types.csv STATS=data/univarOut.mtx CONSOLE_OUTPUT=TRUE
 
     [...]
     -------------------------------------------------
@@ -277,7 +277,7 @@ We will create the file `perc.csv` and `perc.csv.mtd` to define the sampling vec
 
 Let's run the sampling algorithm to create the two data samples:
 
-    $ ./runStandaloneSystemML.sh scripts/utils/sample.dml -nvargs X=data/haberman.data sv=data/perc.csv O=data/haberman.part ofmt="csv"
+    $ ./runStandaloneSystemDS.sh scripts/utils/sample.dml -nvargs X=data/haberman.data sv=data/perc.csv O=data/haberman.part ofmt="csv"
 
 
 ## Splitting Labels from Features
@@ -296,9 +296,9 @@ Parameters:
 We specify `y=4` as the 4th column contains the labels to be predicted and run
 the `splitXY.dml` algorithm on our training and test data sets.
 
-    $ ./runStandaloneSystemML.sh scripts/utils/splitXY.dml -nvargs X=data/haberman.part/1 y=4 OX=data/haberman.train.data.csv OY=data/haberman.train.labels.csv ofmt="csv"
+    $ ./runStandaloneSystemDS.sh scripts/utils/splitXY.dml -nvargs X=data/haberman.part/1 y=4 OX=data/haberman.train.data.csv OY=data/haberman.train.labels.csv ofmt="csv"
 
-    $ ./runStandaloneSystemML.sh scripts/utils/splitXY.dml -nvargs X=data/haberman.part/2 y=4 OX=data/haberman.test.data.csv  OY=data/haberman.test.labels.csv  ofmt="csv"
+    $ ./runStandaloneSystemDS.sh scripts/utils/splitXY.dml -nvargs X=data/haberman.part/2 y=4 OX=data/haberman.test.data.csv  OY=data/haberman.test.labels.csv  ofmt="csv"
 
 ## Training and Testing the Model
 
@@ -315,11 +315,11 @@ Now we need to train our model using the `l2-svm.dml` algorithm.
 
 The `l2-svm.dml` algorithm is used on our training data sample to train the model.
 
-    $ ./runStandaloneSystemML.sh scripts/algorithms/l2-svm.dml -nvargs X=data/haberman.train.data.csv Y=data/haberman.train.labels.csv model=data/l2-svm-model.csv fmt="csv" Log=data/l2-svm-log.csv
+    $ ./runStandaloneSystemDS.sh scripts/algorithms/l2-svm.dml -nvargs X=data/haberman.train.data.csv Y=data/haberman.train.labels.csv model=data/l2-svm-model.csv fmt="csv" Log=data/l2-svm-log.csv
 
 The `l2-svm-predict.dml` algorithm is used on our test data sample to predict the labels based on the trained model.
 
-    $ ./runStandaloneSystemML.sh scripts/algorithms/l2-svm-predict.dml -nvargs X=data/haberman.test.data.csv Y=data/haberman.test.labels.csv model=data/l2-svm-model.csv fmt="csv" confusion=data/l2-svm-confusion.csv
+    $ ./runStandaloneSystemDS.sh scripts/algorithms/l2-svm-predict.dml -nvargs X=data/haberman.test.data.csv Y=data/haberman.test.labels.csv model=data/l2-svm-model.csv fmt="csv" confusion=data/l2-svm-confusion.csv
 
 The console output should show the accuracy of the trained model in percent, i.e.:
 
@@ -337,7 +337,7 @@ The console output should show the accuracy of the trained model in percent, i.e
     15/09/01 01:32:51 INFO conf.DMLConfig: Updating sysml.parallel.ops with value true
     15/09/01 01:32:51 INFO conf.DMLConfig: Updating sysml.parallel.io with value true
     Accuracy (%): 74.14965986394557
-    15/09/01 01:32:52 INFO api.DMLScript: SystemML Statistics:
+    15/09/01 01:32:52 INFO api.DMLScript: SystemDS Statistics:
     Total execution time:		0.130 sec.
     Number of executed MR Jobs:	0.
 
@@ -372,18 +372,18 @@ Refer to the [Algorithms Reference](algorithms-reference.html) for more details.
 For this example, we'll use a standalone wrapper executable, `bin/systemml`, which can be
 run directly within the project's source directory when built locally.
 
-After you build SystemML from source (`mvn clean package`), the standalone mode can be executed
+After you build SystemDS from source (`mvn clean package`), the standalone mode can be executed
 either on Linux or OS X using the `./bin/systemml` script, or on Windows using the
 `.\bin\systemml.bat` batch file.
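+
+As a minimal sanity check after building (the `hello.dml` file here is hypothetical):
+
+    echo "print('hello world');" > hello.dml
+    ./bin/systemml hello.dml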
 
 If you run the script from the project root folder `./` or from the `./bin` folder, then the
-output files from running SystemML will be created inside the `./temp` folder to keep them separate
-from the SystemML source files managed by Git. The output files for this example will be created
+output files from running SystemDS will be created inside the `./temp` folder to keep them separate
+from the SystemDS source files managed by Git. The output files for this example will be created
 under the `./temp` folder.
 
-The runtime behavior and logging behavior of SystemML can be customized by editing the files
-`./conf/SystemML-config.xml` and `./conf/log4j.properties`. Both files will be created from their
-corresponding `*.template` files during the first execution of the SystemML executable script.
+The runtime behavior and logging behavior of SystemDS can be customized by editing the files
+`./conf/SystemDS-config.xml` and `./conf/log4j.properties`. Both files will be created from their
+corresponding `*.template` files during the first execution of the SystemDS executable script.
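+
+To customize them before a first run, the two files can also be created manually from the
+shipped templates (a sketch, assuming the `*.template` naming noted above):
+
+    cp ./conf/SystemDS-config.xml.template ./conf/SystemDS-config.xml
+    cp ./conf/log4j.properties.template ./conf/log4j.properties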
 
 When invoking `./bin/systemml` or `.\bin\systemml.bat` with any of the prepackaged DML scripts,
 you can omit the relative path to the DML script file. The following two commands are equivalent:
@@ -397,11 +397,11 @@ of the DML scripts.
 
 ## Linear Regression Example
 
-As an example of the capabilities and power of SystemML and DML, let's consider the Linear Regression algorithm.
+As an example of the capabilities and power of SystemDS and DML, let's consider the Linear Regression algorithm.
 We require sets of data to train and test our model. To obtain this data, we can either use real data or
 generate data for our algorithm. The
 [UCI Machine Learning Repository Datasets](https://archive.ics.uci.edu/ml/datasets.html) is one location for real data.
-Use of real data typically involves some degree of data wrangling. In the following example, we will use SystemML to
+Use of real data typically involves some degree of data wrangling. In the following example, we will use SystemDS to
 generate random data to train and test our model.
 
 This example consists of the following parts:
@@ -413,7 +413,7 @@ This example consists of the following parts:
   * [Train Model on First Sample](#train-model-on-first-sample)
   * [Test Model on Second Sample](#test-model-on-second-sample)
 
-SystemML is distributed in several packages, including a standalone package. We'll operate in Standalone mode in this
+SystemDS is distributed in several packages, including a standalone package. We'll operate in Standalone mode in this
 example.
 
 <a name="run-dml-script-to-generate-random-data" />
@@ -434,7 +434,7 @@ This generates the following files inside the `./temp` folder:
     linRegData.csv.mtd  # Metadata file
     perc.csv            # Used to generate two subsets of the data (for training and testing)
     perc.csv.mtd        # Metadata file
-    scratch_space       # SystemML scratch_space directory
+    scratch_space       # SystemDS scratch_space directory
 
 <a name="divide-generated-data-into-two-sample-groups" />
 
@@ -506,7 +506,7 @@ This splits column 51 off the data, resulting in the following files:
 ### Train Model on First Sample
 
 Now, we can train our model based on the first sample. To do this, we utilize the `LinearRegDS.dml` (Linear Regression
-Direct Solve) script. Note that SystemML also includes a `LinearRegCG.dml` (Linear Regression Conjugate Gradient)
+Direct Solve) script. Note that SystemDS also includes a `LinearRegCG.dml` (Linear Regression Conjugate Gradient)
 algorithm for situations where the number of features is large.
 
     ./bin/systemml ./scripts/algorithms/LinearRegDS.dml -nvargs X=linRegData.train.data.csv Y=linRegData.train.labels.csv B=betas.csv fmt=csv
@@ -601,9 +601,9 @@ For convenience, we can encapsulate our DML invocations in a single script:
 # Troubleshooting
 
 If you encounter a `"java.lang.OutOfMemoryError"` you can edit the invocation
-script (`runStandaloneSystemML.sh` or `runStandaloneSystemML.bat`) to increase
+script (`runStandaloneSystemDS.sh` or `runStandaloneSystemDS.bat`) to increase
 the memory available to the JVM, for example:
 
     java -Xmx16g -Xms4g -Xmn1g -cp ${CLASSPATH} org.apache.sysml.api.DMLScript \
-         -f ${SCRIPT_FILE} -exec singlenode -config SystemML-config.xml \
+         -f ${SCRIPT_FILE} -exec singlenode -config SystemDS-config.xml \
          $@
diff --git a/troubleshooting-guide.md b/troubleshooting-guide.md
index b4eac52..545d975 100644
--- a/troubleshooting-guide.md
+++ b/troubleshooting-guide.md
@@ -30,9 +30,9 @@ limitations under the License.
 
 ## ClassNotFoundException for commons-math3
 
-The Apache Commons Math library is utilized by SystemML. The commons-math3
+The Apache Commons Math library is utilized by SystemDS. The commons-math3
 dependency is included with Spark and with newer versions of Hadoop. Running
-SystemML on an older Hadoop cluster can potentially generate an error such
+SystemDS on an older Hadoop cluster can potentially generate an error such
 as the following due to the missing commons-math3 dependency:
 
 	java.lang.ClassNotFoundException: org.apache.commons.math3.linear.RealMatrix
@@ -47,7 +47,7 @@ from `provided` to `compile`.
 		<scope>compile</scope>
 	</dependency>
 
-SystemML can then be rebuilt with the `commons-math3` dependency using
+SystemDS can then be rebuilt with the `commons-math3` dependency using
 Maven (`mvn clean package -P distribution`).
 
 ## OutOfMemoryError in Hadoop Reduce Phase 
@@ -83,22 +83,22 @@ These configurations can be modified **globally** by inserting/modifying the fol
      <value>0.0</value>
     </property>
 
-They can also be configured on a **per SystemML-task basis** by inserting the following in `SystemML-config.xml`.
+They can also be configured on a **per SystemDS-task basis** by inserting the following in `SystemDS-config.xml`.
 
     <mapred.job.shuffle.merge.percent>0.2</mapred.job.shuffle.merge.percent>
     <mapred.job.shuffle.input.buffer.percent>0.2</mapred.job.shuffle.input.buffer.percent>
     <mapred.job.reduce.input.buffer.percent>0</mapred.job.reduce.input.buffer.percent>
 
-Note: The default `SystemML-config.xml` is located in `<path to SystemML root>/conf/`. It is passed to SystemML using the `-config` argument:
+Note: The default `SystemDS-config.xml` is located in `<path to SystemDS root>/conf/`. It is passed to SystemDS using the `-config` argument:
 
-    hadoop jar SystemML.jar [-? | -help | -f <filename>] (-config <config_filename>) ([-args | -nvargs] <args-list>)
+    hadoop jar SystemDS.jar [-? | -help | -f <filename>] (-config <config_filename>) ([-args | -nvargs] <args-list>)
     
-See [Invoking SystemML in Hadoop Batch Mode](hadoop-batch-mode.html) for details of the syntax. 
+See [Invoking SystemDS in Hadoop Batch Mode](hadoop-batch-mode.html) for details of the syntax. 
 
 ## Total size of serialized results is bigger than spark.driver.maxResultSize
 
 Spark aborts a job if the estimated result size of a collect exceeds `spark.driver.maxResultSize`, to avoid out-of-memory errors in the driver.
-However, SystemML's optimizer has estimates the memory required for each operator and provides guards against these out-of-memory errors in driver.
+However, SystemDS's optimizer estimates the memory required for each operator and guards against these out-of-memory errors in the driver.
 So, we recommend setting the configuration `--conf spark.driver.maxResultSize=0`.
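+
+The flag can be passed per job or persisted in Spark's defaults (a sketch; `myscript.dml`
+and the `$SPARK_HOME` location are hypothetical):
+
+    # per job:
+    spark-submit --conf spark.driver.maxResultSize=0 SystemDS.jar -f myscript.dml
+    # or for all jobs submitted through this Spark installation:
+    echo "spark.driver.maxResultSize 0" >> $SPARK_HOME/conf/spark-defaults.conf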
 
 ## File does not exist on HDFS/LFS error from remote parfor
@@ -129,7 +129,7 @@ To avoid false-positive errors due to network failures in case of compute-bound
 ## Advanced developer statistics
 
 A few of our operators (for example, the convolution-related operators) and the GPU backend allow an expert user to collect advanced statistics
-by setting the configuration `systemml.stats.extraGPU` and `systemml.stats.extraDNN` in the file SystemML-config.xml. 
+by setting the configuration `systemml.stats.extraGPU` and `systemml.stats.extraDNN` in the file SystemDS-config.xml. 
 
 ## Out-Of-Memory on executors