Posted to commits@kylin.apache.org by sh...@apache.org on 2020/05/14 07:06:15 UTC

[kylin] branch document updated: Remove unnecessary extra characters

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/document by this push:
     new 00f1670  Remove unnecessary extra characters
00f1670 is described below

commit 00f1670f8646f755daa2d67d1b4e834355b3062d
Author: rupengwang <wa...@live.cn>
AuthorDate: Thu May 7 10:51:18 2020 +0800

    Remove unnecessary extra characters
---
 website/_docs/tutorial/cube_spark.cn.md      | 21 +++++++++++++++++++++
 website/_docs/tutorial/sql_reference.cn.md   |  2 +-
 website/_docs30/tutorial/cube_spark.cn.md    | 21 +++++++++++++++++++++
 website/_docs30/tutorial/sql_reference.cn.md |  2 +-
 website/_docs31/tutorial/cube_spark.cn.md    | 21 +++++++++++++++++++++
 website/_docs31/tutorial/sql_reference.cn.md |  2 +-
 6 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/website/_docs/tutorial/cube_spark.cn.md b/website/_docs/tutorial/cube_spark.cn.md
index 0bc7dee..68d3597 100644
--- a/website/_docs/tutorial/cube_spark.cn.md
+++ b/website/_docs/tutorial/cube_spark.cn.md
@@ -133,6 +133,27 @@ After Kylin starts, open the Kylin web site and, on the "Advanced Setting" page, edit the
 
 After all steps complete successfully, the Cube's status changes to "Ready" and you can query it as usual.
 
+## Using Spark via Apache Livy
+To enable Livy, modify the following configuration:
+
+{% highlight Groff markup %}
+kylin.engine.livy-conf.livy-enabled=true
+kylin.engine.livy-conf.livy-url=http://ip:8998
+kylin.engine.livy-conf.livy-key.file=hdfs:///path/kylin-job-3.0.0-SNAPSHOT.jar
+kylin.engine.livy-conf.livy-arr.jars=hdfs:///path/hbase-client-1.2.0-{$env.version}.jar,hdfs:///path/hbase-common-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop2-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-server-1.2.0-{$env.version}.jar,hdfs:///path/htrace-core-3.2.0-incubating.jar,hdfs:///path/metrics-core-2.2.0.jar
+{% endhighlight %}
+
+Note that there must be no spaces between the jar file paths.
+
+## Optional Features
+
+The 'extract fact table distinct value' and 'build dimension dictionary' build steps can now also be executed with Spark. The relevant configuration is as follows:
+
+{% highlight Groff markup %}
+kylin.engine.spark-fact-distinct=true
+kylin.engine.spark-dimension-dictionary=true
+{% endhighlight %}
+
 ## Troubleshooting
 
 When an error occurs, check "logs/kylin.log" first; it contains all the Spark commands executed by Kylin, for example:
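
The rule above that no spaces may appear between the jar paths can be checked mechanically before restarting Kylin. The following is a minimal illustrative sketch (the `validate_livy_jars` helper is a hypothetical name, not Kylin code); it only assumes the property value is a single comma-separated string, as shown in the configuration above:

```python
def validate_livy_jars(value: str) -> list[str]:
    """Split the comma-separated jar list for
    kylin.engine.livy-conf.livy-arr.jars, raising if any
    space is embedded or trailing (Kylin passes the raw
    string on, so whitespace breaks the jar paths)."""
    if value != value.strip() or " " in value.strip():
        raise ValueError("livy-arr.jars must not contain spaces")
    return value.split(",")

# Well-formed value: parses into individual jar paths.
jars = validate_livy_jars(
    "hdfs:///path/htrace-core-3.2.0-incubating.jar,"
    "hdfs:///path/metrics-core-2.2.0.jar"
)
```

A value such as `"a.jar, b.jar"` (space after the comma) would raise `ValueError` rather than silently producing a broken path.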
diff --git a/website/_docs/tutorial/sql_reference.cn.md b/website/_docs/tutorial/sql_reference.cn.md
index d645fff..43901af 100644
--- a/website/_docs/tutorial/sql_reference.cn.md
+++ b/website/_docs/tutorial/sql_reference.cn.md
@@ -185,7 +185,7 @@ SELECT cal_dt ,sum(price) AS sum_price FROM (SELECT kylin_cal_dt.cal_dt, kylin_s
 The ```INNER JOIN``` keyword returns rows when there is at least one match in the tables.
 Example:
 {% highlight Groff markup %}
-SELECT kylin_cal_dt.cal_dt, kylin_sales.price FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt ON kylin_sales.part_dt**** = kylin_cal_dt.cal_dt;
+SELECT kylin_cal_dt.cal_dt, kylin_sales.price FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt ON kylin_sales.part_dt = kylin_cal_dt.cal_dt;
 {% endhighlight %}
 
 ### LEFT JOIN {#LEFTJOIN}
diff --git a/website/_docs30/tutorial/cube_spark.cn.md b/website/_docs30/tutorial/cube_spark.cn.md
index 7379887..b0b1249 100644
--- a/website/_docs30/tutorial/cube_spark.cn.md
+++ b/website/_docs30/tutorial/cube_spark.cn.md
@@ -133,6 +133,27 @@ After Kylin starts, open the Kylin web site and, on the "Advanced Setting" page, edit the
 
 After all steps complete successfully, the Cube's status changes to "Ready" and you can query it as usual.
 
+## Using Spark via Apache Livy
+To enable Livy, modify the following configuration:
+
+{% highlight Groff markup %}
+kylin.engine.livy-conf.livy-enabled=true
+kylin.engine.livy-conf.livy-url=http://ip:8998
+kylin.engine.livy-conf.livy-key.file=hdfs:///path/kylin-job-3.0.0-SNAPSHOT.jar
+kylin.engine.livy-conf.livy-arr.jars=hdfs:///path/hbase-client-1.2.0-{$env.version}.jar,hdfs:///path/hbase-common-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop2-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-server-1.2.0-{$env.version}.jar,hdfs:///path/htrace-core-3.2.0-incubating.jar,hdfs:///path/metrics-core-2.2.0.jar
+{% endhighlight %}
+
+Note that there must be no spaces between the jar file paths.
+
+## Optional Features
+
+The 'extract fact table distinct value' and 'build dimension dictionary' build steps can now also be executed with Spark. The relevant configuration is as follows:
+
+{% highlight Groff markup %}
+kylin.engine.spark-fact-distinct=true
+kylin.engine.spark-dimension-dictionary=true
+{% endhighlight %}
+
 ## Troubleshooting
 
 When an error occurs, check "logs/kylin.log" first; it contains all the Spark commands executed by Kylin, for example:
diff --git a/website/_docs30/tutorial/sql_reference.cn.md b/website/_docs30/tutorial/sql_reference.cn.md
index 149523f..8231d07 100644
--- a/website/_docs30/tutorial/sql_reference.cn.md
+++ b/website/_docs30/tutorial/sql_reference.cn.md
@@ -185,7 +185,7 @@ SELECT cal_dt ,sum(price) AS sum_price FROM (SELECT kylin_cal_dt.cal_dt, kylin_s
 The ```INNER JOIN``` keyword returns rows when there is at least one match in the tables.
 Example:
 {% highlight Groff markup %}
-SELECT kylin_cal_dt.cal_dt, kylin_sales.price FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt ON kylin_sales.part_dt**** = kylin_cal_dt.cal_dt;
+SELECT kylin_cal_dt.cal_dt, kylin_sales.price FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt ON kylin_sales.part_dt = kylin_cal_dt.cal_dt;
 {% endhighlight %}
 
 ### LEFT JOIN {#LEFTJOIN}
diff --git a/website/_docs31/tutorial/cube_spark.cn.md b/website/_docs31/tutorial/cube_spark.cn.md
index 58037e3..184e0a7 100644
--- a/website/_docs31/tutorial/cube_spark.cn.md
+++ b/website/_docs31/tutorial/cube_spark.cn.md
@@ -133,6 +133,27 @@ After Kylin starts, open the Kylin web site and, on the "Advanced Setting" page, edit the
 
 After all steps complete successfully, the Cube's status changes to "Ready" and you can query it as usual.
 
+## Using Spark via Apache Livy
+To enable Livy, modify the following configuration:
+
+{% highlight Groff markup %}
+kylin.engine.livy-conf.livy-enabled=true
+kylin.engine.livy-conf.livy-url=http://ip:8998
+kylin.engine.livy-conf.livy-key.file=hdfs:///path/kylin-job-3.0.0-SNAPSHOT.jar
+kylin.engine.livy-conf.livy-arr.jars=hdfs:///path/hbase-client-1.2.0-{$env.version}.jar,hdfs:///path/hbase-common-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop2-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-server-1.2.0-{$env.version}.jar,hdfs:///path/htrace-core-3.2.0-incubating.jar,hdfs:///path/metrics-core-2.2.0.jar
+{% endhighlight %}
+
+Note that there must be no spaces between the jar file paths.
+
+## Optional Features
+
+The 'extract fact table distinct value' and 'build dimension dictionary' build steps can now also be executed with Spark. The relevant configuration is as follows:
+
+{% highlight Groff markup %}
+kylin.engine.spark-fact-distinct=true
+kylin.engine.spark-dimension-dictionary=true
+{% endhighlight %}
+
 ## Troubleshooting
 
 When an error occurs, check "logs/kylin.log" first; it contains all the Spark commands executed by Kylin, for example:
diff --git a/website/_docs31/tutorial/sql_reference.cn.md b/website/_docs31/tutorial/sql_reference.cn.md
index 28f6a92..2cd53a0 100644
--- a/website/_docs31/tutorial/sql_reference.cn.md
+++ b/website/_docs31/tutorial/sql_reference.cn.md
@@ -185,7 +185,7 @@ SELECT cal_dt ,sum(price) AS sum_price FROM (SELECT kylin_cal_dt.cal_dt, kylin_s
 The ```INNER JOIN``` keyword returns rows when there is at least one match in the tables.
 Example:
 {% highlight Groff markup %}
-SELECT kylin_cal_dt.cal_dt, kylin_sales.price FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt ON kylin_sales.part_dt**** = kylin_cal_dt.cal_dt;
+SELECT kylin_cal_dt.cal_dt, kylin_sales.price FROM kylin_sales INNER JOIN kylin_cal_dt AS kylin_cal_dt ON kylin_sales.part_dt = kylin_cal_dt.cal_dt;
 {% endhighlight %}
 
 ### LEFT JOIN {#LEFTJOIN}