Posted to commits@kylin.apache.org by li...@apache.org on 2020/01/03 14:11:41 UTC

svn commit: r1872288 - in /kylin/site: docs/tutorial/cube_spark.html docs30/tutorial/cube_spark.html docs31/tutorial/cube_spark.html feed.xml

Author: lidong
Date: Fri Jan  3 14:11:41 2020
New Revision: 1872288

URL: http://svn.apache.org/viewvc?rev=1872288&view=rev
Log:
spark build distinct value, dimension dic, uhc dic

Modified:
    kylin/site/docs/tutorial/cube_spark.html
    kylin/site/docs30/tutorial/cube_spark.html
    kylin/site/docs31/tutorial/cube_spark.html
    kylin/site/feed.xml

Modified: kylin/site/docs/tutorial/cube_spark.html
URL: http://svn.apache.org/viewvc/kylin/site/docs/tutorial/cube_spark.html?rev=1872288&r1=1872287&r2=1872288&view=diff
==============================================================================
--- kylin/site/docs/tutorial/cube_spark.html (original)
+++ kylin/site/docs/tutorial/cube_spark.html Fri Jan  3 14:11:41 2020
@@ -7147,6 +7147,23 @@ $KYLIN_HOME/bin/kylin.sh start</code></p
 
 <p>After all steps are successfully executed, the Cube becomes “Ready” and you can query it as normal.</p>
 
+<h2 id="using-spark-with-apache-livy">Using Spark with Apache Livy</h2>
+
+<p>You can use Livy by adding the following configuration:</p>
+
+<div class="highlight"><pre><code class="language-groff" data-lang="groff">kylin.engine.livy-conf.livy-enabled=true
+kylin.engine.livy-conf.livy-url=http://ip:8998
+kylin.engine.livy-conf.livy-key.file=hdfs:///path/kylin-job-3.0.0-SNAPSHOT.jar
+kylin.engine.livy-conf.livy-arr.jars=hdfs:///path/hbase-client-1.2.0-{$env.version}.jar,hdfs:///path/hbase-common-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop2-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-server-1.2.0-{$env.version}.jar,hdfs:///path/htrace-core-3.2.0-incubating.jar,hdfs:///path/metrics-core-2.2.0.jar</code></pre></div>
+
+<h2 id="optional">Optional</h2>
+
+<p>The cubing job consists of several steps, and the steps ‘extract fact table distinct value’, ‘build dimension dictionary’ and ‘build UHC dimension dictionary’ can also be executed by Spark. The configurations are as follows.</p>
+
+<div class="highlight"><pre><code class="language-groff" data-lang="groff">kylin.engine.spark-fact-distinct=true
+kylin.engine.spark-dimension-dictionary=true
+kylin.engine.spark-udc-dictionary=true</code></pre></div>
+
 <h2 id="troubleshooting">Troubleshooting</h2>
 
 <p>When you get an error, check “logs/kylin.log” first. It contains the full Spark command that Kylin executes, e.g.:</p>
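As a side note, the Livy-related entries added in the hunk above are plain key=value lines in kylin.properties. A minimal sketch of a sanity check for them follows; this helper is hypothetical and not part of Kylin, and it assumes a simple `key=value` properties format with `#` comments:

```python
# Hypothetical sanity check for the Livy properties added above.
# Not part of Kylin; a sketch assuming plain key=value lines.

REQUIRED_LIVY_KEYS = [
    "kylin.engine.livy-conf.livy-enabled",
    "kylin.engine.livy-conf.livy-url",
    "kylin.engine.livy-conf.livy-key.file",
]

def parse_properties(text):
    """Parse key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # keep only lines that actually contain '='
            props[key.strip()] = value.strip()
    return props

def missing_livy_keys(props):
    """Return required Livy keys that are absent or empty."""
    return [k for k in REQUIRED_LIVY_KEYS if not props.get(k)]

sample = """
kylin.engine.livy-conf.livy-enabled=true
kylin.engine.livy-conf.livy-url=http://ip:8998
"""
print(missing_livy_keys(parse_properties(sample)))
# prints ['kylin.engine.livy-conf.livy-key.file']
```

The same parser could be pointed at the real kylin.properties file to confirm all three keys are set before enabling the Livy engine.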

Modified: kylin/site/docs30/tutorial/cube_spark.html
URL: http://svn.apache.org/viewvc/kylin/site/docs30/tutorial/cube_spark.html?rev=1872288&r1=1872287&r2=1872288&view=diff
==============================================================================
--- kylin/site/docs30/tutorial/cube_spark.html (original)
+++ kylin/site/docs30/tutorial/cube_spark.html Fri Jan  3 14:11:41 2020
@@ -7147,6 +7147,22 @@ $KYLIN_HOME/bin/kylin.sh start</code></p
 
 <p>After all steps are successfully executed, the Cube becomes “Ready” and you can query it as normal.</p>
 
+<h2 id="using-spark-with-apache-livy">Using Spark with Apache Livy</h2>
+
+<p>You can use Livy by adding the following configuration:</p>
+
+<div class="highlight"><pre><code class="language-groff" data-lang="groff">kylin.engine.livy-conf.livy-enabled=true
+kylin.engine.livy-conf.livy-url=http://ip:8998
+kylin.engine.livy-conf.livy-key.file=hdfs:///path/kylin-job-3.0.0-SNAPSHOT.jar
+kylin.engine.livy-conf.livy-arr.jars=hdfs:///path/hbase-client-1.2.0-{$env.version}.jar,hdfs:///path/hbase-common-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop2-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-server-1.2.0-{$env.version}.jar,hdfs:///path/htrace-core-3.2.0-incubating.jar,hdfs:///path/metrics-core-2.2.0.jar</code></pre></div>
+
+<h2 id="optional">Optional</h2>
+
+<p>The cubing job consists of several steps, and the steps ‘extract fact table distinct value’ and ‘build dimension dictionary’ can also be executed by Spark. The configurations are as follows.</p>
+
+<div class="highlight"><pre><code class="language-groff" data-lang="groff">kylin.engine.spark-fact-distinct=true
+kylin.engine.spark-dimension-dictionary=true</code></pre></div>
+
 <h2 id="troubleshooting">Troubleshooting</h2>
 
 <p>When you get an error, check “logs/kylin.log” first. It contains the full Spark command that Kylin executes, e.g.:</p>

Modified: kylin/site/docs31/tutorial/cube_spark.html
URL: http://svn.apache.org/viewvc/kylin/site/docs31/tutorial/cube_spark.html?rev=1872288&r1=1872287&r2=1872288&view=diff
==============================================================================
--- kylin/site/docs31/tutorial/cube_spark.html (original)
+++ kylin/site/docs31/tutorial/cube_spark.html Fri Jan  3 14:11:41 2020
@@ -7147,6 +7147,23 @@ $KYLIN_HOME/bin/kylin.sh start</code></p
 
 <p>After all steps are successfully executed, the Cube becomes “Ready” and you can query it as normal.</p>
 
+<h2 id="using-spark-with-apache-livy">Using Spark with Apache Livy</h2>
+
+<p>You can use Livy by adding the following configuration:</p>
+
+<div class="highlight"><pre><code class="language-groff" data-lang="groff">kylin.engine.livy-conf.livy-enabled=true
+kylin.engine.livy-conf.livy-url=http://ip:8998
+kylin.engine.livy-conf.livy-key.file=hdfs:///path/kylin-job-3.0.0-SNAPSHOT.jar
+kylin.engine.livy-conf.livy-arr.jars=hdfs:///path/hbase-client-1.2.0-{$env.version}.jar,hdfs:///path/hbase-common-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-hadoop2-compat-1.2.0-{$env.version}.jar,hdfs:///path/hbase-server-1.2.0-{$env.version}.jar,hdfs:///path/htrace-core-3.2.0-incubating.jar,hdfs:///path/metrics-core-2.2.0.jar</code></pre></div>
+
+<h2 id="optional">Optional</h2>
+
+<p>The cubing job consists of several steps, and the steps ‘extract fact table distinct value’, ‘build dimension dictionary’ and ‘build UHC dimension dictionary’ can also be executed by Spark. The configurations are as follows.</p>
+
+<div class="highlight"><pre><code class="language-groff" data-lang="groff">kylin.engine.spark-fact-distinct=true
+kylin.engine.spark-dimension-dictionary=true
+kylin.engine.spark-udc-dictionary=true</code></pre></div>
+
 <h2 id="troubleshooting">Troubleshooting</h2>
 
 <p>When you get an error, check “logs/kylin.log” first. It contains the full Spark command that Kylin executes, e.g.:</p>

Modified: kylin/site/feed.xml
URL: http://svn.apache.org/viewvc/kylin/site/feed.xml?rev=1872288&r1=1872287&r2=1872288&view=diff
==============================================================================
--- kylin/site/feed.xml (original)
+++ kylin/site/feed.xml Fri Jan  3 14:11:41 2020
@@ -19,8 +19,8 @@
     <description>Apache Kylin Home</description>
     <link>http://kylin.apache.org/</link>
     <atom:link href="http://kylin.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Mon, 30 Dec 2019 05:59:52 -0800</pubDate>
-    <lastBuildDate>Mon, 30 Dec 2019 05:59:52 -0800</lastBuildDate>
+    <pubDate>Fri, 03 Jan 2020 05:59:19 -0800</pubDate>
+    <lastBuildDate>Fri, 03 Jan 2020 05:59:19 -0800</lastBuildDate>
     <generator>Jekyll v2.5.3</generator>
     
       <item>