Posted to commits@kylin.apache.org by sh...@apache.org on 2018/11/28 05:30:46 UTC

[kylin] branch document updated: Update dev_env doc according to WuBin's blog

This is an automated email from the ASF dual-hosted git repository.

shaofengshi pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/document by this push:
     new 582b91b  Update dev_env doc according to WuBin's blog
582b91b is described below

commit 582b91bf348b49359d80b790ca39c39707cd892b
Author: shaofengshi <sh...@apache.org>
AuthorDate: Wed Nov 28 13:30:36 2018 +0800

    Update dev_env doc according to WuBin's blog
---
 website/_dev/dev_env.cn.md | 14 ++++++++++++--
 website/_dev/dev_env.md    | 16 +++++++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)
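The version requirement this patch documents can be summarized as a small lookup; the versions come from the diff below, while the script itself is only an illustrative sketch, not part of the patch:

```shell
# Illustrative sketch of the Kylin -> Spark requirement the patch documents.
# (Versions are from the doc change; the lookup itself is an assumption.)
kylin=2.5
case "$kylin" in
  2.3|2.4) spark="Spark 2.1" ;;
  2.5)     spark="Spark 2.3.2" ;;
  *)       spark="check the Apache Spark download page" ;;
esac
echo "Kylin $kylin needs $spark"
```

Changing `kylin` to 2.3 or 2.4 prints the older Spark 2.1 requirement instead.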

diff --git a/website/_dev/dev_env.cn.md b/website/_dev/dev_env.cn.md
index eb99604..c53da27 100644
--- a/website/_dev/dev_env.cn.md
+++ b/website/_dev/dev_env.cn.md
@@ -27,6 +27,10 @@ ambari-server start
 
 对于 hadoop 分布式,基本上启动 hadoop 集群,确保 HDFS,YARN,Hive,HBase 运行着即可。
 
+注意:
+
+* 为 YARN resource manager 分配 3-4GB 内存。
+* 升级 Sandbox 里的 Java 到 Java 8 (Kylin 2.5 需要 Java 8)。
 
 ## 开发机器的环境
 
@@ -44,7 +48,7 @@ ln -s /root/apache-maven-3.2.5/bin/mvn /usr/bin/mvn
 
 ### 安装 Spark
 
-在像 /usr/local/spark 这样的本地文件夹下手动安装 Spark;你需要确认所需要的 Spark 的版本,以及从 Spark 下载页面获取下载链接。 Kylin 2.3 - 2.5 需要 Spark 2.1; 例如:
+在像 /usr/local/spark 这样的本地文件夹下手动安装 Spark;你需要确认所需要的 Spark 的版本,以及从 Spark 下载页面获取下载链接。 Kylin 2.3 - 2.4 需要 Spark 2.1, Kylin 2.5 需要 Spark 2.3.2; 例如:
 
 {% highlight Groff markup %}
 wget -O /tmp/spark-2.1.2-bin-hadoop2.7.tgz https://archive.apache.org/dist/spark/spark-2.1.2/spark-2.1.2-bin-hadoop2.7.tgz
@@ -99,7 +103,7 @@ mvn test -fae -Dhdp.version=<hdp-version> -P sandbox
 ### 运行集成测试
 在真正运行集成测试前,需要为测试数据的填充运行一些端到端的 cube 构建作业,同时验证 cube 过程。然后是集成测试。
 
-其可能需要一段时间(也许一小时),请保持耐心。
+其可能需要一段时间(也许两个小时),请保持耐心。
  
 {% highlight Groff markup %}
 mvn verify -fae -Dhdp.version=<hdp-version> -P sandbox
@@ -123,6 +127,12 @@ npm install -g bower
 bower --allow-root install
 {% endhighlight %}
 
+如果在 bower install 的过程当中遇到问题,可以尝试命令:
+
+{% highlight Groff markup %}
+git config --global url."git://".insteadOf https://
+{% endhighlight %}
+
 注意,如果是在 Windows 上,安装完 bower,需要将 "bower.cmd" 的路径加入系统环境变量 'PATH' 中,然后运行:
 
 {% highlight Groff markup %}
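The "upgrade to Java 8" note this patch adds to both pages can be sanity-checked with a short shell snippet; the `sample` string below is a stand-in for real `java -version` output, and the parsing is illustrative rather than anything from the patch:

```shell
# Hedged sketch of the "Kylin 2.5 requires Java 8" check the patch mentions.
# The sample string stands in for real `java -version` output (an assumption).
sample='java version "1.8.0_181"'
ver=$(printf '%s\n' "$sample" | awk -F'"' '/version/ {print $2}')
case "$ver" in
  1.8.*) echo "Java 8 detected: $ver" ;;
  *)     echo "Please upgrade to Java 8 (Kylin 2.5 requires it)" ;;
esac
```

On a real sandbox VM you would feed in the actual `java -version 2>&1` output instead of `sample`.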
diff --git a/website/_dev/dev_env.md b/website/_dev/dev_env.md
index 0b98aaf..bc89d9d 100644
--- a/website/_dev/dev_env.md
+++ b/website/_dev/dev_env.md
@@ -27,6 +27,10 @@ With both command successfully run you can go to ambari home page at <http://you
 
 For other hadoop distribution, basically start the hadoop cluster, make sure HDFS, YARN, Hive, HBase are running.
 
+Note:
+
+* You may need to adjust the YARN configuration, allocating 3-4 GB of memory to the YARN resource manager.
+* The JDK in the sandbox VM might be old; please upgrade it manually to Java 8 (Kylin 2.5 requires Java 8).
 
 ## Environment on the dev machine
 
@@ -44,7 +48,7 @@ ln -s /root/apache-maven-3.2.5/bin/mvn /usr/bin/mvn
 
 ### Install Spark
 
-Manually install the Spark binary in in a local folder like /usr/local/spark. You need to check what's the right version for your Kylin version, and then get the download link from Apache Spark website. Kylin 2.3 to 2.5 requires Spark 2.1; For example:
+Manually install the Spark binary in a local folder like /usr/local/spark. You need to check the right Spark version for your Kylin version, and then get the download link from the Apache Spark website. Kylin 2.3 - 2.4 requires Spark 2.1, Kylin 2.5 requires Spark 2.3.2; For example:
 
 
 {% highlight Groff markup %}
@@ -70,7 +74,7 @@ First clone the Kylin project to your local:
 git clone https://github.com/apache/kylin.git
 {% endhighlight %}
 	
-Install Kylin artifacts to the maven repo
+Install the Kylin artifacts into the local Maven repository.
 
 {% highlight Groff markup %}
 mvn clean install -DskipTests
@@ -100,7 +104,7 @@ mvn test -fae -Dhdp.version=<hdp-version> -P sandbox
 ### Run integration tests
 Before actually running integration tests, need to run some end-to-end cube building jobs for test data population, in the meantime validating cubing process. Then comes with the integration tests.
 
-It might take a while (maybe one hour), please keep patient.
+It might take a while (maybe two hours), so please be patient.
  
 {% highlight Groff markup %}
 mvn verify -fae -Dhdp.version=<hdp-version> -P sandbox
@@ -124,6 +128,12 @@ npm install -g bower
 bower --allow-root install
 {% endhighlight %}
 
+If you encounter a network problem when running "bower install", you may try:
+
+{% highlight Groff markup %}
+git config --global url."git://".insteadOf https://
+{% endhighlight %}
+
 Note, if on Windows, after install bower, need to add the path of "bower.cmd" to system environment variable 'PATH', and then run:
 
 {% highlight Groff markup %}