Posted to dev@kylin.apache.org by 崔苗 <cu...@danale.com> on 2017/11/08 03:59:34 UTC

hadoop environment

We have some questions about the Hadoop environment:
1. How does Kylin find the Hadoop environment? If we export variables such as HIVE_HOME=/root/hive_2.10-0.10.0.0 in /etc/profile, will that profile file help Kylin find the Hive or HBase environment?
2. We have already installed Spark 2 in the cluster. How can we use this Spark 2 instead of the Spark shipped with Kylin?






Re: hadoop environment

Posted by Li Yang <li...@apache.org>.
Kylin detects the Hadoop deployment by running shell commands such as `hadoop`,
`hive`, and `hbase`. Make sure the versions you want are on the PATH.
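
For example, assuming the client binaries live under /opt (placeholder paths; adjust them to your installation), adding something like this to the profile sourced by the user running Kylin puts the wanted versions first on the PATH:

    # hypothetical install locations; adjust to your cluster layout
    export HADOOP_HOME=/opt/hadoop
    export HIVE_HOME=/opt/hive
    export HBASE_HOME=/opt/hbase
    export PATH=$HADOOP_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$PATH

    # sanity check: these are the commands Kylin runs to detect the environment
    which hadoop hive hbase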

Set SPARK_HOME to let Kylin use a specified Spark instead of the shipped
one.
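
A minimal sketch, assuming Spark 2 is installed under /opt/spark2 (a placeholder path):

    # hypothetical Spark 2 location; export it before starting Kylin
    export SPARK_HOME=/opt/spark2
    $KYLIN_HOME/bin/kylin.sh start

With SPARK_HOME set before startup, Kylin uses that Spark instead of the shipped one.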

On Wed, Nov 8, 2017 at 11:59 AM, 崔苗 <cu...@danale.com> wrote:

> We have some questions about the Hadoop environment:
> 1. How does Kylin find the Hadoop environment? If we export variables such as
> HIVE_HOME=/root/hive_2.10-0.10.0.0 in /etc/profile, will that profile file
> help Kylin find the Hive or HBase environment?
> 2. We have already installed Spark 2 in the cluster. How can we use this
> Spark 2 instead of the Spark shipped with Kylin?