Posted to issues@kylin.apache.org by "fengYu (JIRA)" <ji...@apache.org> on 2015/09/10 05:39:45 UTC

[jira] [Created] (KYLIN-1021) upload dependent jars of kylin to HDFS and set tmpjars

fengYu created KYLIN-1021:
-----------------------------

             Summary: upload dependent jars of kylin to HDFS and set tmpjars
                 Key: KYLIN-1021
                 URL: https://issues.apache.org/jira/browse/KYLIN-1021
             Project: Kylin
          Issue Type: Improvement
    Affects Versions: v1.0
            Reporter: fengYu


As [~Shaofengshi] says on the mailing list: Regarding your question about the jar files being located on local disk instead of HDFS, yes, the hadoop/hive/hbase jars should exist on local disk, at the same locations, on every machine of the hadoop cluster; Kylin will not upload those jars. Please check and ensure the consistency of your hadoop cluster.

However, our hadoop cluster is managed by a hadoop administrator, and we have no permission to log in to those machines. Even if we did, copying all the files to hundreds of machines would be a painful job (I do not know of a tool that does this well).

Also, I could not find any documentation about this requirement (if such a document exists, please point me to it)...

I changed my source code to create a directory under the kylin tmp directory (kylin.hdfs.working.dir/kylin_metadata) and, when submitting a mapreduce job, upload all dependent jars to that directory if it is empty (this only happens the first time); the HDFS locations are then set as the tmpjars of the mapreduce job (just like kylin sets tmpfiles before submitting a job). This is automated and makes deploying kylin easier.
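A minimal sketch of the idea, not the actual patch: the class name, the HDFS directory, and the list of local jars below are hypothetical; only the Hadoop FileSystem API and the standard "tmpjars" job property are real. The helper uploads the jars once (if the target directory does not yet exist) and returns a comma-separated list of qualified HDFS paths suitable for tmpjars:

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DependentJarUploader {

    /**
     * Upload the given local jars to jarDir on HDFS, but only if the
     * directory does not exist yet (i.e. on the first job submission),
     * and return the comma-separated HDFS paths for "tmpjars".
     */
    public static String uploadJarsIfNeeded(Configuration conf, Path jarDir,
                                            File[] localJars) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        if (!fs.exists(jarDir)) {
            fs.mkdirs(jarDir);
            for (File jar : localJars) {
                // copy each dependent jar from local disk to the shared HDFS dir
                fs.copyFromLocalFile(new Path(jar.getAbsolutePath()),
                        new Path(jarDir, jar.getName()));
            }
        }
        StringBuilder tmpjars = new StringBuilder();
        for (File jar : localJars) {
            if (tmpjars.length() > 0) {
                tmpjars.append(',');
            }
            // fully qualified paths so the job client resolves them on HDFS
            tmpjars.append(fs.makeQualified(new Path(jarDir, jar.getName())).toString());
        }
        return tmpjars.toString();
    }
}
{code}

Before job submission the result would be set on the job configuration, e.g. conf.set("tmpjars", ...); the Hadoop job client ships every jar listed in tmpjars to the cluster via the distributed cache, so the jars no longer need to exist on the local disk of each node.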



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)