Posted to user@kylin.apache.org by Na Zhai <na...@kyligence.io> on 2019/03/15 09:02:37 UTC

Re: Build cube error

Hi yangxc,

You can check hive.log first. If you find nothing useful there, check the logs of the MapReduce job; they may reveal the root cause of this error.
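For example, the HiveServer2 log and the failed job's aggregated YARN logs can be inspected roughly like this. The application id below is taken from the tracking URL in your error output; the hive.log path assumes the HDP default and may differ in your environment:

```shell
# HiveServer2 / Hive log on the node running HiveServer2
# (default HDP location; adjust if your logs live elsewhere)
tail -n 200 /var/log/hive/hiveserver2.log

# Aggregated logs of the failed MapReduce job; the application id
# comes from the tracking URL in the error message
# (http://it-ete-002:8088/proxy/application_1552534984103_0011/)
yarn logs -applicationId application_1552534984103_0011 | less
```

Look in the reducer's task logs for the first exception; "return code 2 from MapRedTask" is only a generic wrapper around the real failure.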


Sent from Mail for Windows 10<https://go.microsoft.com/fwlink/?LinkId=550986>

________________________________
From: yangxc@staryea.com <ya...@staryea.com>
Sent: Friday, March 15, 2019 10:23:37 AM
To: user
Subject: Build cube error

        Environment: Ambari HDP
                     Kylin 2.5
                     Hive is connected via Beeline
        The configuration is as follows:
        [inline image: configuration screenshot, not available in the plain-text archive]
        ERROR when building the cube; the error is as follows:
        java.io.IOException: OS command error exit with return code: 2, error message: Connecting to jdbc:hive2://it-ete-001:10000

Connected to: Apache Hive (version 1.2.1000.2.6.4.0-91)
Driver: Hive JDBC (version 1.2.1000.2.6.4.0-91)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://it-ete-001:10000> USE it_ete_db;
No rows affected (0.613 seconds)
0: jdbc:hive2://it-ete-001:10000>
0: jdbc:hive2://it-ete-001:10000> set mapreduce.job.reduces=1;
No rows affected (0.006 seconds)
0: jdbc:hive2://it-ete-001:10000>
0: jdbc:hive2://it-ete-001:10000> set hive.merge.mapredfiles=false;
No rows affected (0.005 seconds)
0: jdbc:hive2://it-ete-001:10000>
0: jdbc:hive2://it-ete-001:10000> INSERT OVERWRITE TABLE kylin_intermediate_pvuv_cube_a88a7883_c9fb_e867_4540_7fe856af6624 SELECT * FROM kylin_intermediate_pvuv_cube_a88a7883_c9fb_e867_4540_7fe856af6624 DISTRIBUTE BY WEB_ACCESS_FACT_TB1_DAY,WEB_ACCESS_FACT_TB1_REGIONID,WEB_ACCESS_FACT_TB1_CITYID;
INFO  : Number of reduce tasks not specified. Defaulting to jobconf value of: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1552534984103_0011
INFO  : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:it-ete-cluster, Ident: (HDFS_DELEGATION_TOKEN token 23 for star_bo)
INFO  : The url to track the job: http://it-ete-002:8088/proxy/application_1552534984103_0011/
INFO  : Starting Job = job_1552534984103_0011, Tracking URL = http://it-ete-002:8088/proxy/application_1552534984103_0011/
INFO  : Kill Command = /usr/hdp/2.6.4.0-91/hadoop/bin/hadoop job  -kill job_1552534984103_0011
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2019-03-15 10:09:07,523 Stage-1 map = 0%,  reduce = 0%
INFO  : 2019-03-15 10:09:29,007 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.21 sec
INFO  : 2019-03-15 10:09:43,936 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.21 sec
INFO  : MapReduce Total cumulative CPU time: 4 seconds 210 msec
ERROR : Ended Job = job_1552534984103_0011 with errors
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
Closing: 0: jdbc:hive2://it-ete-001:10000
The command is:
cat >/tmp/1552615724425.hql<<EOL
USE it_ete_db;
set mapreduce.job.reduces=1;
set hive.merge.mapredfiles=false;
INSERT OVERWRITE TABLE kylin_intermediate_pvuv_cube_a88a7883_c9fb_e867_4540_7fe856af6624 SELECT * FROM kylin_intermediate_pvuv_cube_a88a7883_c9fb_e867_4540_7fe856af6624 DISTRIBUTE BY WEB_ACCESS_FACT_TB1_DAY,WEB_ACCESS_FACT_TB1_REGIONID,WEB_ACCESS_FACT_TB1_CITYID;
EOL
beeline -n star_bo --hiveconf hive.security.authorization.sqlstd.confwhitelist.append='mapreduce.job.*|dfs.*' -u jdbc:hive2://it-ete-001:10000 -p star_bo --hiveconf hive.merge.mapredfiles=false --hiveconf hive.auto.convert.join=true --hiveconf dfs.replication=2 --hiveconf hive.exec.compress.output=true --hiveconf hive.auto.convert.join.noconditionaltask=true --hiveconf mapreduce.job.split.metainfo.maxsize=-1 --hiveconf hive.merge.mapfiles=false --hiveconf hive.auto.convert.join.noconditionaltask.size=100000000 --hiveconf hive.stats.autogather=true -f /tmp/1552615724425.hql;ret_code=$?;rm -f /tmp/1552615724425.hql;exit $ret_code
        at org.apache.kylin.common.util.CliCommandExecutor.execute(CliCommandExecutor.java:95)
        at org.apache.kylin.source.hive.RedistributeFlatHiveTableStep.redistributeTable(RedistributeFlatHiveTableStep.java:62)
        at org.apache.kylin.source.hive.RedistributeFlatHiveTableStep.doWork(RedistributeFlatHiveTableStep.java:113)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:163)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:69)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:163)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:113)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

        How can this be solved?
________________________________
yangxc@staryea.com