Posted to dev@kylin.apache.org by Yagyank Chadha <ya...@gmail.com> on 2016/04/05 08:33:12 UTC

Kylin sample cube giving error

Hello Kylin Developers,

I am trying to run the sample cube provided by Kylin
(http://kylin.apache.org/docs15/tutorial/kylin_sample.html). I am stuck on
the step that builds the cube.

The build fails at the very first stage, #1 Step Name: Create
Intermediate Flat Hive Table.

Below is the output I am getting in the log file:

*OS command error exit with 2 -- hive -e "USE default;
DROP TABLE IF EXISTS
kylin_intermediate_kylin_sales_cube_desc_20120101000000_20160403000000;

CREATE EXTERNAL TABLE IF NOT EXISTS
kylin_intermediate_kylin_sales_cube_desc_20120101000000_20160403000000
(
DEFAULT_KYLIN_SALES_PART_DT date
,DEFAULT_KYLIN_SALES_LEAF_CATEG_ID bigint
,DEFAULT_KYLIN_SALES_LSTG_SITE_ID int
,DEFAULT_KYLIN_CATEGORY_GROUPINGS_META_CATEG_NAME string
,DEFAULT_KYLIN_CATEGORY_GROUPINGS_CATEG_LVL2_NAME string
,DEFAULT_KYLIN_CATEGORY_GROUPINGS_CATEG_LVL3_NAME string
,DEFAULT_KYLIN_SALES_LSTG_FORMAT_NAME string
,DEFAULT_KYLIN_SALES_PRICE decimal(19,4)
,DEFAULT_KYLIN_SALES_SELLER_ID bigint
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\177'
STORED AS SEQUENCEFILE
LOCATION '/kylin/kylin_metadata/kylin-2f78b10c-cff6-4d2c-bef4-089b3831c2d2/kylin_intermediate_kylin_sales_cube_desc_20120101000000_20160403000000';

SET dfs.replication=2;
SET dfs.block.size=32000000;
SET hive.exec.compress.output=true;
SET hive.auto.convert.join.noconditionaltask=true;
SET hive.auto.convert.join.noconditionaltask.size=300000000;
SET mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET mapred.output.compression.type=BLOCK;
SET hive.merge.size.per.task=256000000;
SET hive.support.concurrency=false;
SET mapreduce.job.split.metainfo.maxsize=-1;
INSERT OVERWRITE TABLE
kylin_intermediate_kylin_sales_cube_desc_20120101000000_20160403000000
SELECT
KYLIN_SALES.PART_DT
,KYLIN_SALES.LEAF_CATEG_ID
,KYLIN_SALES.LSTG_SITE_ID
,KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME
,KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME
,KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME
,KYLIN_SALES.LSTG_FORMAT_NAME
,KYLIN_SALES.PRICE
,KYLIN_SALES.SELLER_ID
FROM DEFAULT.KYLIN_SALES as KYLIN_SALES
INNER JOIN DEFAULT.KYLIN_CAL_DT as KYLIN_CAL_DT
ON KYLIN_SALES.PART_DT = KYLIN_CAL_DT.CAL_DT
INNER JOIN DEFAULT.KYLIN_CATEGORY_GROUPINGS as KYLIN_CATEGORY_GROUPINGS
ON KYLIN_SALES.LEAF_CATEG_ID = KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID
AND KYLIN_SALES.LSTG_SITE_ID = KYLIN_CATEGORY_GROUPINGS.SITE_ID
WHERE (KYLIN_SALES.PART_DT >= '2012-01-01' AND KYLIN_SALES.PART_DT <
'2016-04-03')
;

"

Logging initialized using configuration in
jar:file:/usr/lib/hive/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
OK
Time taken: 1.183 seconds
OK
Time taken: 0.169 seconds
OK
Time taken: 0.773 seconds
Query ID = root_20160405115931_e7622f84-9f30-4a1e-acad-91e45effb2f5
Total jobs = 3
Execution log at:
/tmp/root/root_20160405115931_e7622f84-9f30-4a1e-acad-91e45effb2f5.log
2016-04-05 11:59:36	Starting to launch local task to process map
join;	maximum memory = 477626368
2016-04-05 11:59:38	Dump the side-table for tag: 1 with group count:
144 into file: file:/home/hduser/iotmp/5518ec6b-ebf5-4aa3-8984-dd6269959b30/hive_2016-04-05_11-59-31_741_4864253870930847560-1/-local-10004/HashTable-Stage-11/MapJoin-mapfile01--.hashtable
2016-04-05 11:59:39	Uploaded 1 File to:
file:/home/hduser/iotmp/5518ec6b-ebf5-4aa3-8984-dd6269959b30/hive_2016-04-05_11-59-31_741_4864253870930847560-1/-local-10004/HashTable-Stage-11/MapJoin-mapfile01--.hashtable
(10893 bytes)
2016-04-05 11:59:39	Dump the side-table for tag: 0 with group count:
731 into file: file:/home/hduser/iotmp/5518ec6b-ebf5-4aa3-8984-dd6269959b30/hive_2016-04-05_11-59-31_741_4864253870930847560-1/-local-10004/HashTable-Stage-11/MapJoin-mapfile10--.hashtable
2016-04-05 11:59:39	Uploaded 1 File to:
file:/home/hduser/iotmp/5518ec6b-ebf5-4aa3-8984-dd6269959b30/hive_2016-04-05_11-59-31_741_4864253870930847560-1/-local-10004/HashTable-Stage-11/MapJoin-mapfile10--.hashtable
(271350 bytes)
2016-04-05 11:59:39	End of local task; Time Taken: 2.208 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.RuntimeException: native snappy library not available:
SnappyCompressor has not been loaded.
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:622)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:566)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:675)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
	at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:644)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:676)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:754)
	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:414)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:644)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:657)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:660)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:756)
	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:414)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
	at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:122)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
	at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
	at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:162)
	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:508)
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: native snappy library not
available: SnappyCompressor has not been loaded.
	at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:69)
	at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:133)
	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:148)
	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:163)
	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1261)
	at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1154)
	at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1509)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:528)
	at org.apache.hadoop.hive.ql.exec.Utilities.createSequenceWriter(Utilities.java:1508)
	at org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat.getHiveRecordWriter(HiveSequenceFileOutputFormat.java:64)
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:261)
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:246)
	... 32 more
org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.RuntimeException: native snappy library not available:
SnappyCompressor has not been loaded.
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:622)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:566)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1010)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:616)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:199)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: native snappy library not
available: SnappyCompressor has not been loaded.
	at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:69)
	at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:133)
	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:148)
	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:163)
	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1261)
	at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1154)
	at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1509)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:528)
	at org.apache.hadoop.hive.ql.exec.Utilities.createSequenceWriter(Utilities.java:1508)
	at org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat.getHiveRecordWriter(HiveSequenceFileOutputFormat.java:64)
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:261)
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:246)
	... 18 more
org.apache.hadoop.hive.ql.metadata.HiveException:
org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.RuntimeException: native snappy library not available:
SnappyCompressor has not been loaded.
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:577)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1010)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:616)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:630)
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:199)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.RuntimeException: native snappy library not available:
SnappyCompressor has not been loaded.
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:622)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:566)
	... 16 more
Caused by: java.lang.RuntimeException: native snappy library not
available: SnappyCompressor has not been loaded.
	at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:69)
	at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:133)
	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:148)
	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:163)
	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1261)
	at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1154)
	at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1509)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:528)
	at org.apache.hadoop.hive.ql.exec.Utilities.createSequenceWriter(Utilities.java:1508)
	at org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat.getHiveRecordWriter(HiveSequenceFileOutputFormat.java:64)
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:261)
	at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:246)
	... 18 more
2016-04-05 11:59:41,721 Stage-11 map = 0%,  reduce = 0%
Ended Job = job_local1683457853_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:8080/
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-11:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec*
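The decisive line in the trace above is "native snappy library not
available: SnappyCompressor has not been loaded": Hive is asked to write
Snappy-compressed SequenceFiles, but the JVM cannot load the native Snappy
library. As a quick diagnostic sketch (assuming the cluster's `hadoop` CLI
is on the PATH; `checknative` is standard in Hadoop 2.x):

```shell
# Report which native compression codecs this Hadoop install can load.
# A "snappy: false" line would match the failure in the log above.
if command -v hadoop >/dev/null 2>&1; then
    hadoop checknative -a
else
    echo "hadoop CLI not found on PATH"
fi
```

If Snappy shows as unavailable, the options are to install the native
library on every node or to disable Snappy compression in the Kylin
configuration.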



-- 
Regards
*Yagyank chadha*

*Undergraduate student*
*Computer Science Engineering*
*Thapar University, Patiala*

Re: Kylin sample cube giving error

Posted by joffrey <or...@gmail.com>.
I disabled the same config in kylin_hive_site.xml (no Snappy codec), but
another error occurred while building the learn_kylin sample:

2016-04-19 20:17:21,829 DEBUG [pool-5-thread-2]
common.HadoopStatusChecker:72 : Hadoop job job_local1810951969_0001 status
: {"RemoteException":{"exception":"NumberFormatException","message":"For
input string:
\"local1810951969\"","javaClassName":"java.lang.NumberFormatException"}}
2016-04-19 20:17:21,996 ERROR [pool-5-thread-2]
common.HadoopStatusChecker:93 : error check status
java.lang.NullPointerException
at
org.apache.kylin.engine.mr.common.HadoopStatusGetter.get(HadoopStatusGetter.java:74)
at
org.apache.kylin.engine.mr.common.HadoopStatusChecker.checkStatus(HadoopStatusChecker.java:58)
at
org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:147)
at
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:114)
at
org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
at
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:114)
at
org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:124)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
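A separate problem is visible in the job ID: `job_local1810951969_0001`
means the job ran in Hadoop's LocalJobRunner, and Kylin's status checker
throws the NumberFormatException above because it expects a numeric
cluster job ID it can query for status. A hedged sketch of the usual
remedy, assuming a typical pseudo-distributed setup, is to point MapReduce
at YARN in `mapred-site.xml` and restart the Hadoop services:

```xml
<!-- Sketch for mapred-site.xml; the property name is standard, but
     verify the value against your Hadoop version before applying. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```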



2016-04-19 20:20 GMT+08:00 OriginGod Huang <or...@gmail.com>:

> I did comment those configs that contains 'compression',but I still got
> this error in kylin_job.log
>
> SET mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
>
>
> 2016-04-18 19:49 GMT+08:00 mahongbin [via Apache Kylin] <
> ml-node+s74782n4195h76@n6.nabble.com>:
>
>> hi, did you follow
>> http://kylin.apache.org/docs15/install/advance_settings.html to disable
>> compression? and can you be more specific on where's not working?
>>
>> On Mon, Apr 18, 2016 at 10:26 AM, joffrey <[hidden email]
>> <http:///user/SendEmail.jtp?type=node&node=4195&i=0>> wrote:
>>
>> > Hi, I meet the same problem too, I tried to disable the snappy
>> > configuration,
>> > but it didn't work, and I can't find the hard-code sql-expression in
>> source
>> > code, pls let me know if there is any progress now.
>> >
>> > --
>> > View this message in context:
>> >
>> http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4183.html
>> > Sent from the Apache Kylin mailing list archive at Nabble.com.
>> >
>>
>>
>>
>> --
>> Regards,
>>
>> *Bin Mahone | 马洪宾*
>> Apache Kylin: http://kylin.io
>> Github: https://github.com/binmahone
>>
>>
>>
>
>


--
View this message in context: http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4198.html
Sent from the Apache Kylin mailing list archive at Nabble.com.

Re: Kylin sample cube giving error

Posted by joffrey <or...@gmail.com>.
I did comment out the configs that contain 'compression', but I still see
these lines in kylin_job.log:

SET mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
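Kylin renders those `SET ...` statements into the generated Hive script
from its own conf directory (in Kylin 1.5, `conf/kylin_hive_conf.xml` and
`conf/kylin_job_conf.xml`), so commenting out Hadoop-side settings alone
will not remove them. A quick sketch to find where the codec is still
configured (the `KYLIN_HOME` default below is an assumption; adjust to
your install):

```shell
# Any hit below means a Kylin conf file still injects the Snappy codec
# into generated jobs; edit those files and resubmit the build.
grep -rn "SnappyCodec" "${KYLIN_HOME:-/usr/local/kylin}/conf" 2>/dev/null \
  || echo "no SnappyCodec entries under ${KYLIN_HOME:-/usr/local/kylin}/conf"
```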


2016-04-18 19:49 GMT+08:00 mahongbin [via Apache Kylin] <
ml-node+s74782n4195h76@n6.nabble.com>:

> hi, did you follow
> http://kylin.apache.org/docs15/install/advance_settings.html to disable
> compression? and can you be more specific on where's not working?
>
> On Mon, Apr 18, 2016 at 10:26 AM, joffrey <[hidden email]
> <http:///user/SendEmail.jtp?type=node&node=4195&i=0>> wrote:
>
> > Hi, I meet the same problem too, I tried to disable the snappy
> > configuration,
> > but it didn't work, and I can't find the hard-code sql-expression in
> source
> > code, pls let me know if there is any progress now.
> >
> > --
> > View this message in context:
> >
> http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4183.html
> > Sent from the Apache Kylin mailing list archive at Nabble.com.
> >
>
>
>
> --
> Regards,
>
> *Bin Mahone | 马洪宾*
> Apache Kylin: http://kylin.io
> Github: https://github.com/binmahone
>
>
>


--
View this message in context: http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4197.html
Sent from the Apache Kylin mailing list archive at Nabble.com.

Re: Kylin sample cube giving error

Posted by joffrey <or...@gmail.com>.
Anyway, thanks for the answer. I am trying to use HDP to avoid environment issues.


2016-04-19 20:29 GMT+08:00 OriginGod Huang <or...@gmail.com>:

> I disable the same config in kylin_hive_site.xml, no snappy codec,but
> another error occured, in build learn_kylin process:
>
> 2016-04-19 20:17:21,829 DEBUG [pool-5-thread-2]
> common.HadoopStatusChecker:72 : Hadoop job job_local1810951969_0001 status
> : {"RemoteException":{"exception":"NumberFormatException","message":"For
> input string:
> \"local1810951969\"","javaClassName":"java.lang.NumberFormatException"}}
> 2016-04-19 20:17:21,996 ERROR [pool-5-thread-2]
> common.HadoopStatusChecker:93 : error check status
> java.lang.NullPointerException
> at
> org.apache.kylin.engine.mr.common.HadoopStatusGetter.get(HadoopStatusGetter.java:74)
> at
> org.apache.kylin.engine.mr.common.HadoopStatusChecker.checkStatus(HadoopStatusChecker.java:58)
> at
> org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:147)
> at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:114)
> at
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
> at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:114)
> at
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:124)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
> 2016-04-19 20:20 GMT+08:00 OriginGod Huang <or...@gmail.com>:
>
>> I did comment out the configs that contain 'compression', but these
>> settings still show up in kylin_job.log:
>>
>> SET mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
>> SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
>>
>>
>> 2016-04-18 19:49 GMT+08:00 mahongbin [via Apache Kylin] <
>> ml-node+s74782n4195h76@n6.nabble.com>:
>>
>>> Hi, did you follow
>>> http://kylin.apache.org/docs15/install/advance_settings.html to disable
>>> compression? And can you be more specific about where it's not working?
>>>
>>> On Mon, Apr 18, 2016 at 10:26 AM, joffrey <[hidden email]> wrote:
>>>
>>> > Hi, I'm hitting the same problem. I tried to disable the Snappy
>>> > configuration, but it didn't work, and I can't find the hard-coded SQL
>>> > expression in the source code. Please let me know if there is any
>>> > progress.
>>> > --
>>> > View this message in context:
>>> >
>>> http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4183.html
>>> > Sent from the Apache Kylin mailing list archive at Nabble.com.
>>> >
>>>
>>>
>>>
>>> --
>>> Regards,
>>>
>>> *Bin Mahone | 马洪宾*
>>> Apache Kylin: http://kylin.io
>>> Github: https://github.com/binmahone
>>>
>>>
>>
>>
>
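
One possible reading of the NumberFormatException quoted above, sketched as a
quick shell check. This is an assumption, not a confirmed diagnosis: the
"local" prefix in the job id and the `mapreduce.framework.name` suggestion
are guesses based on the log, not something confirmed in this thread.

```shell
# Hedged diagnostic sketch: cluster MR job ids look like
# job_1460000000000_0001, with a numeric ResourceManager timestamp as the
# middle segment. The id in the log, job_local1810951969_0001, has a "local"
# prefix there, which suggests the job ran with Hadoop's local job runner --
# so Kylin's status checker has no cluster REST endpoint to query, and
# parsing that segment as a number fails.
job_id="job_local1810951969_0001"
seg=$(echo "$job_id" | cut -d_ -f2)
echo "middle segment: $seg"
case "$seg" in
  *[!0-9]*) echo "non-numeric segment: local job runner suspected" ;;
  *)        echo "numeric segment: job was submitted to a real cluster" ;;
esac
# If the local runner is the culprit, check that mapreduce.framework.name
# is set to "yarn" in the Hadoop client configuration Kylin picks up.
```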


--
View this message in context: http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4199.html
Sent from the Apache Kylin mailing list archive at Nabble.com.

Re: Kylin sample cube giving error

Posted by hongbin ma <ma...@apache.org>.
Hi, did you follow
http://kylin.apache.org/docs15/install/advance_settings.html to disable
compression? And can you be more specific about where it's not working?

On Mon, Apr 18, 2016 at 10:26 AM, joffrey <or...@gmail.com> wrote:

> Hi, I'm hitting the same problem. I tried to disable the Snappy
> configuration, but it didn't work, and I can't find the hard-coded SQL
> expression in the source code. Please let me know if there is any progress.
>
> --
> View this message in context:
> http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4183.html
> Sent from the Apache Kylin mailing list archive at Nabble.com.
>



-- 
Regards,

*Bin Mahone | 马洪宾*
Apache Kylin: http://kylin.io
Github: https://github.com/binmahone

Re: Kylin sample cube giving error

Posted by joffrey <or...@gmail.com>.
Hi, I'm hitting the same problem. I tried to disable the Snappy
configuration, but it didn't work, and I can't find the hard-coded SQL
expression in the source code. Please let me know if there is any progress.

--
View this message in context: http://apache-kylin.74782.x6.nabble.com/Kylin-sample-cube-giving-error-tp4052p4183.html
Sent from the Apache Kylin mailing list archive at Nabble.com.

Re: Kylin sample cube giving error

Posted by hongbin ma <ma...@apache.org>.
It seems Snappy compression is not enabled in your environment.


By default Kylin uses Snappy compression for the output of MR jobs, as well
as for HBase table storage, to reduce storage overhead. We did not choose LZO
compression in Kylin because Hadoop vendors tend not to include LZO in their
distributions due to license (GPL) issues. If compression-related issues
occur in your cubing job, you have two options: 1. disable compression, or
2. choose another compression codec such as LZO.

Compression settings only take effect after restarting the Kylin server
instance (`./kylin.sh stop` then `./kylin.sh start`). To disable compression
for MR jobs, modify $KYLIN_HOME/conf/kylin_job_conf.xml by removing all
configuration entries related to compression (just grep for the keyword
"compress"). To disable compression for HBase tables, open
$KYLIN_HOME/conf/kylin.properties and remove the line starting with
kylin.hbase.default.compression.codec.
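
To make the grep step concrete, here is a minimal sketch. The sample file
below is purely illustrative and stands in for the real
$KYLIN_HOME/conf/kylin_job_conf.xml, which contains more entries:

```shell
# Illustrative stand-in for $KYLIN_HOME/conf/kylin_job_conf.xml
# (property names here are examples, not the full real file).
conf=/tmp/kylin_job_conf_sample.xml
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.job.split.metainfo.maxsize</name>
    <value>-1</value>
  </property>
</configuration>
EOF
# Step 1: locate every compression-related entry to remove
grep -n "compress" "$conf"
# Step 2: delete those <property> blocks by hand, then restart Kylin so the
# change takes effect:
#   $KYLIN_HOME/bin/kylin.sh stop && $KYLIN_HOME/bin/kylin.sh start
```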



-- 
Regards,

*Bin Mahone | 马洪宾*
Apache Kylin: http://kylin.io
Github: https://github.com/binmahone