Posted to issues@hive.apache.org by "Siddharth Seth (JIRA)" <ji...@apache.org> on 2015/08/11 23:03:46 UTC

[jira] [Commented] (HIVE-11524) LLAP: tez.runtime.compress doesn't appear to be honored for LLAP

    [ https://issues.apache.org/jira/browse/HIVE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14682506#comment-14682506 ] 

Siddharth Seth commented on HIVE-11524:
---------------------------------------

This points to the client config not being picked up. LLAP / Containers should behave no differently for this.
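
To make that concrete, here is a minimal sketch (not from the ticket) of the kind of check this implies: log the effective value of the two compression keys on the client side and again inside the task, and compare. The class and method names below are made up for illustration; only tez.runtime.compress comes from the ticket, and tez.runtime.compress.codec is assumed to be the standard Tez runtime codec key.

{noformat}
// Hypothetical probe (illustration only): log the effective values of the two
// compression keys so what the client sets can be compared with what the LLAP
// daemon / container task actually sees.
import org.apache.hadoop.conf.Configuration;

public class CompressConfigProbe {

    // "where" is just a label ("client", "task", ...); "conf" is whatever
    // Configuration object is in scope at the call site.
    public static void dump(String where, Configuration conf) {
        boolean compress = conf.getBoolean("tez.runtime.compress", false);
        String codec = conf.get("tez.runtime.compress.codec");
        System.out.println(where + ": tez.runtime.compress=" + compress
                + ", tez.runtime.compress.codec=" + codec);
    }
}
{noformat}

If the client side prints compress=false but the task side never sees the override, the problem is in how the client configuration is propagated to the task, not in anything LLAP-specific.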

> LLAP: tez.runtime.compress doesn't appear to be honored for LLAP
> ----------------------------------------------------------------
>
>                 Key: HIVE-11524
>                 URL: https://issues.apache.org/jira/browse/HIVE-11524
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sergey Shelukhin
>            Assignee: Siddharth Seth
>
> When running LLAP on an OpenStack cluster without Snappy installed, with tez.runtime.compress set to false and the codec set to Snappy, one still gets exceptions because the Snappy codec is absent:
> {noformat}
> 2015-08-10 11:14:30,440 [TezTaskRunner_attempt_1438943112941_0015_2_00_000000_0(attempt_1438943112941_0015_2_00_000000_0)] ERROR org.apache.hadoop.io.compress.snappy.SnappyCompressor: failed to load SnappyCompressor
> java.lang.NoSuchFieldError: clazz
> 	at org.apache.hadoop.io.compress.snappy.SnappyCompressor.initIDs(Native Method)
> 	at org.apache.hadoop.io.compress.snappy.SnappyCompressor.<clinit>(SnappyCompressor.java:57)
> 	at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:69)
> 	at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
> 	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
> 	at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:165)
> 	at org.apache.tez.runtime.library.common.sort.impl.IFile$Writer.<init>(IFile.java:153)
> 	at org.apache.tez.runtime.library.common.sort.impl.IFile$Writer.<init>(IFile.java:138)
> 	at org.apache.tez.runtime.library.common.writers.UnorderedPartitionedKVWriter$SpillCallable.callInternal(UnorderedPartitionedKVWriter.java:406)
> 	at org.apache.tez.runtime.library.common.writers.UnorderedPartitionedKVWriter$SpillCallable.callInternal(UnorderedPartitionedKVWriter.java:367)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at org.apache.tez.runtime.library.common.writers.UnorderedPartitionedKVWriter.finalSpill(UnorderedPartitionedKVWriter.java:612)
> 	at org.apache.tez.runtime.library.common.writers.UnorderedPartitionedKVWriter.close(UnorderedPartitionedKVWriter.java:521)
> 	at org.apache.tez.runtime.library.output.UnorderedKVOutput.close(UnorderedKVOutput.java:128)
> 	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.close(LogicalIOProcessorRuntimeTask.java:376)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:79)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:60)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1655)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:60)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:35)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}
> When it's set to true, the client complains about Snappy. When it's set to false, the client doesn't complain, but it still tries to use the Snappy codec.
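
For context (an editorial sketch, not part of the original report): the failing path in the trace only involves stock Hadoop classes, so it can be reproduced in isolation on a host without the native Snappy library. The class below is illustrative; the exact error depends on how Snappy is missing or mismatched, but the point is that the codec gets exercised at all, which honoring tez.runtime.compress=false should prevent.

{noformat}
// Illustration only: request a Snappy compressor the same way the
// SnappyCodec/CodecPool frames in the trace above do. On a host without the
// native Snappy library this fails, which is why compression should be
// skipped entirely (no codec touched) when tez.runtime.compress is false.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.SnappyCodec;

public class SnappyLoadRepro {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        SnappyCodec codec = new SnappyCodec();
        codec.setConf(conf);
        // getCompressor() calls codec.getCompressorType(), which checks that
        // the native Snappy code is loaded; these are the same frames as in
        // the trace above.
        Compressor compressor = CodecPool.getCompressor(codec, conf);
        System.out.println("Got compressor: " + compressor);
    }
}
{noformat}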



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)