Posted to user@tez.apache.org by Azuryy Yu <az...@gmail.com> on 2014/06/04 05:53:43 UTC

Tez Cannot find Hive UDF jars

Hi,
I am using Hadoop 2.4.0, Tez 0.5-SNAPSHOT, and Hive 0.13.0.

First, I added a jar in the Hive shell with: add jar test.jar;

I wrote some UDF functions in test.jar.

The Hive job succeeds when I set hive.execution.engine=mr, but when I set it to
'tez' and mapreduce.framework.name=yarn-tez, it always fails. Looking through the
container log, it throws ClassNotFoundException for classes in test.jar.

What additional configuration do I need? Thanks.
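For context, the Hive session was roughly the following; test.jar is the jar named above, but the function name, UDF class, and query are placeholders I have filled in for illustration:

```sql
-- Illustrative session sketch; my_udf, com.example.MyUDF, some_table
-- are placeholders, not names from this thread.
ADD JAR test.jar;
CREATE TEMPORARY FUNCTION my_udf AS 'com.example.MyUDF';
SET hive.execution.engine=tez;
SET mapreduce.framework.name=yarn-tez;
SELECT my_udf(col) FROM some_table LIMIT 10;
```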

Re: Tez Cannot find Hive UDF jars

Posted by Azuryy Yu <az...@gmail.com>.
Fixed.

Sorry, I forgot to put the Tez jars on HDFS.
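For anyone hitting the same thing: Tez finds its jars via the tez.lib.uris property, which must point at a location on HDFS. A minimal sketch, assuming the release tarball is uploaded to /apps/tez (the path and file name here are examples, not taken from this thread):

```xml
<!-- tez-site.xml fragment, after uploading the tarball, e.g.:
       hadoop fs -mkdir -p /apps/tez
       hadoop fs -put tez-0.5.0-incubating-SNAPSHOT.tar.gz /apps/tez/ -->
<property>
  <name>tez.lib.uris</name>
  <value>${fs.defaultFS}/apps/tez/tez-0.5.0-incubating-SNAPSHOT.tar.gz</value>
</property>
```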


On Wed, Jun 4, 2014 at 4:10 PM, Azuryy Yu <az...@gmail.com> wrote:


Re: Tez Cannot find Hive UDF jars

Posted by Azuryy Yu <az...@gmail.com>.
Thanks Bikas,

I cannot run the Tez MapReduce example either. I ran:
 hadoop jar tez-mapreduce-examples-0.5.0-incubating-SNAPSHOT.jar
orderedwordcount  wc/test.data  output

It failed. I found only one exception in the NM log:
2014-06-04 15:15:36,212 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: an
error occurred during container running
org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:539)
        at org.apache.hadoop.util.Shell.run(Shell.java:452)
        at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:684)
        at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
2014-06-04 15:15:36,213 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit
code from container container_1401865179784_0005_01_000003 is : 143
2014-06-04 15:15:36,213 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
Container container_1401865179784_0005_01_000003 succeeded


The following is from AM container log:
2014-06-04 15:15:44,871 INFO [AsyncDispatcher event handler]
org.apache.tez.dag.app.DAGAppMaster: Waiting for next DAG to be submitted.
2014-06-04 15:15:44,941 INFO [IPC Server handler 0 on 3174]
org.apache.tez.dag.app.DAGAppMaster: Received message to shutdown AM
2014-06-04 15:15:44,942 INFO [IPC Server handler 0 on 3174]
org.apache.tez.dag.app.rm.TaskSchedulerEventHandler: TaskScheduler notified
that it should unregister from RM
2014-06-04 15:15:44,942 INFO [IPC Server handler 0 on 3174]
org.apache.tez.dag.app.DAGAppMaster: No current running DAG, shutting down
the AM
2014-06-04 15:15:44,942 INFO [IPC Server handler 0 on 3174]
org.apache.tez.dag.app.DAGAppMaster: Handling DAGAppMaster shutdown
2014-06-04 15:15:45,504 INFO [AMRM Callback Handler Thread]
org.apache.tez.dag.app.rm.TaskScheduler: App total resource memory: 0 cpu:
0 taskAllocations: 0
2014-06-04 15:15:46,506 INFO [AMRM Callback Handler Thread]
org.apache.tez.dag.app.rm.TaskScheduler: App total resource memory: 0 cpu:
0 taskAllocations: 0
2014-06-04 15:15:47,509 INFO [AMRM Callback Handler Thread]
org.apache.tez.dag.app.rm.TaskScheduler: App total resource memory: 0 cpu:
0 taskAllocations: 0
2014-06-04 15:15:48,511 INFO [AMRM Callback Handler Thread]
org.apache.tez.dag.app.rm.TaskScheduler: App total resource memory: 0 cpu:
0 taskAllocations: 0
2014-06-04 15:15:49,514 INFO [AMRM Callback Handler Thread]
org.apache.tez.dag.app.rm.TaskScheduler: App total resource memory: 0 cpu:
0 taskAllocations: 0
2014-06-04 15:15:49,944 INFO [AMShutdownThread]
org.apache.tez.dag.app.DAGAppMaster: Calling stop for all the services
2014-06-04 15:15:49,945 INFO [AMShutdownThread]
org.apache.tez.dag.history.HistoryEventHandler: Stopping HistoryEventHandler
2014-06-04 15:15:49,945 INFO [AMShutdownThread]
org.apache.tez.dag.history.recovery.RecoveryService: Stopping
RecoveryService
2014-06-04 15:15:49,945 INFO [AMShutdownThread]
org.apache.tez.dag.history.recovery.RecoveryService: Closing Summary Stream
2014-06-04 15:15:49,945 INFO [RecoveryEventHandlingThread]
org.apache.tez.dag.history.recovery.RecoveryService: EventQueue take
interrupted. Returning
2014-06-04 15:15:49,969 WARN [AMShutdownThread]
org.apache.tez.dag.history.recovery.RecoveryService: Error when closing
summary stream
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on
/data/hadoop/data/tez/staging/application_1401865179784_0005/application_1401865179784_0005/recovery/1/application_1401865179784_0005.summary
(inode 18085): File does not exist. Holder
DFSClient_NONMAPREDUCE_-1751193571_1 does not have any open files.
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2973)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3053)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3023)
at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:649)
at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:486)
at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1565)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.complete(Unknown Source)
at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:407)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy15.complete(Unknown Source)
at
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2135)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2119)
at
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
at
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
at
org.apache.tez.dag.history.recovery.RecoveryService.serviceStop(RecoveryService.java:160)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at
org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
at
org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
at
org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
at
org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
at
org.apache.tez.dag.history.HistoryEventHandler.serviceStop(HistoryEventHandler.java:80)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at
org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
at
org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
at org.apache.tez.dag.app.DAGAppMaster.stopServices(DAGAppMaster.java:1518)
at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:1649)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at
org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHandler$AMShutdownRunnable.run(DAGAppMaster.java:607)
at java.lang.Thread.run(Thread.java:744)
2014-06-04 15:15:49,973 INFO [AMShutdownThread]
org.apache.tez.dag.history.logging.impl.SimpleHistoryLoggingService:
Stopping SimpleHistoryLoggingService, eventQueueBacklog=0
2014-06-04 15:15:49,973 INFO [HistoryEventHandlingThread]
org.apache.tez.dag.history.logging.impl.SimpleHistoryLoggingService:
EventQueue take interrupted. Returning
2014-06-04 15:15:49,974 INFO [DelayedContainerManager]
org.apache.tez.dag.app.rm.TaskScheduler: AllocatedContainerManager Thread
interrupted
2014-06-04 15:15:49,978 INFO [AMShutdownThread]
org.apache.tez.dag.app.rm.TaskScheduler: Unregistering application from RM,
exitStatus=SUCCEEDED, exitMessage=Session stats:submittedDAGs=1,
successfulDAGs=0, failedDAGs=1, killedDAGs=0
, trackingURL=
2014-06-04 15:15:49,984 INFO [AMShutdownThread]
org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl: Waiting for
application to be successfully unregistered.
2014-06-04 15:15:50,087 INFO [AMRM Callback Handler Thread]
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl:
Interrupted while waiting for queue
java.lang.InterruptedException
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:275)
2014-06-04 15:15:50,087 INFO [AMShutdownThread]
org.apache.hadoop.ipc.Server: Stopping server on 55800
2014-06-04 15:15:50,088 INFO [IPC Server listener on 55800]
org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 55800
2014-06-04 15:15:50,088 INFO [AMShutdownThread]
org.apache.hadoop.ipc.Server: Stopping server on 3174
2014-06-04 15:15:50,088 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2014-06-04 15:15:50,088 INFO [IPC Server listener on 3174]
org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 3174
2014-06-04 15:15:50,088 INFO [AMShutdownThread]
org.apache.tez.dag.app.DAGAppMaster: Exiting DAGAppMaster..GoodBye!






On Wed, Jun 4, 2014 at 1:18 PM, Bikas Saha <bi...@hortonworks.com> wrote:


RE: Tez Cannot find Hive UDF jars

Posted by Bikas Saha <bi...@hortonworks.com>.
Can you please double-check that you have followed all the instructions at

http://tez.incubator.apache.org/install.html

and then try to run a sample Tez job? If that passes, then it may be an
issue in the Hive configuration.



Bikas



*From:* Azuryy Yu [mailto:azuryyyu@gmail.com]
*Sent:* Tuesday, June 03, 2014 8:54 PM
*To:* user@tez.incubator.apache.org
*Subject:* Tez Cannot find Hive UDF jars



Hi,

I am using Hadoop-2.4.0 and tez-0.5-snapshot, hive-0.13.0



fistly, I add one jar under hive shell using : add jar test.jar;



I wrote some UDF functioins in test.jar.



Hive job can success if I set hive.execution.engine=mr, but if I set it to
'tez', and

mapreduce.framework.name=yarn-tez, It always fail. I looked through the
container log: It throws ClassNotFoundExcetion for classes in test.jar.



what addtional configuration I need to do? Thanks.

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Tez Cannot find Hive UDF jars

Posted by Azuryy <az...@gmail.com>.
Hi Hitesh,

I fixed it; it was due to an incorrect configuration on my side. Sorry for not posting back here.


Sent from my iPhone5s

> On Jun 7, 2014, at 0:59, Hitesh Shah <hi...@apache.org> wrote:

Re: Tez Cannot find Hive UDF jars

Posted by Hitesh Shah <hi...@apache.org>.
Hi

Are you still having issues with the Hive UDF?

A couple of things to try if it's still not working:
    - If you are familiar with YARN, could you look at the
launch_container.sh of a task container and see if test.jar is getting
localized?
    - Can you upload test.jar to the same dir as the Tez jars on HDFS
and re-run the query? This is not a solution, just a way to check whether
there are any other underlying issues with the UDF jar.
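The first check above can be scripted; a small sketch (the helper name is mine, not from the thread) that greps a container launch script for the jar:

```shell
# check_localized: report whether a jar name appears in a YARN container
# launch script. On a real cluster the script lives under one of the
# yarn.nodemanager.local-dirs directories for the failing container.
check_localized() {
  script="$1"   # path to launch_container.sh
  jar="$2"      # jar name to look for, e.g. test.jar
  if grep -q "$jar" "$script" 2>/dev/null; then
    echo "localized"
  else
    echo "missing"
  fi
}
```

If this prints "missing" for test.jar, YARN never distributed the jar to the container's working directory, which would explain the ClassNotFoundException.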

thanks
-- Hitesh


On Jun 3, 2014, at 8:53 PM, Azuryy Yu <az...@gmail.com> wrote:
