Posted to user@spark.apache.org by "Dai, Kevin" <yu...@ebay.com> on 2014/12/23 09:42:40 UTC
Spark SQL job blocks when using Hive UDF from_unixtime
Hi, there
When I use the Hive UDF from_unixtime with HiveContext, the job blocks, and the thread dump is as follows:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)
But if I replace it with my own UDF, it works.
My data comes from HBase.
Whether I cache the data in memory or save it as a Parquet file and load it later, the job still blocks.
How can I fix it?
Thanks,
Kevin.
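[Editor's note: for readers unfamiliar with the UDF in question, the conversion that Hive's from_unixtime performs can be sketched in plain Python. This is only an illustration of the semantics, not Spark code; note that Hive formats in the session time zone using Java's "yyyy-MM-dd HH:mm:ss" pattern by default, whereas this sketch pins the zone to UTC for determinism.]

```python
from datetime import datetime, timezone

def from_unixtime(ts, fmt="%Y-%m-%d %H:%M:%S"):
    # Mimics Hive's from_unixtime(bigint [, pattern]) semantics:
    # seconds since the epoch -> formatted timestamp string.
    # Hive uses the session time zone; UTC is fixed here for illustration.
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime(fmt)

print(from_unixtime(0))      # epoch start
print(from_unixtime(86399))  # one second before the end of the first day
```

A custom replacement UDF like the one Kevin mentions would compute the same mapping but would be registered with the SQLContext rather than resolved through the Hive function registry.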
Re: Spark SQL job blocks when using Hive UDF from_unixtime
Posted by Cheng Lian <li...@gmail.com>.
Could you please provide a complete stack trace? It would also be good if
you could share your hive-site.xml.
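[Editor's note: the org.apache.zookeeper.ClientCnxn$EventThread frame in the trace suggests that Hive's ZooKeeper-based lock manager may be active, which is one plausible reason hive-site.xml is being requested. A hypothetical hive-site.xml fragment that would enable that lock manager looks like the following; whether these settings are actually present in Kevin's configuration is unknown.]

```xml
<!-- Hypothetical hive-site.xml fragment: ZooKeeper-backed concurrency
     control in Hive. If the quorum is unreachable or locks are held,
     queries can block while waiting on ZooKeeper. -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.lock.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk-host-1,zk-host-2,zk-host-3</value>
</property>
```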