Posted to user@hive.apache.org by Leena Gupta <gu...@gmail.com> on 2014/05/15 20:07:42 UTC

Hive UDF error

Hi,

I'm trying to create a function that generates a UUID; I want to use it in a
query to insert data into another table.

Here is the function:

package com.udf.example;

import java.util.UUID;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;


@Description(
name = "Uuid",
value = "_FUNC_() - Generate a unique uuid",
extended="Select Uuid from foo limit 1;"
)

class Uuid extends UDF{
  public Text evaluate(){
    return new Text(UUID.randomUUID().toString());
  }
}

I registered it successfully in Hive, but when I try to use it in a query I
get a NullPointerException (see below). The same function returns a UUID when
I run it outside of Hive with a main() method.
Could someone please shed some light on why I'm getting this error?

select entity_volume,Uuid() from test_volume limit 5;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201311092117_0312, Tracking URL = http://bi1-xxx.com:50030/jobdetails.jsp?jobid=job_201311092117_0312
Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=bi1-xxx.com:8021 -kill job_201311092117_0312
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-05-15 10:12:10,825 Stage-1 map = 0%,  reduce = 0%
2014-05-15 10:12:29,916 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201311092117_0312 with errors
Error during job, obtaining debugging information...
Examining task ID: task_201311092117_0312_m_000002 (and more) from job job_201311092117_0312
Exception in thread "Thread-23" java.lang.NullPointerException
    at org.apache.hadoop.hive.shims.Hadoop23Shims.getTaskAttemptLogUrl(Hadoop23Shims.java:44)
    at org.apache.hadoop.hive.ql.exec.JobDebugger$TaskInfoGrabber.getTaskInfos(JobDebugger.java:186)
    at org.apache.hadoop.hive.ql.exec.JobDebugger$TaskInfoGrabber.run(JobDebugger.java:142)
    at java.lang.Thread.run(Thread.java:745)
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL

Thanks!

Re: Hive UDF error

Posted by Edward Capriolo <ed...@gmail.com>.
Try making the class public:
public class Uuid extends UDF{
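
For reference, here is what the full class would look like with that one change applied. This is just a sketch that keeps your original package, annotation, and evaluate() signature; Hive instantiates UDF classes and resolves their evaluate() methods by reflection, so both generally need to be public.

package com.udf.example;

import java.util.UUID;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

@Description(
    name = "Uuid",
    value = "_FUNC_() - Generate a unique uuid",
    extended = "Select Uuid from foo limit 1;"
)
public class Uuid extends UDF {  // public so Hive can instantiate it by reflection
  // Zero-argument evaluate(): Hive calls this once per input row; Text maps to Hive's STRING type.
  public Text evaluate() {
    return new Text(UUID.randomUUID().toString());
  }
}

After rebuilding the jar, you would re-add it and re-register the function in your session (ADD JAR and CREATE TEMPORARY FUNCTION) before running the query again.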



Re: Hive UDF error

Posted by Jason Dere <jd...@hortonworks.com>.
What version of Hive are you running?
The error you're seeing looks like it comes from Hive trying to retrieve the error message from the task logs, so it may not be related to the actual failure. You might want to check the logs of the Hadoop task that ran as part of this query to see if they have any other messages.

