Posted to user@hive.apache.org by Aniket Mokashi <an...@gmail.com> on 2012/01/06 18:14:55 UTC

Re: Error in running hive query

[moving to user@hive]

Can you send us the details from task logs?

Thanks,
Aniket

On Fri, Jan 6, 2012 at 2:53 AM, Bhavesh Shah <bh...@gmail.com> wrote:

> Hello,
>
> hive> FROM (
>    > FROM subset
>    > MAP subset.patient_mrn, subset.encounter_date
>    > USING 'q1.txt'
>    > AS mp1, mp2
>    > CLUSTER BY mp1) map_output
>    > INSERT OVERWRITE TABLE t3
>    > REDUCE map_output.mp1
>    > USING 'retrieve'
>    > AS reducef1;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks not specified. Estimated from input data size: 1
> In order to change the average load for a reducer (in bytes):
>  set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>  set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>  set mapred.reduce.tasks=<number>
> Starting Job = job_201112281627_0100, Tracking URL =
> http://localhost:50030/jobdetails.jsp?jobid=job_201112281627_0100
> Kill Command = /home/hadoop/hadoop-0.20.2-cdh3u2//bin/hadoop job
> -Dmapred.job.tracker=localhost:54311 -kill job_201112281627_0100
> 2011-12-31 04:34:52,208 Stage-1 map = 0%,  reduce = 0%
> 2011-12-31 04:35:52,939 Stage-1 map = 0%,  reduce = 0%
> 2011-12-31 04:36:34,097 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201112281627_0100 with errors
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.MapRedTask
> hive>
>
> In 'q1.txt' I have written a Hive query which returns 2 columns,
> and
> in 'retrieve' I have written Java code which takes 2 input columns and
> displays 1 column.
>
> Is there any mistake in the query?
>
>
> Please suggest a solution.
>
>
> --
> Regards,
> Bhavesh Shah
>
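For context on the query quoted above: Hive runs whatever is named after
USING as an external program, writes the selected columns to its stdin as
tab-separated, newline-terminated text, and reads tab-separated rows back
from its stdout. A minimal sketch of a reducer in that style (hypothetical
class name; it assumes the single column passed by REDUCE map_output.mp1
and emits one column per row, which Hive reads back as reducef1):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Retrieve {
    public static void main(String[] args) throws Exception {
        // Hive's MAP/REDUCE ... USING contract: one input row per line on
        // stdin, columns separated by tabs; output rows go to stdout in the
        // same format.
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            // The REDUCE clause above passes a single column, so each line
            // holds one value; split defensively in case more columns are
            // ever passed through.
            String[] cols = line.split("\t", -1);
            // Emit one output column per row.
            System.out.println(cols[0]);
        }
    }
}

Whatever the transform program is, it also has to be shipped to the task
nodes (e.g. with ADD FILE) and be runnable there; a script that cannot be
started or that exits non-zero usually surfaces as exactly this generic
"return code 2" failure, with the real cause visible only in the per-task
logs.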



-- 
"...:::Aniket:::... Quetzalco@tl"

Re: Error in running hive query

Posted by Bhavesh Shah <bh...@gmail.com>.
Hello,
These are my logs from running the Hive query.

2011-12-31 04:34:17,604 WARN  mapred.JobClient
(JobClient.java:copyAndConfigureFiles(649)) - Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
2011-12-31 04:36:35,357 ERROR exec.MapRedTask
(SessionState.java:printError(343)) - Ended Job = job_201112281627_0100
with errors
2011-12-31 04:36:38,112 ERROR ql.Driver (SessionState.java:printError(343))
- FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
2011-12-31 05:37:34,750 WARN  mapred.JobClient
(JobClient.java:copyAndConfigureFiles(649)) - Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
2011-12-31 05:42:58,176 ERROR exec.MapRedTask
(SessionState.java:printError(343)) - Ended Job = job_201112281627_0101
with errors
2011-12-31 05:43:16,748 ERROR ql.Driver (SessionState.java:printError(343))
- FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
2011-12-31 06:40:31,764 WARN  mapred.JobClient
(JobClient.java:copyAndConfigureFiles(649)) - Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
2011-12-31 06:55:22,527 WARN  mapred.JobClient
(JobClient.java:copyAndConfigureFiles(649)) - Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
2011-12-31 07:00:08,695 ERROR exec.MapRedTask
(SessionState.java:printError(343)) - Ended Job = job_201112281627_0103
with errors
2011-12-31 07:00:12,374 ERROR ql.Driver (SessionState.java:printError(343))
- FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
2011-12-31 07:08:21,565 WARN  mapred.JobClient
(JobClient.java:copyAndConfigureFiles(649)) - Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
2011-12-31 07:10:23,405 ERROR exec.MapRedTask
(SessionState.java:printError(343)) - Ended Job = job_201112281627_0104
with errors
2011-12-31 07:10:25,006 ERROR ql.Driver (SessionState.java:printError(343))
- FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
2011-12-31 07:17:48,490 WARN  mapred.JobClient
(JobClient.java:copyAndConfigureFiles(649)) - Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.


Please tell me what to do to remove this error.


-- 
Regards,
Bhavesh Shah



On Fri, Jan 6, 2012 at 10:44 PM, Aniket Mokashi <an...@gmail.com> wrote:

> [moving to user@hive]
>
> Can you send us the details from task logs?
>
> Thanks,
> Aniket
>
> On Fri, Jan 6, 2012 at 2:53 AM, Bhavesh Shah <bhavesh25shah@gmail.com> wrote:
>
> > Hello,
> >
> > hive> FROM (
> >    > FROM subset
> >    > MAP subset.patient_mrn, subset.encounter_date
> >    > USING 'q1.txt'
> >    > AS mp1, mp2
> >    > CLUSTER BY mp1) map_output
> >    > INSERT OVERWRITE TABLE t3
> >    > REDUCE map_output.mp1
> >    > USING 'retrieve'
> >    > AS reducef1;
> > Total MapReduce jobs = 1
> > Launching Job 1 out of 1
> > Number of reduce tasks not specified. Estimated from input data size: 1
> > In order to change the average load for a reducer (in bytes):
> >  set hive.exec.reducers.bytes.per.reducer=<number>
> > In order to limit the maximum number of reducers:
> >  set hive.exec.reducers.max=<number>
> > In order to set a constant number of reducers:
> >  set mapred.reduce.tasks=<number>
> > Starting Job = job_201112281627_0100, Tracking URL =
> > http://localhost:50030/jobdetails.jsp?jobid=job_201112281627_0100
> > Kill Command = /home/hadoop/hadoop-0.20.2-cdh3u2//bin/hadoop job
> > -Dmapred.job.tracker=localhost:54311 -kill job_201112281627_0100
> > 2011-12-31 04:34:52,208 Stage-1 map = 0%,  reduce = 0%
> > 2011-12-31 04:35:52,939 Stage-1 map = 0%,  reduce = 0%
> > 2011-12-31 04:36:34,097 Stage-1 map = 100%,  reduce = 100%
> > Ended Job = job_201112281627_0100 with errors
> > FAILED: Execution Error, return code 2 from
> > org.apache.hadoop.hive.ql.exec.MapRedTask
> > hive>
> >
> > In 'q1.txt' I have written a Hive query which returns 2 columns,
> > and
> > in 'retrieve' I have written Java code which takes 2 input columns and
> > displays 1 column.
> >
> > Is there any mistake in the query?
> >
> >
> > Please suggest a solution.
> >
> >
> > --
> > Regards,
> > Bhavesh Shah
> >
>
>
>
> --
> "...:::Aniket:::... Quetzalco@tl"
>

RE: Error in running hive query

Posted by Ia...@barclayscapital.com.
I don't appear to be getting any task logs! The client doesn't even seem able to talk to the JobTracker to create a job in the first place. The full trace I get is listed again below. Being new to Hive, I don't know how to configure a Hive log directory (see the note after the trace below).

Ian

hive> select count(9) from currency_dim;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
java.io.IOException: Call to ldndsr36257/10.65.31.71:50030 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1065)
        at org.apache.hadoop.ipc.Client.call(Client.java:1033)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
        at org.apache.hadoop.mapred.$Proxy8.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)
        at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:460)
        at org.apache.hadoop.mapred.JobClient.init(JobClient.java:454)
        at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:437)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
        at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:774)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:712)
Job Submission failed with exception 'java.io.IOException(Call to ldndsr36257/10.65.31.71:50030 failed on local exception: java.io.EOFException)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
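
A note on the log directory question above: the Hive CLI writes its own log
through log4j, by default to /tmp/<user>/hive.log; the location is set by
hive.log.dir in conf/hive-log4j.properties and can also be overridden per
session (the path below is only an example):

  hive -hiveconf hive.log.dir=/tmp/hive-logs -hiveconf hive.root.logger=INFO,DRFA

or, to see everything on the console while debugging:

  hive -hiveconf hive.root.logger=DEBUG,console

That log is separate from the per-task MapReduce logs asked for earlier in
the thread, which (once a job actually launches) are reachable from the
JobTracker web UI on port 50030 or under the TaskTrackers' userlogs
directories.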

From: Aniket Mokashi [mailto:aniket486@gmail.com]
Sent: Friday, January 06, 2012 5:15 PM
To: user@hive.apache.org
Subject: Re: Error in running hive query

[moving to user@hive]

Can you send us the details from task logs?

Thanks,
Aniket
On Fri, Jan 6, 2012 at 2:53 AM, Bhavesh Shah <bh...@gmail.com> wrote:
Hello,

hive> FROM (
   > FROM subset
   > MAP subset.patient_mrn, subset.encounter_date
   > USING 'q1.txt'
   > AS mp1, mp2
   > CLUSTER BY mp1) map_output
   > INSERT OVERWRITE TABLE t3
   > REDUCE map_output.mp1
   > USING 'retrieve'
   > AS reducef1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
 set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
 set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
 set mapred.reduce.tasks=<number>
Starting Job = job_201112281627_0100, Tracking URL =
http://localhost:50030/jobdetails.jsp?jobid=job_201112281627_0100
Kill Command = /home/hadoop/hadoop-0.20.2-cdh3u2//bin/hadoop job
-Dmapred.job.tracker=localhost:54311 -kill job_201112281627_0100
2011-12-31 04:34:52,208 Stage-1 map = 0%,  reduce = 0%
2011-12-31 04:35:52,939 Stage-1 map = 0%,  reduce = 0%
2011-12-31 04:36:34,097 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201112281627_0100 with errors
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
hive>

In 'q1.txt' I have written a Hive query which returns 2 columns,
and
in 'retrieve' I have written Java code which takes 2 input columns and
displays 1 column.

Is there any mistake in the query?


Please suggest a solution.


--
Regards,
Bhavesh Shah



--
"...:::Aniket:::... Quetzalco@tl"
