Posted to user@hadoop.apache.org by EdwardKing <zh...@neusoft.com> on 2014/04/17 05:06:30 UTC
question about hive under hadoop
I use hive-0.11.0 under hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
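Note that the data file loaded below is tab-separated (ufo.tsv), while Hive's default field delimiter is '\001', not tab. A table definition along the following lines is often used for TSV data; this is a hedged variant for illustration, not taken from the original post:

```sql
CREATE TABLE ufodata(sighted STRING, reported STRING,
  sighting_location STRING, shape STRING, duration STRING,
  description STRING COMMENT 'Free text description')
COMMENT 'The UFO data set.'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```

Without the ROW FORMAT clause, each whole line of a TSV file typically lands in the first column.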
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
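A quick sanity check after LOAD DATA would be to inspect a few rows before running the full count; this is an illustrative sketch, not part of the original session:

```sql
-- Fetch-only query; does not launch a MapReduce job:
SELECT * FROM ufodata LIMIT 5;
```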
Then I want to count the rows in table ufodata, as follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
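The hints printed above can be applied directly at the hive> prompt. A minimal sketch, with an illustrative value (not taken from the original session):

```sql
-- Force a single reducer for the count, per the console hints above.
-- mapred.reduce.tasks is the deprecated name of mapreduce.job.reduces.
set mapred.reduce.tasks=1;
select count(*) from ufodata;
```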
I have two questions:
1. Why did the above command fail? Where did it go wrong, and how do I solve it?
2. I quit hive and reboot the computer:
hive> quit;
$ reboot
Then I run the following command in hive:
hive> describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by this. How can I resolve these two questions?
Thanks
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------
Re: question about hive under hadoop
Posted by Shengjun Xin <sx...@gopivotal.com>.
Maybe /tmp/$username/hive.log; you can check the parameter
'hive.log.dir' in hive-log4j.properties.
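For reference, the relevant entries in hive-log4j.properties usually look like the following; this is a sketch of common defaults, not an excerpt from the poster's installation:

```properties
# Common defaults in hive-log4j.properties (values vary per install):
hive.log.dir=/tmp/${user.name}
hive.log.file=hive.log
```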
On Thu, Apr 17, 2014 at 1:18 PM, EdwardKing <zh...@neusoft.com> wrote:
> Where is hive.log? Thanks.
>
> ----- Original Message -----
> *From:* Shengjun Xin <sx...@gopivotal.com>
> *To:* user@hadoop.apache.org
> *Sent:* Thursday, April 17, 2014 12:42 PM
> *Subject:* Re: question about hive under hadoop
>
> For the first problem, you need to check the hive.log for the details
>
>
> On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
Where is hive.log? Thanks.
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
I use hive-0.11.0 under hadoop 2.2.0, like follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
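[Editorial aside, not from the original thread: the CREATE TABLE above declares no row format, so Hive falls back to its default Ctrl-A ('\001') field delimiter, and a tab-separated ufo.tsv would land entirely in the first column at read time. For a .tsv source, a declaration along these lines would usually be needed — a sketch only, untested against this data set:]

```sql
-- Sketch: declare the tab delimiter explicitly for a tab-separated source file.
CREATE TABLE ufodata(
  sighted STRING, reported STRING,
  sighting_location STRING, shape STRING, duration STRING,
  description STRING COMMENT 'Free text description')
COMMENT 'The UFO data set.'
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
```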
Then I want to count the rows in table ufodata, as follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
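[Editorial aside: the three reducer hints printed above can be applied from the CLI before rerunning the query. The values below are illustrative assumptions, not taken from the thread; mapreduce.job.reduces is the non-deprecated spelling of mapred.reduce.tasks per the warnings earlier in the session:]

```sql
-- Illustrative values only (assumptions, not from the thread).
-- Average bytes handled per reducer:
SET hive.exec.reducers.bytes.per.reducer=256000000;
-- Upper bound on the number of reducers:
SET hive.exec.reducers.max=4;
-- Or pin an exact reducer count (non-deprecated name of mapred.reduce.tasks):
SET mapreduce.job.reduces=1;
SELECT count(*) FROM ufodata;
```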
I have two questions:
1. Why did the above command fail? Where is it wrong, and how do I solve it?
2. I quit hive and reboot the computer with the following commands:
hive>quit;
$reboot
Then I run the following command in hive:
hive>describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by it. How do I resolve the above two questions?
Thanks
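[Editorial aside on question 2, an assumption not confirmed in this thread: with no hive-site.xml on the CLASSPATH, Hive 0.11 uses an embedded Derby metastore that creates its metastore_db directory under whatever directory hive was started from. Starting hive from a different directory after the reboot would therefore yield a fresh, empty metastore — hence "Table not found". One sketch of pinning the metastore to a fixed path in hive-site.xml (the path below is an example, not from the thread):]

```xml
<!-- Sketch (assumption): pin the embedded Derby metastore to an absolute
     path so it does not depend on hive's working directory. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hadoop/metastore_db;create=true</value>
</property>
```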
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by Shengjun Xin <sx...@gopivotal.com>.
I think you need to check the RM and NM logs to find the detailed error message
On Thu, Apr 17, 2014 at 2:30 PM, EdwardKing <zh...@neusoft.com> wrote:
> *hive.log is as follows:*
>
> 2014-04-16 23:11:59,214 WARN common.LogUtils
> (LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on
> CLASSPATH
> 2014-04-16 23:11:59,348 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214: an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:11:59,350 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214: an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> 2014-04-16 23:11:59,902 WARN util.NativeCodeLoader
> (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library
> for your platform... using builtin-java classes where applicable
> 2014-04-16 23:12:00,035 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt
> to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:12:00,037 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt
> to override final parameter: mapreduce.job.end-notification.max.attempts;
> Ignoring.
> 2014-04-16 23:13:06,780 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad: an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:13:06,783 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad: an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> 2014-04-16 23:13:13,941 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6: an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:13:13,944 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6: an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> 2014-04-16 23:14:13,847 WARN mapreduce.JobSubmitter
> (JobSubmitter.java:copyAndConfigureFiles(149)) - Hadoop command-line option
> parsing not performed. Implement the Tool interface and execute your
> application with ToolRunner to remedy this.
> *hive_job_log_hadoop_11878@master_201404162311_2146666819.txt* is as follows:
>
> SessionStart SESSION_ID="hadoop_11878@master_201404162311"
> TIME="1397715119488"
> QueryStart QUERY_STRING="describe ufodata"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> TIME="1397715193449"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}"
> TIME="1397715193504"
> TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask"
> TASK_ID="Stage-0"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> TIME="1397715193589"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"true"}],"done":"false","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715193595"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715193743"
> TaskEnd TASK_RET_CODE="0"
> TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> TIME="1397715193744"
> QueryEnd QUERY_STRING="describe ufodata"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> QUERY_RET_CODE="0" QUERY_NUM_TASKS="0" TIME="1397715193744"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"true","started":"true"}"
> TIME="1397715193744"
> QueryStart QUERY_STRING="select count(*) from ufodata"
> QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c"
> TIME="1397715250446"
> Counters
> plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select
> count(*) from
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}"
> TIME="1397715250460"
> TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.MapRedTask"
> TASK_ID="Stage-1"
> QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c"
> TIME="1397715250461"
> Counters
> plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select
> count(*) from
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715250471"
>
> What should I do now?
>
> ----- Original Message -----
> *From:* Shengjun Xin <sx...@gopivotal.com>
> *To:* user@hadoop.apache.org
> *Sent:* Thursday, April 17, 2014 12:42 PM
> *Subject:* Re: question about hive under hadoop
>
> For the first problem, you need to check the hive.log for the details
>
>
> On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I use hive-0.11.0 under hadoop 2.2.0, as follows:
>> [hadoop@node1 software]$ hive
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.input.dir.recursive is deprecated. Instead, use
>> mapreduce.input.fileinputformat.input.dir.recursive
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.rack is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.rack
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.node is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.node
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is
>> deprecated. Instead, use mapreduce.job.reduces
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.reduce.speculative
>> 14/04/16 19:11:03 WARN conf.Configuration:
>> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9: an attempt to override final parameter:
>> mapreduce.job.end-notification.max.retry.interval; Ignoring.
>> 14/04/16 19:11:03 WARN conf.Configuration:
>> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9: an attempt to override final parameter:
>> mapreduce.job.end-notification.max.attempts; Ignoring.
>> Logging initialized using configuration in
>> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
>> Hive history
>> file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>
>>
>> Then I create a table named ufodata, as follows:
>> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
>> > sighting_location STRING,shape STRING, duration STRING,
>> > description STRING COMMENT 'Free text description')
>> > COMMENT 'The UFO data set.' ;
>> OK
>> Time taken: 1.588 seconds
>> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
>> Loading data to table default.ufodata
>> rmr: DEPRECATED: Please use 'rm -r' instead.
>> Deleted /user/hive/warehouse/ufodata
>> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows:
>> 0, total_size: 75342464, raw_data_size: 0]
>> OK
>> Time taken: 1.483 seconds
>>
>> Then I want to count the rows in table ufodata, as follows:
>>
>> hive> select count(*) from ufodata;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>> set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>> set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>> set mapred.reduce.tasks=<number>
>> Starting Job = job_1397699833108_0002, Tracking URL =
>> http://master:8088/proxy/application_1397699833108_0002/
>> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill
>> job_1397699833108_0002
>>
>> I have two questions:
>> 1. Why did the above command fail? Where is it wrong, and how do I solve it?
>> 2. I quit hive and reboot the computer with the following commands:
>> hive>quit;
>> $reboot
>>
>> Then I run the following command in hive:
>> hive>describe ufodata;
>> Table not found 'ufodata'
>>
>> Where is my table? I am puzzled by it. How do I resolve the above two questions?
>>
>> Thanks
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
> --
> Regards
> Shengjun
>
>
--
Regards
Shengjun
>> Logging initialized using configuration in
>> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
>> Hive history
>> file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>
>>
>> Then I crete a table named ufodata,like follows:
>> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
>> > sighting_location STRING,shape STRING, duration STRING,
>> > description STRING COMMENT 'Free text description')
>> > COMMENT 'The UFO data set.' ;
>> OK
>> Time taken: 1.588 seconds
>> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
>> Loading data to table default.ufodata
>> rmr: DEPRECATED: Please use 'rm -r' instead.
>> Deleted /user/hive/warehouse/ufodata
>> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows:
>> 0, total_size: 75342464, raw_data_size: 0]
>> OK
>> Time taken: 1.483 seconds
>>
>> Then I want to count the table ufodata,like follows:
>>
>> hive> select count(*) from ufodata;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>> set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>> set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>> set mapred.reduce.tasks=<number>
>> Starting Job = job_1397699833108_0002, Tracking URL =
>> http://master:8088/proxy/application_1397699833108_0002/
>> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill
>> job_1397699833108_0002
>>
>> I have two question:
>> 1. Why do above command failed, where is wrong? how to solve it?
>> 2. When I use following command to quit hive,and reboot computer
>> hive>quit;
>> $reboot
>>
>> Then I use following command under hive
>> hive>describe ufodata;
>> Table not found 'ufodata'
>>
>> Where is my table? I am puzzled with it. How to resove above two question?
>>
>> Thanks
>>
>>
>>
>>
>>
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>>
>> ---------------------------------------------------------------------------------------------------
>>
>
>
>
> --
> Regards
> Shengjun
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by Shengjun Xin <sx...@gopivotal.com>.
I think you need to check the RM and NM logs to find the detailed error message.
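Concretely, here is a sketch of how that check might look, assuming the job id `job_1397699833108_0002` from the earlier mail and a standard Hadoop 2.2 layout (the application id is the job id with the `job_` prefix replaced by `application_`):

```shell
# Hedged sketch: inspect the stuck count(*) job on YARN.
APP=application_1397699833108_0002

if command -v yarn >/dev/null 2>&1; then
    # Current state (ACCEPTED/RUNNING/FAILED) plus any diagnostics message.
    yarn application -status "$APP"
    # Aggregated container logs, if log aggregation is enabled.
    yarn logs -applicationId "$APP"
else
    # Fall back to reading the daemon logs on disk.
    echo "yarn CLI not on PATH; inspect the RM/NM logs directly, e.g.:"
    echo "  \$HADOOP_HOME/logs/yarn-*-resourcemanager-*.log"
    echo "  \$HADOOP_HOME/logs/yarn-*-nodemanager-*.log"
fi
```

If the application just sits in ACCEPTED, the NodeManager often has too little memory configured for the AM container. As for the second question: hive.log below starts with "hive-site.xml not found on CLASSPATH", so Hive 0.11 is presumably using an embedded Derby metastore created in whatever directory hive was started from (`metastore_db`). Starting hive from a different directory after the reboot then sees an empty metastore, hence "Table not found". Adding a hive-site.xml with an absolute `javax.jdo.option.ConnectionURL` (or a shared metastore service) should keep tables visible across sessions.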
On Thu, Apr 17, 2014 at 2:30 PM, EdwardKing <zh...@neusoft.com> wrote:
> *hive.log is as follows:*
>
> 2014-04-16 23:11:59,214 WARN common.LogUtils
> (LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on
> CLASSPATH
> 2014-04-16 23:11:59,348 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:11:59,350 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> 2014-04-16 23:11:59,902 WARN util.NativeCodeLoader
> (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library
> for your platform... using builtin-java classes where applicable
> 2014-04-16 23:12:00,035 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt
> to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:12:00,037 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt
> to override final parameter: mapreduce.job.end-notification.max.attempts;
> Ignoring.
> 2014-04-16 23:13:06,780 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:13:06,783 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> 2014-04-16 23:13:13,941 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2014-04-16 23:13:13,944 WARN conf.Configuration
> (Configuration.java:loadProperty(2172)) -
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> 2014-04-16 23:14:13,847 WARN mapreduce.JobSubmitter
> (JobSubmitter.java:copyAndConfigureFiles(149)) - Hadoop command-line option
> parsing not performed. Implement the Tool interface and execute your
> application with ToolRunner to remedy this.
> *hive_job_log_hadoop_11878@master_201404162311_2146666819.txt* is as follows:
>
> SessionStart SESSION_ID="hadoop_11878@master_201404162311"
> TIME="1397715119488"
> QueryStart QUERY_STRING="describe ufodata"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> TIME="1397715193449"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}"
> TIME="1397715193504"
> TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask"
> TASK_ID="Stage-0"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> TIME="1397715193589"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"true"}],"done":"false","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715193595"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715193743"
> TaskEnd TASK_RET_CODE="0"
> TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> TIME="1397715193744"
> QueryEnd QUERY_STRING="describe ufodata"
> QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0"
> QUERY_RET_CODE="0" QUERY_NUM_TASKS="0" TIME="1397715193744"
> Counters
> plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"true","started":"true"}"
> TIME="1397715193744"
> QueryStart QUERY_STRING="select count(*) from ufodata"
> QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c"
> TIME="1397715250446"
> Counters
> plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select
> count(*) from
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}"
> TIME="1397715250460"
> TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.MapRedTask"
> TASK_ID="Stage-1"
> QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c"
> TIME="1397715250461"
> Counters
> plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select
> count(*) from
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715250471"
>
> What should I do now?
>
> ----- Original Message -----
> *From:* Shengjun Xin <sx...@gopivotal.com>
> *To:* user@hadoop.apache.org
> *Sent:* Thursday, April 17, 2014 12:42 PM
> *Subject:* Re: question about hive under hadoop
>
> For the first problem, you need to check the hive.log for the details
>
>
> On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
>
> >> I use hive-0.11.0 under hadoop 2.2.0, as follows:
>> [hadoop@node1 software]$ hive
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.input.dir.recursive is deprecated. Instead, use
>> mapreduce.input.fileinputformat.input.dir.recursive
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.rack is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.rack
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.node is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.node
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is
>> deprecated. Instead, use mapreduce.job.reduces
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.reduce.speculative
>> 14/04/16 19:11:03 WARN conf.Configuration:
> >> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter:
>> mapreduce.job.end-notification.max.retry.interval; Ignoring.
>> 14/04/16 19:11:03 WARN conf.Configuration:
> >> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter:
>> mapreduce.job.end-notification.max.attempts; Ignoring.
>> Logging initialized using configuration in
>> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
>> Hive history
>> file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>
>>
> >> Then I create a table named ufodata, as follows:
>> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
>> > sighting_location STRING,shape STRING, duration STRING,
>> > description STRING COMMENT 'Free text description')
>> > COMMENT 'The UFO data set.' ;
>> OK
>> Time taken: 1.588 seconds
>> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
>> Loading data to table default.ufodata
>> rmr: DEPRECATED: Please use 'rm -r' instead.
>> Deleted /user/hive/warehouse/ufodata
>> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows:
>> 0, total_size: 75342464, raw_data_size: 0]
>> OK
>> Time taken: 1.483 seconds
>>
> >> Then I want to count the rows in table ufodata, as follows:
>>
>> hive> select count(*) from ufodata;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>> set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>> set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>> set mapred.reduce.tasks=<number>
>> Starting Job = job_1397699833108_0002, Tracking URL =
>> http://master:8088/proxy/application_1397699833108_0002/
>> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill
>> job_1397699833108_0002
>>
> >> I have two questions:
> >> 1. Why did the above command fail? Where did it go wrong, and how can I solve it?
> >> 2. When I use the following commands to quit hive and reboot the computer:
> >> hive>quit;
> >> $reboot
> >>
> >> and then run the following command in hive:
> >> hive>describe ufodata;
> >> Table not found 'ufodata'
> >>
> >> Where is my table? I am puzzled by this. How can I resolve these two questions?
>>
>> Thanks
>>
>>
>>
>>
>>
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>>
>> ---------------------------------------------------------------------------------------------------
>>
>
>
>
> --
> Regards
> Shengjun
>
>
>
--
Regards
Shengjun
> ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}],"done":"false","started":"true"}"
> TIME="1397715250471"
>
> What should I do now?
>
> ----- Original Message -----
> *From:* Shengjun Xin <sx...@gopivotal.com>
> *To:* user@hadoop.apache.org
> *Sent:* Thursday, April 17, 2014 12:42 PM
> *Subject:* Re: question about hive under hadoop
>
> For the first problem, you need to check the hive.log for the details
>
>
> On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I use hive-0.11.0 under hadoop 2.2.0, like follows:
>> [hadoop@node1 software]$ hive
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.input.dir.recursive is deprecated. Instead, use
>> mapreduce.input.fileinputformat.input.dir.recursive
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.rack is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.rack
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.node is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.node
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is
>> deprecated. Instead, use mapreduce.job.reduces
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.reduce.speculative
>> 14/04/16 19:11:03 WARN conf.Configuration:
>> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter:
>> mapreduce.job.end-notification.max.retry.interval; Ignoring.
>> 14/04/16 19:11:03 WARN conf.Configuration:
>> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter:
>> mapreduce.job.end-notification.max.attempts; Ignoring.
>> Logging initialized using configuration in
>> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
>> Hive history
>> file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>
>>
>> Then I create a table named ufodata, as follows:
>> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
>> > sighting_location STRING,shape STRING, duration STRING,
>> > description STRING COMMENT 'Free text description')
>> > COMMENT 'The UFO data set.' ;
>> OK
>> Time taken: 1.588 seconds
>> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
>> Loading data to table default.ufodata
>> rmr: DEPRECATED: Please use 'rm -r' instead.
>> Deleted /user/hive/warehouse/ufodata
>> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows:
>> 0, total_size: 75342464, raw_data_size: 0]
>> OK
>> Time taken: 1.483 seconds
>>
>> Then I want to count the table ufodata,like follows:
>>
>> hive> select count(*) from ufodata;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>> set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>> set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>> set mapred.reduce.tasks=<number>
>> Starting Job = job_1397699833108_0002, Tracking URL =
>> http://master:8088/proxy/application_1397699833108_0002/
>> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill
>> job_1397699833108_0002
>>
>> I have two questions:
>> 1. Why did the above command fail? What is wrong, and how do I fix it?
>> 2. I quit Hive and reboot the computer with the following commands:
>> hive> quit;
>> $ reboot
>>
>> Then I run the following command in Hive:
>> hive> describe ufodata;
>> Table not found 'ufodata'
>>
>> Where is my table? I am puzzled by this. How do I resolve these two questions?
>>
>> Thanks
>>
>>
>>
>>
>>
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>>
>> ---------------------------------------------------------------------------------------------------
>>
>
>
>
> --
> Regards
> Shengjun
>
>
>
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
hive.log is as follows:
2014-04-16 23:11:59,214 WARN common.LogUtils (LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on CLASSPATH
2014-04-16 23:11:59,348 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:11:59,350 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:11:59,902 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-04-16 23:12:00,035 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:12:00,037 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:13:06,780 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:13:06,783 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:13:13,941 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:13:13,944 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:14:13,847 WARN mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(149)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
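The very first line of hive.log above ("hive-site.xml not found on CLASSPATH") likely explains question 2: with no hive-site.xml on the classpath, Hive falls back to an embedded Derby metastore, and Derby creates its metastore_db directory inside whatever working directory the CLI was launched from. Start hive from a different directory after the reboot and the tables appear to vanish. A minimal hive-site.xml sketch (placed in $HIVE_HOME/conf) that pins the metastore to one absolute path; the path itself is an assumption to adapt:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Pin the embedded Derby metastore to a fixed directory instead of
       the CLI's current working directory (example path; adjust). -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/home/hadoop/hive_metastore_db;create=true</value>
  </property>
</configuration>
```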
hive_job_log_hadoop_11878@master_201404162311_2146666819.txt is as follows:
SessionStart SESSION_ID="hadoop_11878@master_201404162311" TIME="1397715119488"
QueryStart QUERY_STRING="describe ufodata" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193449"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}" TIME="1397715193504"
TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193589"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"true"}],"done":"false","started":"true"}],"done":"false","started":"true"}" TIME="1397715193595"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"false","started":"true"}" TIME="1397715193743"
TaskEnd TASK_RET_CODE="0" TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193744"
QueryEnd QUERY_STRING="describe ufodata" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" QUERY_RET_CODE="0" QUERY_NUM_TASKS="0" TIME="1397715193744"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"true","started":"true"}" TIME="1397715193744"
QueryStart QUERY_STRING="select count(*) from ufodata" QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c" TIME="1397715250446"
Counters plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select count(*) from ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCo
unters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}" TIME="1397715250460"
TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.MapRedTask" TASK_ID="Stage-1" QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c" TIME="1397715250461"
Counters plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select count(*) from ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCo
unters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}],"done":"false","started":"true"}" TIME="1397715250471"
What should I do now?
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
I use hive-0.11.0 under hadoop 2.2.0, like follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
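One thing to note about the CREATE TABLE above: it declares no ROW FORMAT, so Hive falls back to its default field delimiter (Ctrl-A, \001), not tabs, and loading a tab-separated ufo.tsv into such a table typically leaves each entire line in the first column. A sketch of the same table with an explicit tab delimiter (column list copied from the original statement):

```sql
-- Declare tab-separated fields so ufo.tsv parses into separate columns
CREATE TABLE ufodata(
  sighted STRING, reported STRING,
  sighting_location STRING, shape STRING, duration STRING,
  description STRING COMMENT 'Free text description')
COMMENT 'The UFO data set.'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
```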
Then I want to count the table ufodata,like follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
I have two questions:
1. Why did the above command fail? What is wrong, and how do I fix it?
2. I quit Hive and reboot the computer with the following commands:
hive> quit;
$ reboot
Then I run the following command in Hive:
hive> describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by this. How do I resolve these two questions?
Thanks
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
Where is hive.log? Thanks.
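With the hive-log4j.properties that ships in Hive 0.11, hive.log is written under ${hive.log.dir}, which defaults to /tmp/${user.name}. Assuming those defaults have not been overridden, the log path can be derived like this:

```shell
# Default hive-log4j.properties in Hive 0.11 sets:
#   hive.log.dir=/tmp/${user.name}
#   hive.log.file=hive.log
# so with unchanged defaults the session log is:
echo "/tmp/${USER}/hive.log"
# If the defaults were changed, search for it instead:
# find /tmp -name hive.log 2>/dev/null
```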
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
I use hive-0.11.0 under hadoop 2.2.0, like follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
Then I want to count the table ufodata,like follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
I have two questions:
1. Why did the above command fail? What is wrong, and how do I fix it?
2. I quit Hive and reboot the computer with the following commands:
hive> quit;
$ reboot
Then I run the following command in Hive:
hive> describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by this. How do I resolve these two questions?
Thanks
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
Where is hive.log? Thanks.
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
I use hive-0.11.0 under hadoop 2.2.0, like follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
Then I want to count the rows of table ufodata, as follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
I have two questions:
1. Why did the above command fail? Where is it going wrong, and how can I solve it?
2. I quit hive and reboot the computer with the following commands:
hive>quit;
$reboot
Then I run the following command in hive:
hive>describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by it. How can I resolve these two questions?
Thanks
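One plausible explanation for the vanishing table (a sketch, not confirmed by the logs alone): with no hive-site.xml on the CLASSPATH, Hive 0.11 falls back to an embedded Derby metastore whose metastore_db directory is created relative to whatever working directory the hive CLI was started from, so table metadata written in one session can be invisible to a session started elsewhere or after a reboot. A minimal hive-site.xml pinning Derby to an absolute path could rule this out; /home/hadoop/hive_metastore below is an assumed example path:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Pin the embedded Derby metastore to one absolute location so the same
       tables are visible regardless of the directory hive was started from.
       NOTE: /home/hadoop/hive_metastore is an assumed example path. -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/home/hadoop/hive_metastore/metastore_db;create=true</value>
  </property>
</configuration>
```

Placing this file under $HIVE_HOME/conf puts it on the CLI's CLASSPATH at startup.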
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
hive.log is as follows:
2014-04-16 23:11:59,214 WARN common.LogUtils (LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on CLASSPATH
2014-04-16 23:11:59,348 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:11:59,350 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:11:59,902 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-04-16 23:12:00,035 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:12:00,037 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:13:06,780 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:13:06,783 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:13:13,941 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:13:13,944 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:14:13,847 WARN mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(149)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
hive_job_log_hadoop_11878@master_201404162311_2146666819.txt is as follows:
SessionStart SESSION_ID="hadoop_11878@master_201404162311" TIME="1397715119488"
QueryStart QUERY_STRING="describe ufodata" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193449"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}" TIME="1397715193504"
TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193589"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"true"}],"done":"false","started":"true"}],"done":"false","started":"true"}" TIME="1397715193595"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"false","started":"true"}" TIME="1397715193743"
TaskEnd TASK_RET_CODE="0" TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193744"
QueryEnd QUERY_STRING="describe ufodata" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" QUERY_RET_CODE="0" QUERY_NUM_TASKS="0" TIME="1397715193744"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"true","started":"true"}" TIME="1397715193744"
QueryStart QUERY_STRING="select count(*) from ufodata" QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c" TIME="1397715250446"
Counters plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select count(*) from ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCo
unters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}" TIME="1397715250460"
TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.MapRedTask" TASK_ID="Stage-1" QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c" TIME="1397715250461"
Counters plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select count(*) from ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCo
unters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}],"done":"false","started":"true"}" TIME="1397715250471"
What should I do now?
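The very first warning in that hive.log ("hive-site.xml not found on CLASSPATH") is itself a lead: the Hive CLI adds $HIVE_HOME/conf to its CLASSPATH, so a hive-site.xml placed there is picked up at startup. A small sketch, with the install path taken from the session log above:

```shell
# The Hive CLI puts $HIVE_HOME/conf on its CLASSPATH, so a hive-site.xml
# placed there is found at startup. Install path taken from the session log.
HIVE_HOME="/home/software/hive-0.11.0"
echo "place hive-site.xml at: ${HIVE_HOME}/conf/hive-site.xml"
```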
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
I use hive-0.11.0 under hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
Then I want to count the rows of table ufodata, as follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
I have two questions:
1. Why did the above command fail? Where is it going wrong, and how can I solve it?
2. I quit hive and reboot the computer with the following commands:
hive>quit;
$reboot
Then I run the following command in hive:
hive>describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by it. How can I resolve these two questions?
Thanks
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
hive.log is follows:
014-04-16 23:11:59,214 WARN common.LogUtils (LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on CLASSPATH
2014-04-16 23:11:59,348 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:11:59,350 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0214:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:11:59,902 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-04-16 23:12:00,035 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:12:00,037 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@c93402:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:13:06,780 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:13:06,783 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@198c5ad:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:13:13,941 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-04-16 23:13:13,944 WARN conf.Configuration (Configuration.java:loadProperty(2172)) - org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@183f3f6:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-04-16 23:14:13,847 WARN mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(149)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
hive_job_log_hadoop_11878@master_201404162311_2146666819.txt is floows:
SessionStart SESSION_ID="hadoop_11878@master_201404162311" TIME="1397715119488"
QueryStart QUERY_STRING="describe ufodata" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193449"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}" TIME="1397715193504"
TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193589"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"true"}],"done":"false","started":"true"}],"done":"false","started":"true"}" TIME="1397715193595"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"false","started":"true"}" TIME="1397715193743"
TaskEnd TASK_RET_CODE="0" TASK_NAME="org.apache.hadoop.hive.ql.exec.DDLTask" TASK_ID="Stage-0" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" TIME="1397715193744"
QueryEnd QUERY_STRING="describe ufodata" QUERY_ID="hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0" QUERY_RET_CODE="0" QUERY_NUM_TASKS="0" TIME="1397715193744"
Counters plan="{"queryId":"hadoop_20140416231313_f9126905-447c-47e4-819d-5818adc0eda0","queryType":null,"queryAttributes":{"queryString":"describe ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"DDL","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"true","started":"true"}],"done":"true","started":"true"}],"done":"true","started":"true"}" TIME="1397715193744"
QueryStart QUERY_STRING="select count(*) from ufodata" QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c" TIME="1397715250446"
Counters plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select count(*) from ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCo
unters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}" TIME="1397715250460"
TaskStart TASK_NAME="org.apache.hadoop.hive.ql.exec.MapRedTask" TASK_ID="Stage-1" QUERY_ID="hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c" TIME="1397715250461"
Counters plan="{"queryId":"hadoop_20140416231414_e98cb57f-10bc-423b-b8bd-ec554a383f4c","queryType":null,"queryAttributes":{"queryString":"select count(*) from ufodata"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-1","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"null","operatorCo
unters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"true"}],"done":"false","started":"true"}" TIME="1397715250471"
What should I do now?
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
I use hive-0.11.0 under Hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Then I create a table named ufodata, as follows:
hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> sighting_location STRING,shape STRING, duration STRING,
> description STRING COMMENT 'Free text description')
> COMMENT 'The UFO data set.' ;
OK
Time taken: 1.588 seconds
hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
Loading data to table default.ufodata
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/ufodata
Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
OK
Time taken: 1.483 seconds
Then I want to count the rows of table ufodata, as follows:
hive> select count(*) from ufodata;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
I have two questions:
1. Why did the above command fail? Where is it wrong, and how do I solve it?
2. When I quit Hive and reboot the computer with the following commands:
hive>quit;
$reboot
and then run the following command in Hive:
hive>describe ufodata;
Table not found 'ufodata'
Where is my table? I am puzzled by it. How do I resolve these two questions?
Thanks
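One plausible reading of question 2 (hedged, since the original configuration is not shown): hive.log in this thread warns "hive-site.xml not found on CLASSPATH", and without that file Hive 0.11 falls back to an embedded Derby metastore created in whatever directory the CLI was launched from. Starting hive from a different directory after the reboot then sees a fresh, empty metastore, hence "Table not found". A minimal $HIVE_HOME/conf/hive-site.xml pinning the metastore to one absolute path (the path below is illustrative, not taken from this cluster) would be a sketch of a fix:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Pin the embedded Derby metastore to a fixed absolute path so the
       same metastore is found no matter which directory hive is started
       from. The databaseName path here is illustrative. -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/home/hadoop/hive/metastore_db;create=true</value>
  </property>
</configuration>
```

With the metastore location fixed, describe ufodata should resolve the table regardless of the working directory; the data files themselves were already persistent under /user/hive/warehouse/ufodata.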
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by EdwardKing <zh...@neusoft.com>.
Where is hive.log? Thanks.
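Hive 0.11's bundled hive-log4j.properties defaults hive.log.dir to ${java.io.tmpdir}/${user.name}, so for a CLI session run as the hadoop user (as in the transcript above) the log normally lands in /tmp/hadoop. A small sketch of deriving that default path (the tmpdir and user values are assumptions, not read from this cluster):

```shell
# Hive 0.11 default: hive.log.dir = ${java.io.tmpdir}/${user.name}.
# On a typical Linux install java.io.tmpdir is /tmp, and the CLI in the
# transcript runs as the "hadoop" user, so the expected file is:
TMPDIR_DEFAULT=/tmp   # assumed java.io.tmpdir
HIVE_USER=hadoop      # user from the session transcript
echo "${TMPDIR_DEFAULT}/${HIVE_USER}/hive.log"   # prints /tmp/hadoop/hive.log
```

If the file is not there, the configured location can be confirmed in the hive-log4j.properties that the startup banner reports loading.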
----- Original Message -----
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
Re: question about hive under hadoop
Posted by Shengjun Xin <sx...@gopivotal.com>.
For the first problem, you need to check the hive.log for the details
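The file Shengjun refers to is controlled by conf/hive-log4j.properties in the Hive install; the relevant Hive 0.11 defaults are along these lines (quoted from memory, so verify against the shipped file):

```properties
# Relevant defaults from Hive 0.11's conf/hive-log4j.properties (approximate).
# DRFA is the daily rolling file appender; hive.log.dir typically resolves
# to /tmp/<user> on Linux.
hive.root.logger=WARN,DRFA
hive.log.dir=${java.io.tmpdir}/${user.name}
hive.log.file=hive.log
```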
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
> I use hive-0.11.0 under hadoop 2.2.0, like follows:
> [hadoop@node1 software]$ hive
> 14/04/16 19:11:02 INFO Configuration.deprecation:
> mapred.input.dir.recursive is deprecated. Instead, use
> mapreduce.input.fileinputformat.input.dir.recursive
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is
> deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is
> deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
> 14/04/16 19:11:02 INFO Configuration.deprecation:
> mapred.min.split.size.per.rack is deprecated. Instead, use
> mapreduce.input.fileinputformat.split.minsize.per.rack
> 14/04/16 19:11:02 INFO Configuration.deprecation:
> mapred.min.split.size.per.node is deprecated. Instead, use
> mapreduce.input.fileinputformat.split.minsize.per.node
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is
> deprecated. Instead, use mapreduce.job.reduces
> 14/04/16 19:11:02 INFO Configuration.deprecation:
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.reduce.speculative
> 14/04/16 19:11:03 WARN conf.Configuration:
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:anattempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 14/04/16 19:11:03 WARN conf.Configuration:
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9:anattempt to override final parameter:
> mapreduce.job.end-notification.max.attempts; Ignoring.
> Logging initialized using configuration in
> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
>
> Then I crete a table named ufodata,like follows:
> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
> > sighting_location STRING,shape STRING, duration STRING,
> > description STRING COMMENT 'Free text description')
> > COMMENT 'The UFO data set.' ;
> OK
> Time taken: 1.588 seconds
> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
> Loading data to table default.ufodata
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Deleted /user/hive/warehouse/ufodata
> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows:
> 0, total_size: 75342464, raw_data_size: 0]
> OK
> Time taken: 1.483 seconds
>
> Then I want to count the table ufodata,like follows:
>
> hive> select count(*) from ufodata;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapred.reduce.tasks=<number>
> Starting Job = job_1397699833108_0002, Tracking URL =
> http://master:8088/proxy/application_1397699833108_0002/
> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill
> job_1397699833108_0002
>
> I have two questions:
> 1. Why did the above command fail? What is wrong, and how can I fix it?
> 2. I use the following commands to quit hive and reboot the computer:
> hive>quit;
> $reboot
>
> Then I run the following command in hive:
> hive>describe ufodata;
> Table not found 'ufodata'
>
> Where is my table? I am puzzled by this. How can I resolve these two questions?
>
> Thanks
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful. If you have received this
> communication in error, please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
--
Regards
Shengjun
Re: question about hive under hadoop
Posted by Shengjun Xin <sx...@gopivotal.com>.
For the first problem, you need to check the hive.log for the details
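
[Editor's note: for readers following along, Hive 0.11's default hive-log4j.properties writes hive.log under /tmp/<user>/. A quick way to surface the failure is to filter the log for errors and exceptions. A minimal sketch, using a throwaway sample file here; in practice point LOG at /tmp/${USER}/hive.log, or wherever hive.log.dir was overridden:]

```shell
# Demo: filter ERROR/Exception lines from a Hive-style log.
# In a real setup, replace the sample file with /tmp/${USER}/hive.log
# (the Hive 0.11 default location from conf/hive-log4j.properties).
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
2014-04-16 19:11:02,000 INFO  ql.Driver: Starting command
2014-04-16 19:11:05,000 ERROR exec.Task: Job failed with exception
2014-04-16 19:11:05,001 INFO  ql.Driver: Shutting down
EOF

# Print only the lines that indicate a failure.
grep -E "ERROR|Exception" "$LOG"

rm -f "$LOG"
```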
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zh...@neusoft.com> wrote:
--
Regards
Shengjun
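
[Editor's note on the second question: the likely cause is the metastore, not lost data. With no metastore configured, Hive 0.11 uses an embedded Derby database and creates its metastore_db directory inside whatever directory hive was launched from; start hive from a different directory after the reboot and previously created tables appear to be gone. One minimal fix is to pin the metastore to a fixed path in hive-site.xml. A sketch; the /home/hadoop/hive-metastore path below is only an example, not a required location:]

```xml
<!-- hive-site.xml: pin the embedded Derby metastore to a fixed path
     so tables survive launching hive from different directories.
     /home/hadoop/hive-metastore is an example path; choose your own. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hadoop/hive-metastore/metastore_db;create=true</value>
</property>
```

For multi-user or production setups, a standalone metastore backed by MySQL or PostgreSQL is the usual choice instead of embedded Derby.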