Posted to dev@hive.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2013/01/09 11:27:42 UTC

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548037#comment-13548037 ] 

Hudson commented on HIVE-3413:
------------------------------

Integrated in Hive-trunk-hadoop2 #54 (See [https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
    HIVE-3413. Fix pdk.PluginTest on hadoop23 (Zhenxiao Luo via cws) (Revision 1380478)

     Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380478
Files : 
* /hive/trunk/builtins/build.xml
* /hive/trunk/builtins/ivy.xml
* /hive/trunk/pdk/scripts/build-plugin.xml
* /hive/trunk/pdk/test-plugin/test/conf
* /hive/trunk/pdk/test-plugin/test/conf/log4j.properties

> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>             Fix For: 0.10.0
>
>         Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt, HIVE-3413.3.patch.txt
>
>
> When running the Hive tests on Hadoop 0.23, pdk.PluginTest fails:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log
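> For context, on Hadoop 0.23 the "Cannot initialize Cluster" IOException is raised when no ClientProtocolProvider accepts the configured mapreduce.framework.name — commonly because the MapReduce client jars are missing from the test classpath or the property is unset. A minimal sketch of the relevant client-side configuration (assuming a local-mode run; property names are standard Hadoop 0.23 keys, not taken from this patch) would be:
>
> ```xml
> <!-- mapred-site.xml (illustrative sketch only) -->
> <configuration>
>   <property>
>     <!-- "local" runs MapReduce in-process; use "yarn" against a real
>          cluster, in which case the ResourceManager address must also
>          be configured and resolvable. -->
>     <name>mapreduce.framework.name</name>
>     <value>local</value>
>   </property>
> </configuration>
> ```
>
> The attached patch instead adjusts the build and ivy files plus the test log4j.properties, which suggests the fix was made on the classpath/build side rather than in site configuration.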

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira