Posted to dev@hive.apache.org by "Zhenxiao Luo (JIRA)" <ji...@apache.org> on 2012/08/29 04:13:08 UTC

[jira] [Created] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Zhenxiao Luo created HIVE-3413:
----------------------------------

             Summary: Fix pdk.PluginTest on hadoop23
                 Key: HIVE-3413
                 URL: https://issues.apache.org/jira/browse/HIVE-3413
             Project: Hive
          Issue Type: Bug
    Affects Versions: 0.9.0
            Reporter: Zhenxiao Luo
            Assignee: Zhenxiao Luo


When running the Hive tests on Hadoop 0.23, pdk.PluginTest is failing:

test:
    [junit] Running org.apache.hive.pdk.PluginTest
    [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
    [junit] Total MapReduce jobs = 1
    [junit] Launching Job 1 out of 1
    [junit] Number of reduce tasks determined at compile time: 1
    [junit] In order to change the average load for a reducer (in bytes):
    [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
    [junit] In order to limit the maximum number of reducers:
    [junit]   set hive.exec.reducers.max=<number>
    [junit] In order to set a constant number of reducers:
    [junit]   set mapred.reduce.tasks=<number>
    [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
    [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
    [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
    [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
    [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
    [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
    [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
    [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
    [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
    [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
    [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
    [junit] Execution failed with exit status: 1
    [junit] Obtaining error information
    [junit]
    [junit] Task failed!
    [junit] Task ID:
    [junit]   Stage-1
    [junit]
    [junit] Logs:
    [junit]
    [junit] /tmp/cloudera/hive.log
    [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
    [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec

With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:


Testsuite: org.apache.hive.pdk.PluginTest
Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
------------- Standard Error -----------------
GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
GLOBAL TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
OK
Time taken: 6.874 seconds
OK
Time taken: 0.512 seconds
------------- ---------------- ---------------

Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
    FAILED
expected:<[23]> but was:<[
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
Execution failed with exit status: 1
Obtaining error information

Task failed!
Task ID:
  Stage-1

Logs:

/tmp/cloudera/hive.log


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447695#comment-13447695 ] 

Hudson commented on HIVE-3413:
------------------------------

Integrated in Hive-trunk-h0.21 #1645 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1645/])
    HIVE-3413. Fix pdk.PluginTest on hadoop23 (Zhenxiao Luo via cws) (Revision 1380478)

     Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380478
Files : 
* /hive/trunk/builtins/build.xml
* /hive/trunk/builtins/ivy.xml
* /hive/trunk/pdk/scripts/build-plugin.xml
* /hive/trunk/pdk/test-plugin/test/conf
* /hive/trunk/pdk/test-plugin/test/conf/log4j.properties

                
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>             Fix For: 0.10.0
>
>         Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt, HIVE-3413.3.patch.txt
>
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log


[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Carl Steinbach (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-3413:
---------------------------------

    Component/s: Tests
    
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>         Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt, HIVE-3413.3.patch.txt
>
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log


[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Status: Open  (was: Patch Available)
    
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>         Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt
>
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log


[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Attachment: HIVE-3413.1.patch.txt
    
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>         Attachments: HIVE-3413.1.patch.txt
>
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log


[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443764#comment-13443764 ] 

Zhenxiao Luo commented on HIVE-3413:
------------------------------------

review request submitted at:
https://reviews.facebook.net/D5001
                
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log


[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443753#comment-13443753 ] 

Zhenxiao Luo commented on HIVE-3413:
------------------------------------

There are two problems with this bug:

#1. Missing Dependency:
The compile and test classpath in pdk/scripts/build-plugin.xml is based on build/ivy/lib/default; the following dependencies are missing when building hive-exec*.jar:

hadoop-mapreduce-client-jobclient
hadoop-minicluster

These dependencies should be added to ql/ivy.xml, which is where hive-exec dependencies are declared.

Note that the hadoop-mapreduce-client-jobclient dependency only needs its jar placement changed: if the jar is placed in build/ivy/lib/test/, it is not included in the pdk PluginTest classpath.
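
For illustration only, the entries added to ql/ivy.xml could look roughly like the sketch below. This is not the attached patch; the org/rev values and the version property name are assumptions, and the actual conf mappings depend on Hive's ivy setup:

    <!-- sketch: hypothetical ql/ivy.xml entries; ${hadoop-0.23.version} is an assumed property -->
    <dependency org="org.apache.hadoop" name="hadoop-mapreduce-client-jobclient"
                rev="${hadoop-0.23.version}" transitive="false"/>
    <dependency org="org.apache.hadoop" name="hadoop-minicluster"
                rev="${hadoop-0.23.version}" transitive="false"/>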

#2. After fixing #1, the following log4j warning messages appear in the output stream, which fails the testcase (pdk PluginTest diffs the expected output against the output stream):

test:
    [junit] Running org.apache.hive.pdk.PluginTest
    [junit] 2012-08-28 19:05:20,679 WARN  [main] conf.Configuration (Configuration.java:loadProperty(1621)) - file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    [junit] 2012-08-28 19:05:20,680 WARN  [main] conf.Configuration (Configuration.java:loadProperty(1621)) - file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
    [junit] 2]3>)
    [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 42.318 sec


And the details are in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:

Testsuite: org.apache.hive.pdk.PluginTest
Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 42.318 sec
------------- Standard Error -----------------
GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281905_427044653.txt
GLOBAL TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281905_95794698.txt
OK
Time taken: 6.585 seconds
OK
Time taken: 0.415 seconds
------------- ---------------- ---------------

Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 9.435 sec
    FAILED
expected:<2[]3> but was:<2[012-08-28 19:05:20,464 WARN  [main] conf.HiveConf (HiveConf.java:<clinit>(75)) - hive-site.xml not found on CLASSPATH
2012-08-28 19:05:20,679 WARN  [main] conf.Configuration (Configuration.java:loadProperty(1621)) - file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2012-08-28 19:05:20,680 WARN  [main] conf.Configuration (Configuration.java:loadProperty(1621)) - file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2]3>
junit.framework.ComparisonFailure: expected:<2[]3> but was:<2[012-08-28 19:05:20,464 WARN  [main] conf.HiveConf (HiveConf.java:<clinit>(75)) - hive-site.xml not found on CLASSPATH
2012-08-28 19:05:20,679 WARN  [main] conf.Configuration (Configuration.java:loadProperty(1621)) - file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2012-08-28 19:05:20,680 WARN  [main] conf.Configuration (Configuration.java:loadProperty(1621)) - file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2]3>
    at org.apache.hive.pdk.PluginTest.runTest(PluginTest.java:59)
    at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
    at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
    at junit.extensions.TestSetup.run(TestSetup.java:27)
    at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
    at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
    at junit.extensions.TestSetup.run(TestSetup.java:27)

This warning is printed to the console because it is emitted before Hive configures its log4j (which happens in HiveConf.java static initialization), and Hadoop's default log4j configuration is INFO,console. This does not happen on previous branches: on Hadoop 0.23 the code execution paths changed, so these warnings only show up when running Hive on Hadoop 0.23.

My proposed solution is to configure Hadoop's log4j so that its warning messages are printed to the log file instead of to the console. A log4j.properties file is added which configures Hadoop's log4j to DEBUG,DRFA.

A new log4j.properties file is added, rather than reusing hive-log4j.properties, because:

##1 This is the Hadoop log4j configuration, not the Hive log4j configuration (whose configuration file is named hive-log4j.properties); we may want different configurations for each.

##2 Hadoop looks for a configuration file named log4j.properties by default (no file with that name exists in the Hive configuration data).

I keep the log file name hive.log, not hadoop.log, since these warnings actually come from running Hive.

To make the pdk test run OK without a full Hive installation, I put the new log4j.properties file in the pdk test directory.
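
As a rough sketch of what that file can contain (assumed values, not necessarily the attached patch), a DRFA-based configuration along these lines routes the warnings into /tmp/${user.name}/hive.log instead of the console:

    # sketch only: hypothetical pdk test-plugin log4j.properties; concrete values are assumptions
    log4j.rootLogger=DEBUG,DRFA
    log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.DRFA.File=/tmp/${user.name}/hive.log
    log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
    log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
    log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n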

                
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
>     [junit] Running org.apache.hive.pdk.PluginTest
>     [junit] Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
>     [junit] Total MapReduce jobs = 1
>     [junit] Launching Job 1 out of 1
>     [junit] Number of reduce tasks determined at compile time: 1
>     [junit] In order to change the average load for a reducer (in bytes):
>     [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
>     [junit] In order to limit the maximum number of reducers:
>     [junit]   set hive.exec.reducers.max=<number>
>     [junit] In order to set a constant number of reducers:
>     [junit]   set mapred.reduce.tasks=<number>
>     [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
>     [junit] Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
>     [junit] java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     [junit]     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     [junit]     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     [junit]     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     [junit]     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     [junit]     at java.lang.reflect.Method.invoke(Method.java:616)
>     [junit]     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>     [junit] Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
>     [junit] Execution failed with exit status: 1
>     [junit] Obtaining error information
>     [junit]
>     [junit] Task failed!
>     [junit] Task ID:
>     [junit]   Stage-1
>     [junit]
>     [junit] Logs:
>     [junit]
>     [junit] /tmp/cloudera/hive.log
>     [junit] FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask]>)
>     [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> ------------- Standard Error -----------------
> GLOBAL SETUP:  Copying file: file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_252250000.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> ------------- ---------------- ---------------
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 4.428 sec
>     FAILED
> expected:<[23]> but was:<[
> Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
>     at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> /tmp/cloudera/hive.log


[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Carl Steinbach (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-3413:
---------------------------------

    Status: Open  (was: Patch Available)

@Zhenxiao: Please see my comments on phabricator.
                

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443765#comment-13443765 ] 

Zhenxiao Luo commented on HIVE-3413:
------------------------------------

A quick note: to build Hive on hadoop23:
$ant very-clean package -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 -Dhadoop.mr.rev=23

And to run the tests:
$ant test -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 -Dhadoop.mr.rev=23
                

[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Attachment: HIVE-3413.3.patch.txt
    

[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Status: Patch Available  (was: Open)
    

[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Status: Patch Available  (was: Open)
    

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13445520#comment-13445520 ] 

Zhenxiao Luo commented on HIVE-3413:
------------------------------------

The missing hadoop-minicluster dependency should not go into ql/ivy.xml.
It should be declared in the hadoop23.test configuration, so that it is retrieved into build/ivy/lib/test.
The pdk plugin test is triggered via builtin/build.xml, so the dependency is added in builtin/ivy.xml,
and builtin/build.xml is updated so that the test target depends on ivy-retrieve-test.
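
For concreteness, a minimal sketch of the kind of change described above, assuming the Ivy configuration is named hadoop0.23.test and the version property is ${hadoop-0.23.version}; the conf mapping and exact names in the attached patch may differ:

In builtin/ivy.xml:

    <dependency org="org.apache.hadoop" name="hadoop-minicluster"
                rev="${hadoop-0.23.version}" conf="hadoop0.23.test->default"/>

In builtin/build.xml:

    <target name="test" depends="ivy-retrieve-test">
      <!-- existing test steps stay as they are; the point is only that
           ivy-retrieve-test runs first and fills build/ivy/lib/test -->
    </target>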

Updated patch submitted for review at:
https://reviews.facebook.net/D5001
                

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444353#comment-13444353 ] 

Zhenxiao Luo commented on HIVE-3413:
------------------------------------

There is an error in my earlier JIRA comment: it is actually not necessary to put these dependencies in compile->default.

pdk/scripts/build-plugin.xml has the following line (line 125):
  <fileset dir="${build.ivy.lib.dir}/test" includes="*.jar" excludes="hive*.jar"/>

which also puts the build/ivy/lib/test directory on the classpath used to run the tests.
As Carl pointed out, putting the dependency in compile->default would break the 0.20 build; it should go into hadoop0.23.test instead.
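
To make the classpath mechanics concrete, here is an illustrative Ant fragment (not the actual contents of pdk/scripts/build-plugin.xml; the path id and surrounding attributes are assumptions) showing how a fileset like the one quoted above typically ends up on the JUnit classpath, so anything ivy retrieves into build/ivy/lib/test is picked up automatically:

    <path id="plugin.test.classpath">
      <!-- every jar that ivy-retrieve-test placed in build/ivy/lib/test,
           excluding the hive jars themselves -->
      <fileset dir="${build.ivy.lib.dir}/test" includes="*.jar" excludes="hive*.jar"/>
    </path>

    <junit fork="true" haltonfailure="no">
      <classpath refid="plugin.test.classpath"/>
      <test name="org.apache.hive.pdk.PluginTest"/>
    </junit>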

@Carl: my updated patch is submitted at:
https://reviews.facebook.net/D5001

                

[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Attachment: HIVE-3413.2.patch.txt
    

[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Zhenxiao Luo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenxiao Luo updated HIVE-3413:
-------------------------------

    Status: Patch Available  (was: Open)
    


[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Carl Steinbach (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-3413:
---------------------------------

       Resolution: Fixed
    Fix Version/s: 0.10.0
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Zhenxiao!
                
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>             Fix For: 0.10.0
>
>         Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt, HIVE-3413.3.patch.txt


[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

Posted by "Carl Steinbach (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13445728#comment-13445728 ] 

Carl Steinbach commented on HIVE-3413:
--------------------------------------

+1. Will commit if tests pass.
                
> Fix pdk.PluginTest on hadoop23
> ------------------------------
>
>                 Key: HIVE-3413
>                 URL: https://issues.apache.org/jira/browse/HIVE-3413
>             Project: Hive
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 0.9.0
>            Reporter: Zhenxiao Luo
>            Assignee: Zhenxiao Luo
>         Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt, HIVE-3413.3.patch.txt
