Posted to dev@pig.apache.org by "Araceli Henley (Resolved) (JIRA)" <ji...@apache.org> on 2012/01/26 23:45:41 UTC

[jira] [Resolved] (PIG-2392) merge join results in ERROR 2176

     [ https://issues.apache.org/jira/browse/PIG-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Araceli Henley resolved PIG-2392.
---------------------------------

       Resolution: Fixed
    Fix Version/s:     (was: 0.9.3)
                   0.9.2
     Hadoop Flags: Reviewed

Fixed on: 0.23.1.1201120103 / 0.9.2.1201201216
                
> merge join results in ERROR 2176
> --------------------------------
>
>                 Key: PIG-2392
>                 URL: https://issues.apache.org/jira/browse/PIG-2392
>             Project: Pig
>          Issue Type: Bug
>          Components: impl
>         Environment: -bash-3.1$ hadoop version
> Hadoop 0.23.0.1111080202
> Subversion http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23.0/hadoop-common-project/hadoop-common -r 1196973
> Compiled by hadoopqa on Tue Nov  8 02:12:04 PST 2011
> From source with checksum 4e42b2d96c899a98a8ab8c7cc23f27ae
> -bash-3.1$ pig -version
> Apache Pig version 0.9.2.1111101150 (r1200499) 
> compiled Nov 10 2011, 19:50:15
> - Mount-side tables are enabled, but files are being referenced as in Hadoop 0.20.
>            Reporter: Araceli Henley
>             Fix For: 0.9.2
>
>
> This is a regression for dotNext.
> a = load '/user/user1/pig/tests/data/singlefile/studenttab10k';
> b = load '/user/user1/pig/tests/data/singlefile/votertab10k';
> c = order a by $0;
> d = order b by $0;
> store c into '/user/user1/pig/out/user1.1322779146/dotNext_MergeJoin_1.out.intermediate1';
> store d into '/user/user1/pig/out/user1.1322779146/dotNext_MergeJoin_1.out.intermediate2';
> exec;
> e = load '/user/user1/pig/out/user1.1322779146/dotNext_MergeJoin_1.out.intermediate1';
> f = load '/user/user1/pig/out/user1.1322779146/dotNext_MergeJoin_1.out.intermediate2';
> g = join e by $0, f by $0 using 'merge';
> store g into '/user/user1/pig/out/user1.1322779146/dotNext_MergeJoin_1.out';
> Backend error message
> ---------------------
> AttemptID:attempt_1321041443489_3292_m_000000_0 Info:Error: org.apache.pig.backend.executionengine.ExecException: ERROR 2176: Error processing right input during merge join
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POMergeJoin.throwProcessingException(POMergeJoin.java:458)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POMergeJoin.getNext(POMergeJoin.java:188)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:267)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:262)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:711)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142)
> Caused by: java.io.IOException: Delegation Token can be issued only with kerberos or web authentication
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:4027)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:281)
>         at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1490)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1486)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1484)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1085)
>         at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
>         at $Proxy8.getDelegationToken(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
>         at $Proxy8.getDelegationToken(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:429)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:812)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationTokens(DistributedFileSystem.java:839)
>         at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getDelegationTokens(ChRootedFileSystem.java:311)
>         at org.apache.hadoop.fs.viewfs.ViewFileSystem.getDelegationTokens(ViewFileSystem.java:490)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:134)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:90)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:83)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:205)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:269)
>         at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:154)
>         at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:137)
>         at org.apache.pig.impl.builtin.DefaultIndexableLoader.initRightLoader(DefaultIndexableLoader.java:208)
>         at org.apache.pig.impl.builtin.DefaultIndexableLoader.seekNear(DefaultIndexableLoader.java:192)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POMergeJoin.seekInRightStream(POMergeJoin.java:410)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POMergeJoin.getNext(POMergeJoin.java:186)
>         ... 11 more
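
For context on the trace: the right side of a merge join is re-opened inside the map task (POMergeJoin -> DefaultIndexableLoader -> ReadToEndLoader), and on a secured viewfs cluster that runtime open tries to fetch a fresh HDFS delegation token, which only the job client holding Kerberos credentials may do; hence the "Delegation Token can be issued only with kerberos or web authentication" failure. Below is a minimal sketch of the usual submit-time token pattern, assuming the Hadoop 0.23 TokenCache/Credentials API and reusing the intermediate2 path from the script above. This is an illustration only, not the actual PIG-2392 patch, and the class name is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.security.TokenCache;
    import org.apache.hadoop.security.Credentials;

    public class PrefetchRightInputTokens {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Right-side input of the merge join from the repro script above.
            Path rightInput = new Path(
                "/user/user1/pig/out/user1.1322779146/dotNext_MergeJoin_1.out.intermediate2");

            // Collect delegation tokens for the namenode(s) backing this path while
            // the client still holds Kerberos credentials. Attached to the job's
            // credentials, the tokens ship with the tasks, so no task ever has to
            // call getDelegationToken() itself (which is what fails in the trace above).
            Credentials creds = new Credentials();
            TokenCache.obtainTokensForNamenodes(creds, new Path[] { rightInput }, conf);

            System.out.println("Tokens collected: " + creds.numberOfTokens());
        }
    }

Presumably the 0.23/0.9.2 builds noted above resolve this by ensuring the right-input tokens are already available before the map task seeks into the right stream.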

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira