Posted to dev@pig.apache.org by "Daniel Dai (JIRA)" <ji...@apache.org> on 2009/12/10 01:27:18 UTC

[jira] Commented: (PIG-1142) Got NullPointerException merge join with pruning

    [ https://issues.apache.org/jira/browse/PIG-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788415#action_12788415 ] 

Daniel Dai commented on PIG-1142:
---------------------------------

I can reproduce it with a regular join as well. Here is the script:
{code}
a = LOAD '1.txt' as (a0, a1, a2);
b = LOAD '2.txt' as (b0, b1, b2);
c = join a by a2, b by b2;
d = foreach c generate $0, $1, $2;
dump d;
{code}
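
The bug is in logical plan construction, so the data content should not matter; any small tab-delimited files with three columns ought to do. The sample contents below are made up for illustration, not taken from the report:
{code}
$ cat 1.txt
1	2	3
4	5	6
$ cat 2.txt
7	8	3
{code}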

The logical plan is wrong:
{code}
.........
    |---LOJoin 1-17 Schema: {a::a0: bytearray,a::a1: bytearray,a::a2: bytearray,b::b2: bytearray} Type: bag
        |   |
        |   Project 1-15 Projections: [2] Overloaded: false FieldSchema: a2: bytearray Type: bytearray
        |   Input: Load 1-13
        |   |
        |   Project 1-16 Projections: [1] Overloaded: false FieldSchema: Caught Exception: Attempt to fetch field 1 from schema of size 1 Type: Unknown
        |   Input: Load 1-14
        |
        |---Load 1-13 Schema: {a0: bytearray,a1: bytearray,a2: bytearray} Type: bag
        |
        |---Load 1-14 Schema: {b2: bytearray} Type: bag
{code}

The second Project under the LOJoin still projects column 1, but after pruning relation b is loaded with the single column b2, so the join key should be projected from column 0. That stale index is what produces the "Attempt to fetch field 1 from schema of size 1" error in the plan above.
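
A minimal sketch of the index remapping the pruning pass needs to apply to such projections; the class and method below are hypothetical, written only to illustrate the arithmetic, and are not Pig's actual pruning code:
{code}
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class ProjectionRemap {
    // After columns are pruned from an input relation, a projection that
    // referenced the original schema must be shifted down by the number
    // of pruned columns that preceded it.
    static int remap(int originalIndex, Set<Integer> prunedColumns) {
        int shift = 0;
        for (int pruned : prunedColumns) {
            if (pruned < originalIndex) {
                shift++;
            }
        }
        return originalIndex - shift;
    }

    public static void main(String[] args) {
        // b was declared as (b0, b1, b2); pruning removed b0 and b1, so the
        // join key b2 moves from index 2 to index 0 in the pruned schema.
        Set<Integer> pruned = new TreeSet<>(Arrays.asList(0, 1));
        System.out.println(remap(2, pruned)); // prints 0
    }
}
{code}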

> Got NullPointerException merge join with pruning
> ------------------------------------------------
>
>                 Key: PIG-1142
>                 URL: https://issues.apache.org/jira/browse/PIG-1142
>             Project: Pig
>          Issue Type: Bug
>    Affects Versions: 0.6.0
>            Reporter: Jing Huang
>             Fix For: 0.7.0
>
>
> Here is my pig script:
> register $zebraJar;
> --fs -rmr $outputDir
> a1 = LOAD '$inputDir/small1' USING org.apache.hadoop.zebra.pig.TableLoader('count,seed,int1,str2');
> a2 = LOAD '$inputDir/small2' USING org.apache.hadoop.zebra.pig.TableLoader('count,seed,int1,str2');
> sort1 = order a1 by str2;
> sort2 = order a2 by str2;
> --store sort1 into '$outputDir/smallsorted11' using org.apache.hadoop.zebra.pig.TableStorer('[count,seed,int1,str2]');
> --store sort2 into '$outputDir/smallsorted21' using org.apache.hadoop.zebra.pig.TableStorer('[count,seed,int1,str2]');
> rec1 = load '$outputDir/smallsorted11' using org.apache.hadoop.zebra.pig.TableLoader();
> rec2 = load '$outputDir/smallsorted21' using org.apache.hadoop.zebra.pig.TableLoader();
> joina = join rec1 by str2, rec2 by str2 using 'merge';
> E = foreach joina generate $0 as count, $1 as seed, $2 as int1, $3 as str2;
> --limitedVals = LIMIT E 5;
> --dump limitedVals;
> store E into '$outputDir/smalljoin2' using org.apache.hadoop.zebra.pig.TableStorer('');
> ============
> Here is the stacktrace:
> java.lang.NullPointerException
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNext(POLocalRearrange.java:312)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POMergeJoin.extractKeysFromTuple(POMergeJoin.java:464)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POMergeJoin.getNext(POMergeJoin.java:341)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:260)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:237)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.runPipeline(PigMapBase.java:253)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.close(PigMapBase.java:107)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>         at org.apache.hadoop.mapred.Child.main(Child.java:159)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.