Posted to dev@pig.apache.org by "liyunzhang_intel (JIRA)" <ji...@apache.org> on 2016/06/27 06:35:52 UTC

[jira] [Updated] (PIG-4936) Fix NPE exception in TestCustomPartitioner#testCustomPartitionerParseJoins

     [ https://issues.apache.org/jira/browse/PIG-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

liyunzhang_intel updated PIG-4936:
----------------------------------
    Attachment: PIG-4936.patch

[~mohitsabharwal],[~kexianda] and [~pallavi.rao]: please help review.
The root cause of the NPE is in MapReducePartitionerWrapper#getPartition:
{code}
    public int getPartition(final Object key) {
        try {

            PigNullableWritable writeableKey = new PigNullableWritable() {
                public Object getValueAsPigType() {
                    // Bug: returns key as-is, even when it is a Spark-side IndexedKey wrapper
                    return key;
                }
            };
{code}

Here we return key as-is. If the key's class is org.apache.pig.backend.hadoop.executionengine.spark.converter.IndexedKey, we should instead return ((IndexedKey)key).getKey().
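
A minimal sketch of that unwrapping, for illustration only (the actual change is in the attached PIG-4936.patch and may differ in detail; the local variable name realKey is just illustrative):
{code}
    public int getPartition(final Object key) {
        try {
            // Unwrap IndexedKey so the custom partitioner sees the underlying
            // Pig key instead of the Spark-side wrapper (which leads to the
            // NPE in PigNullableWritable.hashCode seen in the stack trace).
            final Object realKey = (key instanceof IndexedKey)
                    ? ((IndexedKey) key).getKey()
                    : key;

            PigNullableWritable writeableKey = new PigNullableWritable() {
                public Object getValueAsPigType() {
                    return realKey;
                }
            };
{code}
With this change, SimpleCustomPartitioner.getPartition receives the same kind of key on Spark as it does on MapReduce.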

> Fix NPE exception in TestCustomPartitioner#testCustomPartitionerParseJoins
> --------------------------------------------------------------------------
>
>                 Key: PIG-4936
>                 URL: https://issues.apache.org/jira/browse/PIG-4936
>             Project: Pig
>          Issue Type: Bug
>            Reporter: liyunzhang_intel
>            Assignee: liyunzhang_intel
>         Attachments: PIG-4936.patch
>
>
> After PIG-4919 (upgrade Spark to 1.6.1), TestCustomPartitioner#testCustomPartitionerParseJoins fails:
> {code}
> Testcase: testCustomPartitionerParseJoins took 1.66 sec
>         Caused an ERROR
> Unable to open iterator for alias hash. Backend error : Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 3, localhost): java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>         at org.apache.pig.backend.hadoop.executionengine.spark.MapReducePartitionerWrapper.getPartition(MapReducePartitionerWrapper.java:118)
>         at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:89)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.pig.backend.hadoop.executionengine.spark.MapReducePartitionerWrapper.getPartition(MapReducePartitionerWrapper.java:110)
>         ... 8 more 
> Caused by: java.lang.NullPointerException
>         at org.apache.pig.impl.io.PigNullableWritable.hashCode(PigNullableWritable.java:185)
>         at org.apache.pig.test.utils.SimpleCustomPartitioner.getPartition(SimpleCustomPartitioner.java:33)
>         ... 13 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)