Posted to common-user@hadoop.apache.org by Nick Cen <ce...@gmail.com> on 2009/03/02 02:38:39 UTC

What's the cause of this Exception

java.lang.ArrayIndexOutOfBoundsException: 4096
        at org.apache.hadoop.io.WritableComparator.compareBytes(WritableComparator.java:129)
        at org.apache.hadoop.mapred.lib.KeyFieldBasedComparator.compareByteSequence(KeyFieldBasedComparator.java:109)
        at org.apache.hadoop.mapred.lib.KeyFieldBasedComparator.compare(KeyFieldBasedComparator.java:85)
        at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:308)
        at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139)
        at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
        at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:270)
        at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:285)
        at org.apache.hadoop.mapred.Task$ValuesIterator.readNextKey(Task.java:870)
        at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:829)
        at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:237)
        at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:233)
        at ufida.ReduceTask.reduce(ReduceTask.java:39)
        at ufida.ReduceTask.reduce(ReduceTask.java:1)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
        at org.apache.hadoop.mapred.Child.main(Child.java:155)

My Hadoop version is 0.19.0. If I limit the number of input files, the
exception will not be thrown.
-- 
http://daily.appspot.com/food/

Re: What's the cause of this Exception

Posted by Nick Cen <ce...@gmail.com>.
Hi,

Just to provide more info: with "mapred.job.tracker" set to local, which
makes the program run locally, everything works fine; but on the fully
distributed cluster the exception appears.


Re: What's the cause of this Exception

Posted by Nick Cen <ce...@gmail.com>.
Hi,

I have set the separator value, but the same exception is thrown.
Since I take only the first part of the whole key as the sort key, I would
expect that even an incorrect separator value would not cause an
ArrayIndexOutOfBoundsException.


Re: What's the cause of this Exception

Posted by jason hadoop <ja...@gmail.com>.
Did you by chance set the separator character to ','?
map.output.key.field.separator is the property; the default is TAB.
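For context, here is a sketch of the 0.19-era JobConf setup being discussed (option names as in the thread; `MyJob` is a placeholder class):

```java
// Sketch of the old (0.19) mapred API configuration under discussion.
// With keys like "key1,key2,key3", the field separator must be set to ','
// (the default is TAB), and the key spec should be bounded as -k1,1.
JobConf conf = new JobConf(MyJob.class);           // MyJob is a placeholder
conf.set("map.output.key.field.separator", ",");
conf.setKeyFieldPartitionerOptions("-k1,1");       // partition on field 1 only
conf.setKeyFieldComparatorOptions("-k1,1");        // compare on field 1, bounded on both ends
```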


Re: What's the cause of this Exception

Posted by Nick Cen <ce...@gmail.com>.
Hi,

My key has the format "key1,key2,key3", and I call
conf.setKeyFieldPartitionerOptions("-k 1,1"). When I limit the input size
it works fine; I think this is because limiting the input bounds the total
number of possible "key1,key2,key3" combinations. But when I increase the
input size, this exception is thrown.


Re: What's the cause of this Exception

Posted by jason hadoop <ja...@gmail.com>.
The way you are specifying the section of your key to compare reaches
beyond the end of the last part of the key: your key specification does not
terminate explicitly on the last character of the final field.

If your key splits into N parts and you are comparing on the Nth part,
-kN,N will work, while -kN will throw the exception.

By default the comparator picks up a piece together with its trailing
separator. The last piece has no trailing separator, so you get the
array-out-of-bounds exception.
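A minimal plain-Java sketch of that failure mode (illustrative names only, not Hadoop code): a field scan that assumes every field ends with a separator walks off the end of the key bytes on the last field.

```java
// Sketch of why an open-ended key spec like "-kN" can read past the end of
// the key bytes: the scan looks for a trailing separator that the last
// field does not have. KeySpecOverrun is an illustrative class, not Hadoop.
public class KeySpecOverrun {
    // Return the index one past the end of the field starting at 'start',
    // assuming (wrongly, for the last field) a trailing separator exists.
    static int endOfFieldAssumingSeparator(byte[] key, int start, byte sep) {
        int i = start;
        while (key[i] != sep) {  // throws when the last field has no trailing sep
            i++;
        }
        return i + 1;            // include the separator
    }

    public static void main(String[] args) {
        byte[] key = "key1,key2,key3".getBytes();
        // Fields 1 and 2 end with ',' so these scans are safe:
        System.out.println(endOfFieldAssumingSeparator(key, 0, (byte) ','));
        // The last field ("key3", starting at index 10) has no trailing
        // separator, so the scan walks off the end of the array:
        try {
            endOfFieldAssumingSeparator(key, 10, (byte) ',');
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("walked past the end of the key: " + e);
        }
    }
}
```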


