Posted to user@accumulo.apache.org by Bulldog20630405 <bu...@gmail.com> on 2019/09/04 15:53:44 UTC

map error: cannot seek to negative offset

minor and major compactions hung with the following error (note: the rfiles
are zero length). has anyone seen this before? what is the root cause?
(note: i can copy empty rfiles over the zero-length ones; however, i'm
trying to understand what went wrong):

Some problem opening map file hdfs://namenode/accumulo/tables/9/xyz.rf
Cannot seek to negative offset
java.io.EOFException: Cannot seek to negative offset
at org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1459)
...
at org.apache.accumulo.core.file.RFile$Reader.<init>(RFile.java:1149)
...
at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:2034)
at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:2160)
...
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
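An easy way to spot the bad files before deciding on a fix is to filter the HDFS listing for zero-length RFiles. A sketch, assuming the stock `hdfs dfs -ls -R` column layout (file size in the fifth column; adjust the awk program if your layout differs). The demo below feeds canned listing output through the same filter you would pipe the real command into:

```shell
# Real usage:
#   hdfs dfs -ls -R /accumulo/tables | awk '$5 == 0 && $NF ~ /\.rf$/ {print $NF}'
# Demo: run the same awk filter over canned `hdfs dfs -ls -R` output.
found=$(awk '$5 == 0 && $NF ~ /\.rf$/ {print $NF}' <<'EOF'
-rw-r--r--   3 accumulo supergroup          0 2019-09-04 15:10 /accumulo/tables/9/xyz.rf
-rw-r--r--   3 accumulo supergroup     104857 2019-09-04 15:10 /accumulo/tables/9/abc.rf
EOF
)
echo "$found"
```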

Re: map error: cannot seek to negative offset

Posted by Christopher <ct...@apache.org>.
Accumulo really shouldn't create such files at all. But, I have seen
such files when somebody put one there instead of an "empty" RFile in
response to a previous failure. It might also happen if the HDFS
client connection fails, I suppose.

Glad it's working for you now, though.

If you see this again or gather any further insight that would
indicate a bug in Accumulo, please report it at
https://github.com/apache/accumulo/issues

On Wed, Sep 4, 2019 at 5:53 PM Bulldog20630405
<bu...@gmail.com> wrote:
>
> we are running accumulo 1.9.3
>
> yes; i don't know why they are zero length... there ended up being only 10 or so rfiles; however, i was able to run org.apache.accumulo.core.file.rfile.CreateEmpty over top of those bad files after taking the table offline and then bringing it back online... life is good :-)
> it was only a dev table; however, i was concerned about this happening in prod...
>
> the disks looked good and we have plenty of space; don't know what happened...
>
> thanx
>
>
>
> On Wed, Sep 4, 2019 at 5:37 PM Christopher <ct...@apache.org> wrote:
>>
>> I can't match those line numbers up exactly. What version are you running?
>>
>> Regardless, a zero-length RFile is not a valid RFile. It looks like it
>> is trying to read the meta information from the RFile to initialize
>> the file reader object.
>>
>> You will need to copy over empty RFiles to replace the zero length
>> ones, but there's no indication in the provided information about how
>> the zero length files appeared. Did you have an HDFS failure or some
>> other system failure prior to this? Do you have anything in your
>> tserver logs that shows the file name to indicate how it appeared with
>> no contents? Perhaps you had a disk failure? Might be worth
>> investigating just to understand the full situation, but the fix
>> should just be to copy over the file with an empty valid Accumulo
>> file.
>>

Re: map error: cannot seek to negative offset

Posted by Bulldog20630405 <bu...@gmail.com>.
we are running accumulo 1.9.3

yes; i don't know why they are zero length... there ended up being only 10
or so rfiles; however, i was able to run
org.apache.accumulo.core.file.rfile.CreateEmpty over top of those bad files
after taking the table offline and then bringing it back online... life is
good :-)
it was only a dev table; however, i was concerned about this happening in
prod...

the disks looked good and we have plenty of space; don't know what
happened...

thanx
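For the archive, the recovery described above can be sketched as a command sequence. Treat it as a sketch, not a verified procedure: the table name and file path are placeholders, and since it's unclear whether CreateEmpty will write over an existing file in every version, the zero-length file is removed first here.

```shell
# Take the table offline so tservers release the tablet's files.
accumulo shell -u root -e "offline -t mytable -w"
# Drop the corrupt zero-length file (path is a placeholder).
hdfs dfs -rm /accumulo/tables/9/xyz.rf
# Write a structurally valid, empty RFile in its place.
accumulo org.apache.accumulo.core.file.rfile.CreateEmpty hdfs://namenode/accumulo/tables/9/xyz.rf
# Bring the table back online.
accumulo shell -u root -e "online -t mytable -w"
```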



On Wed, Sep 4, 2019 at 5:37 PM Christopher <ct...@apache.org> wrote:

> I can't match those line numbers up exactly. What version are you running?
>
> Regardless, a zero-length RFile is not a valid RFile. It looks like it
> is trying to read the meta information from the RFile to initialize
> the file reader object.
>
> You will need to copy over empty RFiles to replace the zero length
> ones, but there's no indication in the provided information about how
> the zero length files appeared. Did you have an HDFS failure or some
> other system failure prior to this? Do you have anything in your
> tserver logs that shows the file name to indicate how it appeared with
> no contents? Perhaps you had a disk failure? Might be worth
> investigating just to understand the full situation, but the fix
> should just be to copy over the file with an empty valid Accumulo
> file.
>

Re: map error: cannot seek to negative offset

Posted by Christopher <ct...@apache.org>.
I can't match those line numbers up exactly. What version are you running?

Regardless, a zero-length RFile is not a valid RFile. It looks like it
is trying to read the meta information from the RFile to initialize
the file reader object.

You will need to copy over empty RFiles to replace the zero length
ones, but there's no indication in the provided information about how
the zero length files appeared. Did you have an HDFS failure or some
other system failure prior to this? Do you have anything in your
tserver logs that shows the file name to indicate how it appeared with
no contents? Perhaps you had a disk failure? Might be worth
investigating just to understand the full situation, but the fix
should just be to copy over the file with an empty valid Accumulo
file.
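The "negative offset" in the trace follows directly from the empty file: the reader seeks backwards from end-of-file to load that meta information, and for a zero-length file the seek target goes negative. A toy illustration only; the trailer length here is a made-up constant, not Accumulo's actual value:

```shell
# Readers find RFile metadata by seeking to (file length - trailer length).
TRAILER_LEN=16   # illustrative constant only
file_len=0       # a zero-length RFile
offset=$((file_len - TRAILER_LEN))
echo "$offset"   # negative -> DFSInputStream.seek() throws EOFException
```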
