Posted to users@activemq.apache.org by khandelwalanuj <kh...@gmail.com> on 2014/09/09 08:49:10 UTC

Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

I am also seeing the same exception with ActiveMQ 5.10. It occurs infrequently
and is not reproducible.

I have already posted the details here:
http://activemq.2283324.n4.nabble.com/ActiveMQ-exception-quot-Failed-to-browse-Topic-quot-td4683227.html#a4683305


ActiveMQ gods, can you please help us out here?



Thanks,
Anuj




Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

Posted by bharadwaj nakka <bh...@gmail.com>.
This is a known bug in JBoss Fuse 6.1; the issue is fixed in the JBoss Fuse 6.1
R2P5 rollup patch 5.




Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

Posted by khandelwalanuj <kh...@gmail.com>.
I am using NFS. 

amqgod@txnref1.nyc:/u/amqgod> stat -f ~/kahadb/
  File: "/u/amqgod/kahadb/"
    ID: 0        Namelen: 255     Type: nfs
Block size: 65536      Fundamental block size: 65536
Blocks: Total: 245760     Free: 210550     Available: 210550
Inodes: Total: 1048576    Free: 1043878
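
As a cross-check (assuming a Linux NFS client), the rsize/wsize options this
mount was actually given can be listed with either of the following and compared
against the block size reported above:

nfsstat -m
grep nfs /proc/mounts

Both list each NFS mount together with its options (rsize, wsize, vers, and so on).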

Is this issue because of NFS? If it were caused by NFS, it should happen for all
destinations, but only 2-3 topics are showing this exception continuously. I
think it is caused by some KahaDB corruption. Can you please take a look at
this?


Thanks,
Anuj




Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

Posted by Gary Tully <ga...@gmail.com>.
Thanks for sharing this info, Paul :-)




-- 
http://redhat.com
http://blog.garytully.com

Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

Posted by Paul Gale <pa...@gmail.com>.
All of the following assumes you're using Linux. I'm using RHEL 6.3 to
mount an NFSv3-based device using autofs.

I should have added that the issue for me was that I had specified the
wrong block size values for the rsize/wsize parameters in the autofs mount
configuration for the device I was mounting.

I was operating under the mistaken belief that the larger the value of these
parameters, the better, so I set them to 256K (262144 bytes). Problems with the
message store followed.

What I should have done, and eventually did, was determine the device's _actual_
block size rather than guess it. You can either ask the device's administrator
what its block size is or use the stat command.

If you really want to play it safe, you could always use the default NFSv3 block
size, which is quite conservative at 8192 bytes (I think; look it up). However,
if the device you're mounting supports larger block sizes, the stat command is
how you find that out.

First, mount the device using a _very_ conservative block size value, say
1024 bytes. Second, run the stat command on the mount point to see what the
device's block size actually is. It might be the default 8192 or it could
be larger. Either way you'll know.
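
For the first step, a rough sketch of such a temporary, conservative mount
(the server name and export path below are placeholders) could look like:

mount -t nfs -o vers=3,rsize=1024,wsize=1024 nfsserver:/export/activemq /NFS

The stat command shown next covers the second step.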

Here's an example. Say your local mount point is /NFS; the stat command to use
is:

stat -f /NFS

The output should look something like:

File: "/NFS"
ID: 0        Namelen: 255    Type: nfs
Block size: 32768      Fundamental block size: 32768
Blocks: Total: 330424288  Free: 178080429  Available: 178080429
Inodes: Total: 257949694  Free: 246974355

The output indicates the block size in bytes (32768) for the device. This
is the value that should be plugged into the rsize/wsize parameters for the
mount's definition.
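
To make that concrete, here is a rough sketch of what a direct-map autofs entry
could look like with the measured value plugged in (the map file name, server
name, and export path are placeholders; the other options are just common NFSv3
choices, not a recommendation):

/NFS  -fstype=nfs,vers=3,rw,hard,rsize=32768,wsize=32768  nfsserver:/export/activemq

An equivalent /etc/fstab entry would carry the same rsize=32768,wsize=32768 in
its options field.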

I hope this helps.

Thanks,
Paul



Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

Posted by Paul Gale <pa...@gmail.com>.
In my particular case I fixed it when I realized that I had misconfigured the
NFS mount settings for the mount where the KahaDB message store was located.
Since correcting the settings I haven't had a single problem.

Are you using NFS?


Thanks,
Paul
