Posted to user@cassandra.apache.org by Yang <te...@gmail.com> on 2011/06/11 03:47:13 UTC

simple get_slice() gives error?

I'm using thrift.CassandraServer directly within the same Cassandra JVM to
accomplish my application tasks.
(I understand that this is not the normal usage mode, but the error here may
also appear during Cassandra server code development, so I thought it could
be of some value to look at.)
I ran into some issues when I tried to parse values out of the column-name
ByteBuffer:


something like the following:

    List<ColumnOrSuperColumn> cols = cassandra_svr.get_slice(key, path,
            predicate, ConsistencyLevel.ONE);
    for (ColumnOrSuperColumn colOrSup : cols)
    {
        Column col = colOrSup.column;
        long ts = col.name.getLong();   // <====== fails here sometimes
    }


Sometimes the last statement throws a BufferUnderflowException. I checked that
the buffer size is 8 bytes and that the position already points to 8. It seems
that somewhere in thrift.CassandraServer, around lines 179-195:

    public List<ColumnOrSuperColumn> thriftifyColumns(Collection<IColumn> columns, boolean reverseOrder)
    {
        ...
            {
                Column thrift_column = new Column(column.name())
                        .setValue(column.value())
                        .setTimestamp(column.timestamp());

the Column constructor is called with column.name(), which simply reuses the
old column.name buffer and therefore preserves its position.


Is this an error in the server code? (It looks like a flip() is needed somewhere.)
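
For reference, here is a minimal, self-contained sketch (plain java.nio, not
Cassandra code) of the symptom: once the buffer's position equals its limit,
a relative getLong() underflows even though all 8 bytes are still there.

    import java.nio.BufferUnderflowException;
    import java.nio.ByteBuffer;

    public class UnderflowSketch
    {
        public static void main(String[] args)
        {
            // Simulate a column-name buffer whose position was left at the end:
            // capacity 8, position == limit == 8, so nothing remains for relative reads.
            ByteBuffer name = ByteBuffer.allocate(8);
            name.putLong(1234567890L);      // writes 8 bytes, position is now 8

            try
            {
                name.getLong();             // relative read with 0 bytes remaining -> throws
            }
            catch (BufferUnderflowException e)
            {
                System.out.println("relative getLong() underflows: " + e);
            }

            // An absolute read at index 0 ignores the position and still sees the bytes.
            System.out.println("absolute getLong(0) = " + name.getLong(0));
        }
    }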


Thanks
Yang

Re: simple get_slice() gives error?

Posted by Yang <te...@gmail.com>.
Thanks Jonathan,

I found more cases where I wrongly used getInt() directly.

It caused some random data errors (java.nio.BufferUnderflowException), even
though I did rewind() these buffers before reading them. So the most probable
cause is that some other threads are sharing the same buffers.
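
As a small sketch of why that matters (the names here are illustrative, not
Cassandra code): rewind() and the relative getLong() both mutate the buffer's
position, so two threads interleaving on one buffer can still underflow, while
duplicate() gives each reader an independent position over the same bytes.

    import java.nio.ByteBuffer;

    public class SharedBufferSketch
    {
        // Racy if 'name' is shared: rewind() and getLong() both move the shared position.
        static long readRacy(ByteBuffer name)
        {
            name.rewind();
            return name.getLong();
        }

        // Safer: duplicate() shares the underlying bytes but has its own position/limit,
        // so this reader neither disturbs nor is disturbed by other readers.
        static long readSafe(ByteBuffer name)
        {
            ByteBuffer view = name.duplicate();
            view.rewind();
            return view.getLong();
        }

        public static void main(String[] args)
        {
            ByteBuffer name = ByteBuffer.allocate(8);
            name.putLong(42L);
            System.out.println(readSafe(name));   // prints 42
        }
    }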

This prompts another question: in the current Cassandra daemon, is it possible
that a column (or more specifically its ByteBuffer) is still being held by a
read thread while it is being partially written by a write thread? In that case
the read thread could get a garbled result.

Thanks
Yang

On Fri, Jun 10, 2011 at 7:09 PM, Jonathan Ellis <jb...@gmail.com> wrote:

> Don't use destructive operations on the ByteBuffer; always use e.g.
> getLong(buffer.position())

Re: simple get_slice() gives error?

Posted by Yang <te...@gmail.com>.
I currently use buf.getLong(0), assuming the current position is 0 (instead of
using buf.getLong(buf.position())). My code seems to be working fine; I even
just added a check that throws an exception if the position is not 0.

But I think I have seen cases where the position is not zero, possibly due to
the slab allocator.

So do you remember whether there is some place in the Cassandra read path that
always copies the ByteBuffer into one that starts at 0? Or am I hitting bugs I
haven't noticed yet?
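
For concreteness, here is a small sketch (plain java.nio, not Cassandra's
actual slab allocator) of the case I am worried about: if the name buffer is
only a view into a larger region, getLong(0) silently reads the wrong bytes,
while getLong(buf.position()) stays correct.

    import java.nio.ByteBuffer;

    public class SlabViewSketch
    {
        public static void main(String[] args)
        {
            // Imitate a slab: one big backing buffer holding values back to back.
            ByteBuffer slab = ByteBuffer.allocate(64);
            slab.putLong(111L);              // bytes 0..7
            slab.putLong(222L);              // bytes 8..15  <- "our" column name

            // A view of the second value: same bytes, but position 8 and limit 16.
            ByteBuffer name = slab.duplicate();
            name.position(8);
            name.limit(16);

            System.out.println(name.getLong(0));                // 111 -- start of the slab, wrong
            System.out.println(name.getLong(name.position()));  // 222 -- correct
        }
    }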

Thanks
Yang

On Fri, Jun 10, 2011 at 7:09 PM, Jonathan Ellis <jb...@gmail.com> wrote:
> Don't use destructive operations on the ByteBuffer; always use e.g.
> getLong(buffer.position())

Re: simple get_slice() gives error?

Posted by Jonathan Ellis <jb...@gmail.com>.
Don't use destructive operations on the ByteBuffer; always use e.g.
getLong(buffer.position())
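
Applied to the snippet from the original message, that would be something along
these lines (illustrative, and it assumes nothing has already consumed the
buffer with relative reads):

    // Absolute read: neither depends on nor modifies the buffer's position.
    long ts = col.name.getLong(col.name.position());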

On Fri, Jun 10, 2011 at 8:47 PM, Yang <te...@gmail.com> wrote:
> I'm using thrift.CassandraServer directly within the same cassandra JVM to
> accomplish my application tasks.
> (I understand that this is not the normal usage mode...., but the error here
> may also be appearing in Cassandra server code
> development, so I thought it could be of some value to look at )
> I ran into some issues when I try to parse out values from column name
> ByteBuffer:
>
> something like the following
> List<ColumnOrSuperColumn> cols = cassandra_svr.get_slice(key, path,
> predicate, ConsistencyLevel.ONE);
> for(ColumnOrSuperColumn colOrSup: cols) {
> Column col = colOrSup.column;
> long ts = col.name.getLong();  <======
>
> sometimes the last sentence gives error "ByteBuffer underflow", I checked
> that the buffer size is 8 bytes, and the "pos" points to 8 already.  it
> seems that somewhere in
> thrift.CassandraServer line 179---195
>     public List<ColumnOrSuperColumn> thriftifyColumns(Collection<IColumn>
> columns, boolean reverseOrder)
>     {
> ...........................................................
>             {
>                 Column thrift_column = new
> Column(column.name()).setValue(column.value()).setTimestamp(column.timestamp());
> the ctor of Column(Column ) is used, which just uses the old column.name
> buffer, which also preserves the pos.
>
> is this some error on server code? (looks a flip() is needed somewhere)
>
> Thanks
> Yang



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com