Posted to user@cassandra.apache.org by Yan Chunlu <sp...@gmail.com> on 2011/08/22 07:08:45 UTC

get mycf['rowkey']['column_name'] returns 'Value was not found' in cassandra-cli

I connected to cassandra-cli and issued list mycf; I got:

RowKey: comments_62559
=> (column=76616c7565,
value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
timestamp=1312791934150273)


and
get mycf['comments_62559'] returns
(column=76616c7565,
value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
timestamp=1312791934150273)



but
get mycf['comments_62559'][76616c7565];

returns 'Value was not found'

Did I do something wrong?

Re: get mycf['rowkey']['column_name'] returns 'Value was not found' in cassandra-cli

Posted by Yan Chunlu <sp...@gmail.com>.
Thanks a lot!

On Mon, Aug 22, 2011 at 10:14 PM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
> On Mon, Aug 22, 2011 at 1:08 AM, Yan Chunlu <sp...@gmail.com> wrote:
>
>> connect to cassandra-cli and issue the list my cf I got
>>
>> RowKey: comments_62559
>> => (column=76616c7565,
>> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
>> timestamp=1312791934150273)
>>
>>
>> and using
>> get mycf['comments_62559'] could return
>> (column=76616c7565,
>> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
>> timestamp=1312791934150273)
>>
>>
>>
>> but
>> get mycf['comments_62559'][76616c7565];
>>
>> returns 'Value was not found'
>>
>> did I do something wrong?
>>
>
> Yes/Probably. Based on how you have defined your column families the data
> stored in your columns may be displayed differently. By default the storage
> is byte []. The Cli makes the decision to convert them to hex strings* (each
> major version 0.6. 0.7.X and 0.8.X of c* was selective about what it
> converted and why.)
>
> In any case there are two fixes:
> 1) Update the column family meta data and set the types correctly ASCII,
> UTF8, LONG, etc
>
> 2) Use the ASSUME keyword in the CLI to convert the rows to readable
> displays
> & when selecting columns use cli functions like : get
> CF[ascii('x')][ascii('y')] to make get what you are actually asking for.
>
> The CLI is more correct in current versions then it was in the past in
> regard to types and conversions, but if you do not define CF Meta Data it
> makes you scratch your head at times because it is not exactly clear that it
> is showing you a hex encoded byte [] and not an ascii string.
>
> Edward
>

Re: get mycf['rowkey']['column_name'] returns 'Value was not found' in cassandra-cli

Posted by Edward Capriolo <ed...@gmail.com>.
On Mon, Aug 22, 2011 at 1:08 AM, Yan Chunlu <sp...@gmail.com> wrote:

> connect to cassandra-cli and issue the list my cf I got
>
> RowKey: comments_62559
> => (column=76616c7565,
> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
> timestamp=1312791934150273)
>
>
> and using
> get mycf['comments_62559'] could return
> (column=76616c7565,
> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
> timestamp=1312791934150273)
>
>
>
> but
> get mycf['comments_62559'][76616c7565];
>
> returns 'Value was not found'
>
> did I do something wrong?
>

Yes, probably. Depending on how you have defined your column families, the
data stored in your columns may be displayed differently. By default the
storage type is byte[]. The CLI decides whether to convert values to hex
strings (each major version of Cassandra, 0.6.x, 0.7.x, and 0.8.x, was
selective about what it converted and why).

In any case there are two fixes:
1) Update the column family metadata and set the types correctly: ASCII,
UTF8, LONG, etc.

2) Use the ASSUME keyword in the CLI to display rows in a readable form,
and when selecting columns use CLI functions, e.g. get
CF[ascii('x')][ascii('y')], so that the get asks for what you actually mean.

The CLI is more correct in current versions than it was in the past with
regard to types and conversions, but if you do not define CF metadata it can
make you scratch your head at times, because it is not exactly clear whether
it is showing you a hex-encoded byte[] or an ASCII string.

Edward
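
(For illustration, a minimal cassandra-cli sketch of fix 2 against the column
family from the original question. It assumes the column name 76616c7565 is
really the ASCII string 'value', which is what its hex decoding suggests;
adjust the types to whatever your data actually is:)

assume mycf comparator as ascii;
assume mycf validator as ascii;
get mycf['comments_62559']['value'];

Or, without ASSUME, state the type explicitly in the request:

get mycf['comments_62559'][ascii('value')];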

Re: Recover from startup problems

Posted by Jonathan Ellis <jb...@gmail.com>.
Oh, right: my plan didn't work, because schema changes aren't really
commitlog-ified.  (So, commitlog replayed, but the schema didn't get
recreated to match its earlier state.)

Can you reproduce the old-fashioned way by sending the create + insert
commands, after nuking things?
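
(A hypothetical cassandra-cli sketch of such a create + insert sequence; the
keyspace and column family names below are made up, and the original report
inserted a raw ByteBuffer from a Thrift client, so this only approximates it:)

use myks;
create column family badcf;
set badcf[utf8('somekey')][utf8('somecol')] = utf8('somevalue');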

On Mon, Aug 22, 2011 at 10:35 PM, Dave Brosius <db...@mebigfatguy.com> wrote:
> 0.8.4 seems to fail to load in the same way. When i deleted the data
> directory, it appears to start up correctly, I see
>
> INFO 23:30:44,067 JNA not found. Native methods will be disabled.
>  INFO 23:30:44,100 Loading settings from
> file:/home/dave/apache-cassandra-0.8.4/conf/cassandra.yaml
>  INFO 23:30:44,387 DiskAccessMode 'auto' determined to be standard,
> indexAccessMode is standard
>  INFO 23:30:44,710 Global memtable threshold is enabled at 245MB
>  INFO 23:30:44,713 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:44,715 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:44,716 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:44,717 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:44,718 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:44,719 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:44,934 Creating new commitlog segment
> /var/lib/cassandra/commitlog/CommitLog-1314070244934.log
>  INFO 23:30:44,994 Couldn't detect any schema definitions in local storage.
>  INFO 23:30:45,001 Found table data in data directories. Consider using the
> CLI to define your schema.
>  INFO 23:30:45,004 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:45,011 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:45,012 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:45,013 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:45,014 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:45,015 Removing compacted SSTable files (see
> http://wiki.apache.org/cassandra/MemtableSSTable)
>  INFO 23:30:45,134 Replaying
> /var/lib/cassandra/commitlog/CommitLog-1313973548456.log
>  INFO 23:30:45,239 Finished reading
> /var/lib/cassandra/commitlog/CommitLog-1313973548456.log
>  INFO 23:30:45,244 Skipped 44 mutations from unknown (probably removed) CF
> with id 1001
>  INFO 23:30:45,256 Enqueuing flush of Memtable-LocationInfo@18930675(186/232
> serialized/live bytes, 5 ops)
>  INFO 23:30:45,259 Writing Memtable-LocationInfo@18930675(186/232
> serialized/live bytes, 5 ops)
>  INFO 23:30:45,274 Enqueuing flush of
> Memtable-Migrations@10140210(23897/29871 serialized/live bytes, 3 ops)
>  INFO 23:30:45,275 Enqueuing flush of Memtable-Schema@3564915(11042/13802
> serialized/live bytes, 12 ops)
>  INFO 23:30:45,578 Completed flushing
> /var/lib/cassandra/data/system/LocationInfo-g-1-Data.db (296 bytes)
>  INFO 23:30:45,602 Writing Memtable-Migrations@10140210(23897/29871
> serialized/live bytes, 3 ops)
>  INFO 23:30:45,854 Completed flushing
> /var/lib/cassandra/data/system/Migrations-g-1-Data.db (23961 bytes)
>  INFO 23:30:45,857 Writing Memtable-Schema@3564915(11042/13802
> serialized/live bytes, 12 ops)
>  INFO 23:30:46,080 Completed flushing
> /var/lib/cassandra/data/system/Schema-g-1-Data.db (11274 bytes)
>  INFO 23:30:46,095 Log replay complete, 12 replayed mutations
>  INFO 23:30:46,166 Enqueuing flush of Memtable-LocationInfo@17711949(29/36
> serialized/live bytes, 1 ops)
>  INFO 23:30:46,167 Writing Memtable-LocationInfo@17711949(29/36
> serialized/live bytes, 1 ops)
>  INFO 23:30:46,440 Completed flushing
> /var/lib/cassandra/data/system/LocationInfo-g-2-Data.db (80 bytes)
>  INFO 23:30:47,205 Upgrading to 0.7. Purging hints if there are any. Old
> hints will be snapshotted.
>  INFO 23:30:47,352 Cassandra version: 0.8.4
>  INFO 23:30:47,353 Thrift API version: 19.10.0
>  INFO 23:30:47,353 Loading persisted ring state
>  INFO 23:30:47,355 Starting up server gossip
>  INFO 23:30:47,360 Enqueuing flush of Memtable-LocationInfo@5604828(123/153
> serialized/live bytes, 2 ops)
>  INFO 23:30:47,361 Writing Memtable-LocationInfo@5604828(123/153
> serialized/live bytes, 2 ops)
>  INFO 23:30:47,608 Completed flushing
> /var/lib/cassandra/data/system/LocationInfo-g-3-Data.db (231 bytes)
>  INFO 23:30:47,658 Starting Messaging Service on localhost/127.0.0.1:7000
>  INFO 23:30:47,689 Using saved token 118080104480458226093045236055971412420
>  INFO 23:30:47,691 Enqueuing flush of Memtable-LocationInfo@11872808(53/66
> serialized/live bytes, 2 ops)
>  INFO 23:30:47,692 Writing Memtable-LocationInfo@11872808(53/66
> serialized/live bytes, 2 ops)
>  INFO 23:30:47,968 Completed flushing
> /var/lib/cassandra/data/system/LocationInfo-g-4-Data.db (163 bytes)
>  INFO 23:30:47,974 Node localhost/127.0.0.1 state jump to normal
>  INFO 23:30:47,978 Will not load MX4J, mx4j-tools.jar is not in the
> classpath
>
>
> On 08/22/2011 11:24 PM, Jonathan Ellis wrote:
>>
>> It's erroring out trying to load the schema itself, though, which
>> isn't supposed to happen.
>>
>> On Mon, Aug 22, 2011 at 10:14 PM, Dave Brosius<db...@mebigfatguy.com>
>>  wrote:
>>>
>>> Sure i'll try that, but I'm pretty sure it was creating a column family
>>> without any column meta data (types), then, client.insert'ing a
>>> ByteBuffer
>>> that wasn't based on bytes from a String.getBytes call.
>>>
>>>
>>>
>>> On 08/22/2011 11:09 AM, Jonathan Ellis wrote:
>>>>
>>>> Yes, you can blow away both the data and commitlog directories and
>>>> restart, but can you try these first to troubleshoot?
>>>>
>>>> 1. make a copy of the commitlog directory
>>>> 2. downgrade to 0.8 with no other changes, to see if it's something on
>>>> the new read path
>>>> 2a. if 0.8 starts up then we will fix the read code in trunk
>>>> 2b. if 0.8 doesn't start up either, remove the data directory but NOT
>>>> commitlog, and restart.  this will cause commitlog to be replayed --
>>>> with luck whatever is causing the problem is still in there, so if it
>>>> breaks again, we have a reproducible case
>>>>
>>>> Thanks!
>>>>
>>>> On Mon, Aug 22, 2011 at 1:16 AM, Dave Brosius<db...@mebigfatguy.com>
>>>>  wrote:
>>>>>
>>>>> Greetings, I'm running head from source, and now when i try to start up
>>>>> the
>>>>> database, i get the following exception which causes client connection
>>>>> failures. I'm fine with blowing away the database, just playing, but
>>>>> wanted
>>>>> to know if there is a safe way to do this.
>>>>>
>>>>> Exception encountered during startup.
>>>>> java.lang.RuntimeException: error reading 1 of 3
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
>>>>>    at
>>>>>
>>>>>
>>>>> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>>>>>    at
>>>>>
>>>>>
>>>>> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:194)
>>>>>    at
>>>>> org.apache.cassandra.utils.MergeIterator.<init>(MergeIterator.java:47)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:142)
>>>>>    at
>>>>> org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:66)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:96)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:221)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1285)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1169)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1120)
>>>>>    at
>>>>> org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:83)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:507)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:335)
>>>>>    at
>>>>>
>>>>> org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:91)
>>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:268)
>>>>>    at java.io.RandomAccessFile.readByte(RandomAccessFile.java:589)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:356)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:367)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:87)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)
>>>>>
>>>>
>>>
>>
>>
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: Recover from startup problems

Posted by Dave Brosius <db...@mebigfatguy.com>.
0.8.4 seems to fail to load in the same way. When I deleted the data
directory, it appears to start up correctly; I see:

INFO 23:30:44,067 JNA not found. Native methods will be disabled.
  INFO 23:30:44,100 Loading settings from 
file:/home/dave/apache-cassandra-0.8.4/conf/cassandra.yaml
  INFO 23:30:44,387 DiskAccessMode 'auto' determined to be standard, 
indexAccessMode is standard
  INFO 23:30:44,710 Global memtable threshold is enabled at 245MB
  INFO 23:30:44,713 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:44,715 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:44,716 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:44,717 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:44,718 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:44,719 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:44,934 Creating new commitlog segment 
/var/lib/cassandra/commitlog/CommitLog-1314070244934.log
  INFO 23:30:44,994 Couldn't detect any schema definitions in local storage.
  INFO 23:30:45,001 Found table data in data directories. Consider using 
the CLI to define your schema.
  INFO 23:30:45,004 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:45,011 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:45,012 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:45,013 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:45,014 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:45,015 Removing compacted SSTable files (see 
http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 23:30:45,134 Replaying 
/var/lib/cassandra/commitlog/CommitLog-1313973548456.log
  INFO 23:30:45,239 Finished reading 
/var/lib/cassandra/commitlog/CommitLog-1313973548456.log
  INFO 23:30:45,244 Skipped 44 mutations from unknown (probably removed) 
CF with id 1001
  INFO 23:30:45,256 Enqueuing flush of 
Memtable-LocationInfo@18930675(186/232 serialized/live bytes, 5 ops)
  INFO 23:30:45,259 Writing Memtable-LocationInfo@18930675(186/232 
serialized/live bytes, 5 ops)
  INFO 23:30:45,274 Enqueuing flush of 
Memtable-Migrations@10140210(23897/29871 serialized/live bytes, 3 ops)
  INFO 23:30:45,275 Enqueuing flush of 
Memtable-Schema@3564915(11042/13802 serialized/live bytes, 12 ops)
  INFO 23:30:45,578 Completed flushing 
/var/lib/cassandra/data/system/LocationInfo-g-1-Data.db (296 bytes)
  INFO 23:30:45,602 Writing Memtable-Migrations@10140210(23897/29871 
serialized/live bytes, 3 ops)
  INFO 23:30:45,854 Completed flushing 
/var/lib/cassandra/data/system/Migrations-g-1-Data.db (23961 bytes)
  INFO 23:30:45,857 Writing Memtable-Schema@3564915(11042/13802 
serialized/live bytes, 12 ops)
  INFO 23:30:46,080 Completed flushing 
/var/lib/cassandra/data/system/Schema-g-1-Data.db (11274 bytes)
  INFO 23:30:46,095 Log replay complete, 12 replayed mutations
  INFO 23:30:46,166 Enqueuing flush of 
Memtable-LocationInfo@17711949(29/36 serialized/live bytes, 1 ops)
  INFO 23:30:46,167 Writing Memtable-LocationInfo@17711949(29/36 
serialized/live bytes, 1 ops)
  INFO 23:30:46,440 Completed flushing 
/var/lib/cassandra/data/system/LocationInfo-g-2-Data.db (80 bytes)
  INFO 23:30:47,205 Upgrading to 0.7. Purging hints if there are any. 
Old hints will be snapshotted.
  INFO 23:30:47,352 Cassandra version: 0.8.4
  INFO 23:30:47,353 Thrift API version: 19.10.0
  INFO 23:30:47,353 Loading persisted ring state
  INFO 23:30:47,355 Starting up server gossip
  INFO 23:30:47,360 Enqueuing flush of 
Memtable-LocationInfo@5604828(123/153 serialized/live bytes, 2 ops)
  INFO 23:30:47,361 Writing Memtable-LocationInfo@5604828(123/153 
serialized/live bytes, 2 ops)
  INFO 23:30:47,608 Completed flushing 
/var/lib/cassandra/data/system/LocationInfo-g-3-Data.db (231 bytes)
  INFO 23:30:47,658 Starting Messaging Service on localhost/127.0.0.1:7000
  INFO 23:30:47,689 Using saved token 
118080104480458226093045236055971412420
  INFO 23:30:47,691 Enqueuing flush of 
Memtable-LocationInfo@11872808(53/66 serialized/live bytes, 2 ops)
  INFO 23:30:47,692 Writing Memtable-LocationInfo@11872808(53/66 
serialized/live bytes, 2 ops)
  INFO 23:30:47,968 Completed flushing 
/var/lib/cassandra/data/system/LocationInfo-g-4-Data.db (163 bytes)
  INFO 23:30:47,974 Node localhost/127.0.0.1 state jump to normal
  INFO 23:30:47,978 Will not load MX4J, mx4j-tools.jar is not in the 
classpath


On 08/22/2011 11:24 PM, Jonathan Ellis wrote:
> It's erroring out trying to load the schema itself, though, which
> isn't supposed to happen.
>
> On Mon, Aug 22, 2011 at 10:14 PM, Dave Brosius<db...@mebigfatguy.com>  wrote:
>> Sure i'll try that, but I'm pretty sure it was creating a column family
>> without any column meta data (types), then, client.insert'ing a ByteBuffer
>> that wasn't based on bytes from a String.getBytes call.
>>
>>
>>
>> On 08/22/2011 11:09 AM, Jonathan Ellis wrote:
>>> Yes, you can blow away both the data and commitlog directories and
>>> restart, but can you try these first to troubleshoot?
>>>
>>> 1. make a copy of the commitlog directory
>>> 2. downgrade to 0.8 with no other changes, to see if it's something on
>>> the new read path
>>> 2a. if 0.8 starts up then we will fix the read code in trunk
>>> 2b. if 0.8 doesn't start up either, remove the data directory but NOT
>>> commitlog, and restart.  this will cause commitlog to be replayed --
>>> with luck whatever is causing the problem is still in there, so if it
>>> breaks again, we have a reproducible case
>>>
>>> Thanks!
>>>
>>> On Mon, Aug 22, 2011 at 1:16 AM, Dave Brosius<db...@mebigfatguy.com>
>>>   wrote:
>>>> Greetings, I'm running head from source, and now when i try to start up
>>>> the
>>>> database, i get the following exception which causes client connection
>>>> failures. I'm fine with blowing away the database, just playing, but
>>>> wanted
>>>> to know if there is a safe way to do this.
>>>>
>>>> Exception encountered during startup.
>>>> java.lang.RuntimeException: error reading 1 of 3
>>>>     at
>>>>
>>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
>>>>     at
>>>>
>>>> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>>>>     at
>>>>
>>>> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
>>>>     at
>>>>
>>>> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:194)
>>>>     at
>>>> org.apache.cassandra.utils.MergeIterator.<init>(MergeIterator.java:47)
>>>>     at
>>>>
>>>> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:142)
>>>>     at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:66)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:96)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:221)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1285)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1169)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1120)
>>>>     at
>>>> org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:83)
>>>>     at
>>>>
>>>> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:507)
>>>>     at
>>>>
>>>> org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
>>>>     at
>>>>
>>>> org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:335)
>>>>     at
>>>> org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:91)
>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>     at
>>>>
>>>> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:268)
>>>>     at java.io.RandomAccessFile.readByte(RandomAccessFile.java:589)
>>>>     at
>>>>
>>>> org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:356)
>>>>     at
>>>>
>>>> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:367)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:87)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
>>>>     at
>>>>
>>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)
>>>>
>>>
>>
>
>


Re: Recover from startup problems

Posted by Jonathan Ellis <jb...@gmail.com>.
It's erroring out trying to load the schema itself, though, which
isn't supposed to happen.

On Mon, Aug 22, 2011 at 10:14 PM, Dave Brosius <db...@mebigfatguy.com> wrote:
> Sure i'll try that, but I'm pretty sure it was creating a column family
> without any column meta data (types), then, client.insert'ing a ByteBuffer
> that wasn't based on bytes from a String.getBytes call.
>
>
>
> On 08/22/2011 11:09 AM, Jonathan Ellis wrote:
>>
>> Yes, you can blow away both the data and commitlog directories and
>> restart, but can you try these first to troubleshoot?
>>
>> 1. make a copy of the commitlog directory
>> 2. downgrade to 0.8 with no other changes, to see if it's something on
>> the new read path
>> 2a. if 0.8 starts up then we will fix the read code in trunk
>> 2b. if 0.8 doesn't start up either, remove the data directory but NOT
>> commitlog, and restart.  this will cause commitlog to be replayed --
>> with luck whatever is causing the problem is still in there, so if it
>> breaks again, we have a reproducible case
>>
>> Thanks!
>>
>> On Mon, Aug 22, 2011 at 1:16 AM, Dave Brosius<db...@mebigfatguy.com>
>>  wrote:
>>>
>>> Greetings, I'm running head from source, and now when i try to start up
>>> the
>>> database, i get the following exception which causes client connection
>>> failures. I'm fine with blowing away the database, just playing, but
>>> wanted
>>> to know if there is a safe way to do this.
>>>
>>> Exception encountered during startup.
>>> java.lang.RuntimeException: error reading 1 of 3
>>>    at
>>>
>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
>>>    at
>>>
>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
>>>    at
>>>
>>> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>>>    at
>>>
>>> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>>>    at
>>>
>>> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
>>>    at
>>>
>>> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:194)
>>>    at
>>> org.apache.cassandra.utils.MergeIterator.<init>(MergeIterator.java:47)
>>>    at
>>>
>>> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:142)
>>>    at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:66)
>>>    at
>>>
>>> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:96)
>>>    at
>>>
>>> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:221)
>>>    at
>>>
>>> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1285)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1169)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1120)
>>>    at
>>> org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:83)
>>>    at
>>>
>>> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:507)
>>>    at
>>>
>>> org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
>>>    at
>>>
>>> org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:335)
>>>    at
>>> org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:91)
>>> Caused by: java.nio.channels.ClosedChannelException
>>>    at
>>>
>>> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:268)
>>>    at java.io.RandomAccessFile.readByte(RandomAccessFile.java:589)
>>>    at
>>>
>>> org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:356)
>>>    at
>>>
>>> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:367)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:87)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
>>>    at
>>>
>>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
>>>    at
>>>
>>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)
>>>
>>
>>
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: Recover from startup problems

Posted by Dave Brosius <db...@mebigfatguy.com>.
Sure, I'll try that, but I'm pretty sure it was creating a column family
without any column metadata (types), then client.insert'ing a
ByteBuffer that wasn't based on bytes from a String.getBytes call.



On 08/22/2011 11:09 AM, Jonathan Ellis wrote:
> Yes, you can blow away both the data and commitlog directories and
> restart, but can you try these first to troubleshoot?
>
> 1. make a copy of the commitlog directory
> 2. downgrade to 0.8 with no other changes, to see if it's something on
> the new read path
> 2a. if 0.8 starts up then we will fix the read code in trunk
> 2b. if 0.8 doesn't start up either, remove the data directory but NOT
> commitlog, and restart.  this will cause commitlog to be replayed --
> with luck whatever is causing the problem is still in there, so if it
> breaks again, we have a reproducible case
>
> Thanks!
>
> On Mon, Aug 22, 2011 at 1:16 AM, Dave Brosius<db...@mebigfatguy.com>  wrote:
>> Greetings, I'm running head from source, and now when i try to start up the
>> database, i get the following exception which causes client connection
>> failures. I'm fine with blowing away the database, just playing, but wanted
>> to know if there is a safe way to do this.
>>
>> Exception encountered during startup.
>> java.lang.RuntimeException: error reading 1 of 3
>>     at
>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
>>     at
>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
>>     at
>> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>>     at
>> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>>     at
>> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
>>     at
>> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:194)
>>     at org.apache.cassandra.utils.MergeIterator.<init>(MergeIterator.java:47)
>>     at
>> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:142)
>>     at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:66)
>>     at
>> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:96)
>>     at
>> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:221)
>>     at
>> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
>>     at
>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1285)
>>     at
>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1169)
>>     at
>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1120)
>>     at org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:83)
>>     at
>> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:507)
>>     at
>> org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
>>     at
>> org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:335)
>>     at
>> org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:91)
>> Caused by: java.nio.channels.ClosedChannelException
>>     at
>> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:268)
>>     at java.io.RandomAccessFile.readByte(RandomAccessFile.java:589)
>>     at
>> org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:356)
>>     at
>> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:367)
>>     at
>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:87)
>>     at
>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
>>     at
>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
>>     at
>> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
>>     at
>> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)
>>
>
>


Re: Recover from startup problems

Posted by Jonathan Ellis <jb...@gmail.com>.
Yes, you can blow away both the data and commitlog directories and
restart, but can you try these first to troubleshoot?

1. make a copy of the commitlog directory
2. downgrade to 0.8 with no other changes, to see if it's something on
the new read path
2a. if 0.8 starts up then we will fix the read code in trunk
2b. if 0.8 doesn't start up either, remove the data directory but NOT
commitlog, and restart.  this will cause commitlog to be replayed --
with luck whatever is causing the problem is still in there, so if it
breaks again, we have a reproducible case

Thanks!
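
(A minimal shell sketch of steps 1 and 2b, assuming the default
/var/lib/cassandra data and commitlog locations shown in the logs above;
adjust the paths and the startup command to your install:)

# step 1: keep a copy of the commitlog
cp -a /var/lib/cassandra/commitlog /var/lib/cassandra/commitlog.bak

# step 2b: remove the data directory but NOT the commitlog, then restart
rm -rf /var/lib/cassandra/data
bin/cassandra -f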

On Mon, Aug 22, 2011 at 1:16 AM, Dave Brosius <db...@mebigfatguy.com> wrote:
> Greetings, I'm running head from source, and now when i try to start up the
> database, i get the following exception which causes client connection
> failures. I'm fine with blowing away the database, just playing, but wanted
> to know if there is a safe way to do this.
>
> Exception encountered during startup.
> java.lang.RuntimeException: error reading 1 of 3
>    at
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
>    at
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
>    at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>    at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>    at
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
>    at
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:194)
>    at org.apache.cassandra.utils.MergeIterator.<init>(MergeIterator.java:47)
>    at
> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:142)
>    at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:66)
>    at
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:96)
>    at
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:221)
>    at
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
>    at
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1285)
>    at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1169)
>    at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1120)
>    at org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:83)
>    at
> org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:507)
>    at
> org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
>    at
> org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:335)
>    at
> org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:91)
> Caused by: java.nio.channels.ClosedChannelException
>    at
> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:268)
>    at java.io.RandomAccessFile.readByte(RandomAccessFile.java:589)
>    at
> org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:356)
>    at
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:367)
>    at
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:87)
>    at
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
>    at
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
>    at
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
>    at
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Recover from startup problems

Posted by Dave Brosius <db...@mebigfatguy.com>.
Greetings, I'm running HEAD from source, and now when I try to start up
the database, I get the following exception, which causes client
connection failures. I'm fine with blowing away the database (just
playing), but wanted to know if there is a safe way to do this.

Exception encountered during startup.
java.lang.RuntimeException: error reading 1 of 3
     at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
     at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
     at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
     at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
     at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
     at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:194)
     at 
org.apache.cassandra.utils.MergeIterator.<init>(MergeIterator.java:47)
     at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:142)
     at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:66)
     at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:96)
     at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:221)
     at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
     at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1285)
     at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1169)
     at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1120)
     at org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:83)
     at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:507)
     at 
org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
     at 
org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:335)
     at 
org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:91)
Caused by: java.nio.channels.ClosedChannelException
     at 
org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:268)
     at java.io.RandomAccessFile.readByte(RandomAccessFile.java:589)
     at 
org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:356)
     at 
org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:367)
     at 
org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:87)
     at 
org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
     at 
org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
     at 
org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
     at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)

Re: get mycf['rowkey']['column_name'] returns 'Value was not found' in cassandra-cli

Posted by Yan Chunlu <sp...@gmail.com>.
The cassandra-cli version I am using is the one shipped with the cassandra 0.7.4
package;

but I could get results using the column name "14np_20nl":
get mycf2[14np][14np_20nl];




On Mon, Aug 22, 2011 at 1:20 PM, Jonathan Ellis <jb...@gmail.com> wrote:

> My guess: you're using an old version of the cli that isn't dealing
> with bytestype column names correctly
>
> On Mon, Aug 22, 2011 at 12:08 AM, Yan Chunlu <sp...@gmail.com>
> wrote:
> > connect to cassandra-cli and issue the list my cf I got
> > RowKey: comments_62559
> > => (column=76616c7565,
> >
> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
> > timestamp=1312791934150273)
> >
> > and using
> > get mycf['comments_62559'] could return
> > (column=76616c7565,
> >
> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
> > timestamp=1312791934150273)
> >
> >
> > but
> > get mycf['comments_62559'][76616c7565];
> > returns 'Value was not found'
> > did I do something wrong?
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>

Re: get mycf['rowkey']['column_name'] returns 'Value was not found' in cassandra-cli

Posted by Jonathan Ellis <jb...@gmail.com>.
My guess: you're using an old version of the CLI that isn't dealing
with BytesType column names correctly.

On Mon, Aug 22, 2011 at 12:08 AM, Yan Chunlu <sp...@gmail.com> wrote:
> connect to cassandra-cli and issue the list my cf I got
> RowKey: comments_62559
> => (column=76616c7565,
> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
> timestamp=1312791934150273)
>
> and using
> get mycf['comments_62559'] could return
> (column=76616c7565,
> value=28286c70310a4c3236373632334c0a614c3236373733304c0a614c3236373737304c0a614c3236373932324c0a614c3236373934364c0a614c3236383137314c0a614c3236383330334c0a614c3236383934314c0a614c3236383938394c0,
> timestamp=1312791934150273)
>
>
> but
> get mycf['comments_62559'][76616c7565];
> returns 'Value was not found'
> did I do something wrong?



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com