Posted to user@cassandra.apache.org by Kant Kodali <ka...@peernova.com> on 2016/11/01 01:08:14 UTC

Re: question on an article

Hi Peter,

Thanks for sending this over. I don't see how 100 bytes (10 bytes of data *
10 columns) can represent anything useful. These days it is better to
benchmark with payloads of around 1KB.

Thanks!

On Mon, Oct 31, 2016 at 4:58 PM, Peter Reilly <pe...@gmail.com>
wrote:

> The original article
> http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
>
>
> On Mon, Oct 31, 2016 at 5:57 PM, Peter Reilly <peter.kitt.reilly@gmail.com
> > wrote:
>
>> From the article:
>> java -jar stress.jar -d "144 node ids" -e ONE -n 27000000 -l 3 -i 1 -t
>> 200 -p 7102 -o INSERT -c 10 -r
>>
>> The client is writing 10 columns per row key, row key randomly chosen
>> from 27 million ids, each column has a key and 10 bytes of data. The total
>> on disk size for each write including all overhead is about 400 bytes.
>>
>> Not too sure about the batching - it may be one of the parameters to
>> stress.jar.
>>
>> Peter
>>
>> On Mon, Oct 31, 2016 at 4:07 PM, Kant Kodali <ka...@peernova.com> wrote:
>>
>>> Hi Guys,
>>>
>>>
>>> I keep reading the articles below, but the biggest questions for me are
>>> as follows:
>>>
>>> 1) What is the "data size" per request? Without the data size it is hard
>>> for me to see anything sensible.
>>> 2) Is there batching here?
>>>
>>> http://www.datastax.com/1-million-writes
>>>
>>> http://techblog.netflix.com/2014/07/revisiting-1-million-writes-per-second.html
>>>
>>> Thanks!
>>>
>>>
>>>
>>>
>>
>
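For what it's worth, a quick back-of-envelope check of the per-write sizes quoted above. The 100-byte raw payload and the ~400-byte on-disk figure come from the article itself; splitting the difference into "overhead" is my own arithmetic, not something the article states:

```python
# Sizes implied by the stress.jar flags quoted above:
#   -c 10 -> 10 columns per row, each with 10 bytes of data.
COLUMNS_PER_ROW = 10
DATA_BYTES_PER_COLUMN = 10

payload = COLUMNS_PER_ROW * DATA_BYTES_PER_COLUMN  # raw data per write
on_disk = 400                                      # total on-disk size per write, per the article
overhead = on_disk - payload                       # column keys, timestamps, row/sstable overhead

print(f"raw payload per write: {payload} bytes")
print(f"implied overhead per write: {overhead} bytes ({overhead / on_disk:.0%} of the on-disk size)")
```

So by the article's own numbers, only about a quarter of each on-disk write is user data, which is the gap Kant is pointing at when he asks about benchmarking with ~1KB payloads.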

Re: "java.io.IOError: java.io.EOFException: EOF after 13889 bytes out of 460861" occurred when I query from a table

Posted by 赵升/赵荣生 <ro...@qq.com>.
The stack trace is:





------------------ Original Message ------------------
From: "赵升/赵荣生";<ro...@qq.com>;
Sent: Tuesday, November 1, 2016, 9:41 AM
To: "user"<us...@cassandra.apache.org>;

Subject: "java.io.IOError: java.io.EOFException: EOF after 13889 bytes out of 460861" occurred when I query from a table



Hi, all
    I have a problem. I created a table named "tblA" in C* and created a materialized view named "viewA" on tblA. I run a Spark job to process data from 'viewA'.
    In the beginning it worked well, but the next day the Spark job failed. And when I select data from 'viewA' and 'tblA' using CQL, it throws the following exceptions.
    query from viewA:
         "ServerError: <ErrorMessage code=0000 [Server error] message="java.lang.ArrayIndexOutOfBoundsException">"
    and query from tblA:
         "ServerError: <ErrorMessage code=0000 [Server error] message="java.io.IOError: java.io.EOFException: EOF after 13889 bytes out of 460861">"


    My system versions are:
        Cassandra 3.7 + Spark 1.6.2 + Spark Cassandra Connector 1.6


Does anyone know about this problem? I look forward to your reply.


Thanks

"java.io.IOError: java.io.EOFException: EOF after 13889 bytes out of 460861" occurred when I query from a table

Posted by 赵升/赵荣生 <ro...@qq.com>.
Hi, all
    I have a problem. I created a table named "tblA" in C* and created a materialized view named "viewA" on tblA. I run a Spark job to process data from 'viewA'.
    In the beginning it worked well, but the next day the Spark job failed. And when I select data from 'viewA' and 'tblA' using CQL, it throws the following exceptions.
    query from viewA:
         "ServerError: <ErrorMessage code=0000 [Server error] message="java.lang.ArrayIndexOutOfBoundsException">"
    and query from tblA:
         "ServerError: <ErrorMessage code=0000 [Server error] message="java.io.IOError: java.io.EOFException: EOF after 13889 bytes out of 460861">"


    My system versions are:
        Cassandra 3.7 + Spark 1.6.2 + Spark Cassandra Connector 1.6


Does anyone know about this problem? I look forward to your reply.


Thanks