Posted to commits@cassandra.apache.org by "Mck SembWever (Issue Comment Edited) (JIRA)" <ji...@apache.org> on 2011/10/30 20:46:32 UTC

[jira] [Issue Comment Edited] (CASSANDRA-3150) ColumnFormatRecordReader loops forever (StorageService.getSplits(..) out of whack)

    [ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13139728#comment-13139728 ] 

Mck SembWever edited comment on CASSANDRA-3150 at 10/30/11 7:46 PM:
--------------------------------------------------------------------

Back after an upgrade to cassandra-1.0.0

Example job start logs:{noformat}[ INFO] 20:39:21  Restricting input range: 3589a548d20f80a7b41368b59973bcbc -- 36f0bedaf02a49a3b41368b59973bcbc []  at no.finntech.countstats.reduce.rolled.internal.GenericCountAggregation.configureCountToAggregateMapper(GenericCountAggregation.java:222)
[ INFO] 20:39:21  Corresponding time range is Sun Oct 30 00:00:00 CEST 2011 (1319925600000) -- Sun Oct 30 23:00:00 CET 2011 (1320012000000) []  at no.finntech.countstats.reduce.rolled.internal.GenericCountAggregation.configureCountToAggregateMapper(GenericCountAggregation.java:225)
[ INFO] 20:39:21  Starting AdIdCountAggregation-phase1-DAY_2011303 ( to=1320002999000)) []  at no.finntech.countstats.reduce.rolled.internal.GenericCountAggregation.run(GenericCountAggregation.java:142)
[DEBUG] 20:39:21  adding ColumnFamilySplit{startToken='3589a548d20f80a7b41368b59973bcbc', endToken='36f0bedaf02a49a3b41368b59973bcbc', dataNodes=[0.0.0.0]} []  at org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:210)
{noformat}

This split in fact contains ~40 million rows, so with a split size of 393216 I'd expect ~100 splits to be returned.

I'm also very confused by {{dataNodes=[0.0.0.0]}}. This looks wrong: I know the data lies on the third node (cassandra03), whereas the job is running on the first node (cassandra01).
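For reference, the expected split count above follows from simple division of rows by split size; a minimal sketch (class and method names are mine, not Cassandra's):

```java
public class SplitMath {
    // Expected number of splits: total rows divided by the configured
    // input split size, rounded up to cover the remainder.
    static long expectedSplits(long totalRows, long splitSize) {
        return (totalRows + splitSize - 1) / splitSize;
    }

    public static void main(String[] args) {
        // ~40 million rows with a split size of 393216 => 102 splits
        System.out.println(expectedSplits(40_000_000L, 393_216L)); // prints 102
    }
}
```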
                
> ColumnFormatRecordReader loops forever (StorageService.getSplits(..) out of whack)
> ----------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-3150
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3150
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 0.8.4, 0.8.5
>            Reporter: Mck SembWever
>            Assignee: Mck SembWever
>            Priority: Critical
>             Fix For: 0.8.6
>
>         Attachments: CASSANDRA-3150.patch, Screenshot-Counters for task_201109212019_1060_m_000029 - Mozilla Firefox.png, Screenshot-Hadoop map task list for job_201109212019_1060 on cassandra01 - Mozilla Firefox.png, attempt_201109071357_0044_m_003040_0.grep-get_range_slices.log, fullscan-example1.log
>
>
> From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039
> {quote}
> bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner
> bq. CFIF's inputSplitSize=196608
> bq. 3 map tasks (from 4013) is still running after read 25 million rows.
> bq. Can this be a bug in StorageService.getSplits(..) ?
> getSplits looks pretty foolproof to me but I guess we'd need to add
> more debug logging to rule out a bug there for sure.
> I guess the main alternative would be a bug in the recordreader paging.
> {quote}
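To illustrate the second suspicion above — a bug in the record reader's paging — here is a deliberately simplified, hypothetical sketch of a key-range paging loop (not Cassandra's actual ColumnFormatRecordReader code): each page must advance the exclusive start key past the last row returned, or the same page is fetched forever.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PagingSketch {
    // Hypothetical page fetch: returns up to 'batch' keys from 'rows'
    // strictly after 'startExclusive' (-1 means "from the beginning").
    static List<Integer> fetchPage(List<Integer> rows, int startExclusive, int batch) {
        return rows.stream()
                   .filter(k -> k > startExclusive)
                   .limit(batch)
                   .collect(Collectors.toList());
    }

    // Correct paging: the start key advances to the last key of each page.
    // If the advance were ever skipped, fetchPage would return the same
    // page again and this loop would never terminate.
    static int countAll(List<Integer> rows, int batch) {
        int count = 0;
        int start = -1;
        while (true) {
            List<Integer> page = fetchPage(rows, start, batch);
            if (page.isEmpty()) {
                break; // no rows past 'start': the range is exhausted
            }
            count += page.size();
            start = page.get(page.size() - 1); // advance past the last row seen
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countAll(List.of(1, 2, 3, 4, 5, 6, 7), 3)); // prints 7
    }
}
```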

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira