Posted to issues@hbase.apache.org by "ryan rawson (JIRA)" <ji...@apache.org> on 2011/03/04 01:39:36 UTC
[jira] Reopened: (HBASE-3514) Speedup HFile.Writer append
[ https://issues.apache.org/jira/browse/HBASE-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ryan rawson reopened HBASE-3514:
--------------------------------
Not fixed yet: it is causing test failures and has a problem with the design.
> Speedup HFile.Writer append
> ---------------------------
>
> Key: HBASE-3514
> URL: https://issues.apache.org/jira/browse/HBASE-3514
> Project: HBase
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.90.0
> Reporter: Matteo Bertozzi
> Priority: Minor
> Attachments: HBASE-3514-append-0.90-v2.patch, HBASE-3514-append-0.90-v2b.patch, HBASE-3514-append-0.90-v3.patch, HBASE-3514-append-0.90.patch, HBASE-3514-append-trunk-v2.patch, HBASE-3514-append-trunk-v2b.patch, HBASE-3514-append-trunk-v3.patch, HBASE-3514-append.patch, HBASE-3514-metaBlock-bsearch.patch
>
>
> Remove the double write that happens when the block cache is specified, by using only the ByteArrayDataStream.
> The baos is flushed through the compression stream on finishBlock.
> On my machine, HFilePerformanceEvaluation SequentialWriteBenchmark drops from ~4000ms to ~2500ms.
> Before the patch:
> Running SequentialWriteBenchmark for 1000000 rows took 4247ms.
> Running SequentialWriteBenchmark for 1000000 rows took 4512ms.
> Running SequentialWriteBenchmark for 1000000 rows took 4498ms.
> With the patch:
> Running SequentialWriteBenchmark for 1000000 rows took 2697ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2770ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2721ms.
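The buffering idea described above can be sketched as follows. This is a minimal illustration, not the actual HFile.Writer code: the class and method names below are hypothetical, and GZIP merely stands in for whatever compression codec is configured. The point is that append() writes each key/value only into an in-memory buffer, and finishBlock() pushes the whole buffered block through the compression stream in a single pass, instead of writing every record twice.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the single-buffer block writer pattern.
public class BlockWriterSketch {
    // In-memory buffer for the current block (the "baos" from the description).
    private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
    private final DataOutputStream out = new DataOutputStream(baos);
    // Stands in for the filesystem output stream.
    private final ByteArrayOutputStream file = new ByteArrayOutputStream();

    // append() touches only the in-memory buffer -- no double write.
    public void append(byte[] key, byte[] value) throws IOException {
        out.writeInt(key.length);
        out.write(key);
        out.writeInt(value.length);
        out.write(value);
    }

    // finishBlock() compresses the buffered block in one pass and
    // resets the buffer for the next block. Returns the uncompressed size.
    public int finishBlock() throws IOException {
        out.flush();
        int uncompressed = baos.size();
        GZIPOutputStream gz = new GZIPOutputStream(file);
        baos.writeTo(gz);   // single pass over the buffered block
        gz.finish();
        baos.reset();
        return uncompressed;
    }
}
```

Since the block contents stay in the baos until finishBlock, the same buffer can also back the block cache entry, which is what removes the second copy of every appended record.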
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira