Posted to user@bookkeeper.apache.org by dennis zhuang <ki...@gmail.com> on 2011/11/27 17:48:07 UTC

Could the bookie server lose data between flushes?

Hi, to add an entry to a bookie server, there are four steps, as follows:

   1. Append the entry to the *Entry Log* and get back its position { logId
   , offset } ;
   2. Update the index for this entry in the *Ledger Cache* ;
   3. Append a transaction for this update to the *Journal* ;
   4. Respond to the BookKeeper client.

The EntryLogger, Journal, and Index files are not forced to the device right
away; they wait for the SyncThread to flush them.
My question is: could the bookie server lose data between flushes?
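
The four steps can be sketched as a toy in-memory model; the class and field
names below are illustrative only, not BookKeeper's real internals:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the four-step add-entry path described above.
// All names are illustrative, not BookKeeper's actual classes.
class ToyBookie {
    final List<byte[]> entryLog = new ArrayList<>();          // stand-in for entry log files
    final Map<String, Integer> ledgerCache = new HashMap<>(); // "ledgerId:entryId" -> offset
    final List<String> journal = new ArrayList<>();           // stand-in for journal files

    /** Returns the entry's position; the response (step 4) follows the journal append (step 3). */
    int addEntry(long ledgerId, long entryId, byte[] data) {
        int offset = entryLog.size();
        entryLog.add(data);                                // 1. append entry to the entry log
        ledgerCache.put(ledgerId + ":" + entryId, offset); // 2. update the index in the ledger cache
        journal.add("ADD " + ledgerId + ":" + entryId);    // 3. journal the update
        return offset;                                     // 4. respond to the client
    }
}
```

The point of the ordering is that the acknowledgement only happens after the
update has been appended to the journal.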


-- 
庄晓丹
Email:        killme2008@gmail.com
伯岩 (alias)  boyan@taobao.com
Site:           http://fnil.net

Taobao (China) Software Co., Ltd. / Product & Technology Dept. / Java Middleware

Re: Could the bookie server lose data between flushes?

Posted by dennis zhuang <ki...@gmail.com>.
Hmm, I see.
Sorry, I didn't notice the difference between the SyncThread and the bookie
thread. The bookie thread doesn't wait.

On Nov 28, 2011, at 1:20 PM, Sijie Guo <gu...@gmail.com> wrote:

>
> Responses are sent immediately after entries are persisted to the journal
> files (in the bookie thread); they do not wait until the index is flushed
> from the ledger cache to the index files (in the SyncThread).
>
> Thanks,
> Sijie
>
> 2011/11/28 dennis zhuang <ki...@gmail.com>
>
>> I still have a question about performance.
>>
>> If the bookie server responded to the client only after the SyncThread
>> flushed data to disk, the client would have to wait at least the flush
>> interval for a response.
>> If I set flush_interval to 100ms, then a single thread can only add 10
>> entries per second. I know I can use asyncAddEntry, but I need the
>> synchronous addEntry method to wait for the result and ensure the entry
>> is recorded in BookKeeper.
>>
>> Am I right? Or do you have any suggestions?

Re: Could the bookie server lose data between flushes?

Posted by Sijie Guo <gu...@gmail.com>.
Responses are sent immediately after entries are persisted to the journal
files (in the bookie thread); they do not wait until the index is flushed
from the ledger cache to the index files (in the SyncThread).
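
A back-of-the-envelope check of the worst case raised earlier in the thread:
the cap below only applies in the hypothetical where acks had to wait for a
periodic flush, which is not what the bookie actually does.

```java
// If acks waited for the periodic flush, a single synchronous writer would
// be capped at 1000 / flushIntervalMs adds per second (hypothetical case).
class FlushMath {
    static long maxSyncAddsPerSecond(long flushIntervalMs) {
        return 1000 / flushIntervalMs; // one outstanding add per flush interval
    }
}
```

With flush_interval = 100ms this gives the 10 adds/second figure from the
question; since acks actually follow the journal write, that bound does not
apply.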

Thanks,
Sijie


Re: Could the bookie server lose data between flushes?

Posted by Sijie Guo <gu...@gmail.com>.
This is a good question. In theory, this could happen.
But I expect it to be a rare case in practice, since the maximum size of a
znode in ZooKeeper is 1MB. Suppose BookKeeper is configured with an ensemble
size of 3; then ~100 bytes is enough to store one ensemble's information, so
a LedgerMetadata can hold ~10000 ensembles, which I think is enough for a
ledger's write life cycle.

BTW, a temporary workaround for this situation (if it did happen) is to
close the ledger when necessary (which makes the ledger read-only, so no
more metadata is appended) and open a new one to write to.
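
The capacity estimate can be checked with simple arithmetic, using the
figures from this message (1MB znode limit, ~100 bytes per ensemble entry):

```java
// Rough capacity of a LedgerMetadata znode, using the figures quoted above.
class ZnodeMath {
    static final int ZNODE_LIMIT_BYTES = 1 << 20; // ZooKeeper's default max znode size, 1MB
    static final int BYTES_PER_ENSEMBLE = 100;    // rough size of one ensemble entry (ensemble size 3)

    static int maxEnsembles() {
        return ZNODE_LIMIT_BYTES / BYTES_PER_ENSEMBLE; // ~10000 ensembles
    }
}
```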

Thanks,
Sijie


Re: Could the bookie server lose data between flushes?

Posted by dennis zhuang <ki...@gmail.com>.
It seems that when the client fails to add an entry to one bookie server, it
selects a replacement bookie server, retries the add against the new server,
and then records this change in the ensembles map in the LedgerMetadata.
I am worried that this map will keep growing until it reaches ZooKeeper's
size limit, because the metadata is stored in ZooKeeper. Could that happen?
Or is there a solution for this situation?
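
The ensembles map described above can be pictured as a map from the first
entry id an ensemble covers to its bookie list; the sketch below is a
simplification, not LedgerMetadata's actual code, and the bookie names are
placeholders:

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;

// Simplified sketch of the ensembles map: first entry id covered -> bookies.
class ToyLedgerMetadata {
    final TreeMap<Long, List<String>> ensembles = new TreeMap<>();

    /** Records a new ensemble serving entries from firstEntryId onward. */
    void addEnsemble(long firstEntryId, String... bookies) {
        ensembles.put(firstEntryId, Arrays.asList(bookies));
    }

    /** Looks up which ensemble stored a given entry. */
    List<String> ensembleFor(long entryId) {
        return ensembles.floorEntry(entryId).getValue();
    }
}
```

Each bookie failure during writing adds one more map entry, which is why the
metadata grows with every ensemble change.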


Re: Could the bookie server lose data between flushes?

Posted by Sijie Guo <gu...@gmail.com>.
Currently BookKeeper doesn't provide automatic re-replication, since it can
be difficult to tell whether a bookie server failed because of disk damage
or some other issue.
Instead, BookKeeper only supports manual re-replication for now. There is a
BookKeeperTools class provided in the tools package, used to manually
recover a failed bookie's data onto other (optionally specified) bookies.
You can try "BookKeeperTools zkServers bookieSrc [bookieDest]" :-)
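
Assuming the tool is the BookKeeperTools class in BookKeeper's tools package
(the fully-qualified class name, jar path, hostnames, and ports below are
placeholders for a real deployment), an invocation might look like:

```shell
# Re-replicate ledger fragments from a failed bookie onto a target bookie.
# Arguments follow the usage quoted above: zkServers bookieSrc [bookieDest].
# Classpath, hostnames, and ports are placeholders for your deployment.
java -cp bookkeeper-server.jar org.apache.bookkeeper.tools.BookKeeperTools \
    zk1.example.com:2181 \
    failed-bookie.example.com:3181 \
    new-bookie.example.com:3181
```

Omitting the last argument would let the tool pick destination bookies
itself, per the optional [bookieDest] in the usage string.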

Thanks,
Sijie


Re: Could the bookie server lose data between flushes?

Posted by dennis zhuang <ki...@gmail.com>.
Thanks for your answer.
Another question about BookKeeper: when a bookie server fails permanently
(disk damage, etc.), will BookKeeper automatically re-replicate its entries
to other bookie servers? Or does it just let them go, so that some entries
lose replicas?


Re: Could the bookie server lose data between flushes?

Posted by Samuel Guo <gu...@gmail.com>.
Hello dennis,

The SyncThread only flushes the entry logs and index files, not the journal
files.

Step 4 happens only after the entries have been flushed to the journal files
on disk, which means that by the time the BookKeeper client receives a
response, the entries are already persisted in the journal files.

The index may still be in the ledger cache, not yet persisted, when the
bookie server shuts down or crashes. But that is OK: when the bookie server
restarts, it can replay the entries persisted in the journal files to
recover the index.

So no entries are lost once the BookKeeper client has received their
responses.

You can read the 'Data Management in Bookie Server' section in
doc/bookkeeperOverview.textile for reference.
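
The replay described above can be sketched with a toy model (illustrative
names, not the real Bookie code): the journal survives a crash because it
was forced to disk before the ack, while the unflushed in-memory index is
lost and then rebuilt from the journal.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy crash/recovery sketch: the journal is durable, the index cache is not.
// All names are illustrative, not BookKeeper's real internals.
class ToyRecovery {
    final List<long[]> journal = new ArrayList<>(); // durable {ledgerId, entryId, offset}
    Map<String, Long> index = new HashMap<>();      // volatile ledger-cache index

    void addEntry(long ledgerId, long entryId, long offset) {
        index.put(ledgerId + ":" + entryId, offset);        // cached index update
        journal.add(new long[]{ledgerId, entryId, offset}); // journaled before acking
    }

    void crash() {
        index = new HashMap<>(); // unflushed index is lost with the process
    }

    void replayJournal() {
        for (long[] txn : journal) {                 // rebuild the index from the journal
            index.put(txn[0] + ":" + txn[1], txn[2]);
        }
    }
}
```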

Thanks,
Sijie
