Posted to user@couchdb.apache.org by Robert Buck <bu...@gmail.com> on 2010/05/26 20:37:18 UTC

Newbie question: compaction and mvcc consistency?

Here is a newbie question for you that relates to MVCC, compaction, and versioning.

I read that old versions of documents are cleaned up upon compaction.
I read that every b-tree root will point to a consistent snapshot of
the database.
I read that there could be readers reading three different versions
[of a document] at the same time [due to timing of reads vs writes].

Suppose reader (i) is reading a document at revision (r), an update
then changes the document from r to s, and after 's' has been written
the database is compacted. What would reader (i) see? Would revision
(r) be pulled out from under reader (i), or would it remain until the
reader no longer refers to it?

Bob

Re: Re: Re: Re: Newbie question: compaction and mvcc consistency?

Posted by Robert Newson <ro...@gmail.com>.
I'm sorry if I wasn't clear. I was listing all the reasons why the
patch has not been applied.

B.

On Wed, May 26, 2010 at 10:46 PM, Markus Jelsma
<ma...@buyways.nl> wrote:
> It's a good question. I wrote the patch because I saw the problem and
> it concerned me. Since it's difficult to induce the problem, and the
> patch is not subtle in its actions, it has not been committed to the
> project (this was, I think, my first couchdb patch).
>
> It remains theoretically possible, but given the difficulty of inducing
> it, it's not being addressed yet.
>
> But is it addressed in .10? If so, how?
> Storing writes in RAM would violate the durability semantics of
> couchdb and would mean you would have to be more careful during
> compaction.
>
> Of course, a loss of power would not flush a RAM buffer to disk.
> Clients shouldn't need to know or care about compaction, which
> is just a system maintenance task.
>
>
> Obviously it must be transparent to clients; otherwise it would spoil the fun =)
> B.
>
> On Wed, May 26, 2010 at 10:19 PM, Markus Jelsma
> <ma...@buyways.nl> wrote:
>> How is it that you couldn't reproduce the scenario from .10 onwards? The patch you supplied for that JIRA ticket you mention in the other post doesn't seem to be incorporated in .10 at all. Are there other useful countermeasures in .10?
>>
>>
>>
>> Also, on the subject of your ticket and especially Adam's comment on it, would storing incoming writes in a RAM buffer during the wait help to allow writes during a compaction that can't cope with the write load?
>>
>> -----Original message-----
>> From: Robert Newson <ro...@gmail.com>
>> Sent: Wed 26-05-2010 22:56
>> To: user@couchdb.apache.org;
>> Subject: Re: Re: Newbie question: compaction and mvcc consistency?
>>
>> I succeeded in preventing compaction from completing back in the 0.9
>> days, but I've been unable to reproduce it from 0.10 onwards.
>> Compaction retries until it succeeds (or you hit the end of the
>> disk). I've not managed to make it retry more than five times before
>> it succeeds.
>>
>> B.
>>
>> On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <ma...@buyways.nl> wrote:
>>> On the subject of a compaction that cannot deal with the volume of writes, can that theory be put to the test (or has it been already)? Does anyone know of a setup that relates machine specifications to the number of writes per second?
>>>
>>>
>>> This is a theoretical obstacle that could use some factual numbers to help everyone avoid it in their specific setup. I would prefer not to run into such a situation in practice, especially if compaction is triggered by some process that monitors available disk space or some other condition.
>>> -----Original message-----
>>> From: Randall Leeds <ra...@gmail.com>
>>> Sent: Wed 26-05-2010 22:36
>>> To: user@couchdb.apache.org;
>>> Subject: Re: Newbie question: compaction and mvcc consistency?
>>>
>>> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>>>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>>>> The switch to the new, compacted database won't happen so long as
>>>>> there are references to the old one. (r) will not disappear until (i)
>>>>> is done with it.
>>>>
>>>> Curious, you said "switch to the new [database]". Does this imply that
>>>> compaction works by creating a new database file adjacent to the old
>>>> one?
>>>
>>> Yes.
>>>
>>>>
>>>> If this is what you are suggesting, I have another question... I also
>>>> read that the compaction process may never catch up with the writes if
>>>> they never let up. So along this specific train of thought, does Couch
>>>> perform compaction by walking through the database in a forward-only
>>>> manner?
>>>
>>> If I understand correctly the answer is 'yes'. Meanwhile, new writes
>>> still hit the old database file as the compactor walks the old tree.
>>> If there are new changes when the compactor finishes it will walk the
>>> new changes starting from the root. Typically this process quickly
>>> gets faster and faster on busy databases until it catches up
>>> completely and the switch can be made.
>>>
>>> That said, you can construct an environment where compaction will
>>> never finish, but I haven't seen reports of it happening in the wild.
>>>
>>
>
>
>

RE: Re: Re: Re: Newbie question: compaction and mvcc consistency?

Posted by Markus Jelsma <ma...@buyways.nl>.
It's a good question. I wrote the patch because I saw the problem and
it concerned me. Since it's difficult to induce the problem, and the
patch is not subtle in its actions, it has not been committed to the
project (this was, I think, my first couchdb patch).

It remains theoretically possible, but given the difficulty of inducing
it, it's not being addressed yet.

But is it addressed in .10? If so, how?
Storing writes in RAM would violate the durability semantics of
couchdb and would mean you would have to be more careful during
compaction. 

Of course, a loss of power would not flush a RAM buffer to disk.
Clients shouldn't need to know or care about compaction, which
is just a system maintenance task.


Obviously it must be transparent to clients; otherwise it would spoil the fun =)
B.

On Wed, May 26, 2010 at 10:19 PM, Markus Jelsma
<ma...@buyways.nl> wrote:
> How is it that you couldn't reproduce the scenario from .10 onwards? The patch you supplied for that JIRA ticket you mention in the other post doesn't seem to be incorporated in .10 at all. Are there other useful countermeasures in .10?
>
>
>
> Also, on the subject of your ticket and especially Adam's comment on it, would storing incoming writes in a RAM buffer during the wait help to allow writes during a compaction that can't cope with the write load?
>
> -----Original message-----
> From: Robert Newson <ro...@gmail.com>
> Sent: Wed 26-05-2010 22:56
> To: user@couchdb.apache.org;
> Subject: Re: Re: Newbie question: compaction and mvcc consistency?
>
> I succeeded in preventing compaction from completing back in the 0.9
> days, but I've been unable to reproduce it from 0.10 onwards.
> Compaction retries until it succeeds (or you hit the end of the
> disk). I've not managed to make it retry more than five times before
> it succeeds.
>
> B.
>
> On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <ma...@buyways.nl> wrote:
>> On the subject of a compaction that cannot deal with the volume of writes, can that theory be put to the test (or has it been already)? Does anyone know of a setup that relates machine specifications to the number of writes per second?
>>
>>
>> This is a theoretical obstacle that could use some factual numbers to help everyone avoid it in their specific setup. I would prefer not to run into such a situation in practice, especially if compaction is triggered by some process that monitors available disk space or some other condition.
>> -----Original message-----
>> From: Randall Leeds <ra...@gmail.com>
>> Sent: Wed 26-05-2010 22:36
>> To: user@couchdb.apache.org;
>> Subject: Re: Newbie question: compaction and mvcc consistency?
>>
>> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>>> The switch to the new, compacted database won't happen so long as
>>>> there are references to the old one. (r) will not disappear until (i)
>>>> is done with it.
>>>
>>> Curious, you said "switch to the new [database]". Does this imply that
>>> compaction works by creating a new database file adjacent to the old
>>> one?
>>
>> Yes.
>>
>>>
>>> If this is what you are suggesting, I have another question... I also
>>> read that the compaction process may never catch up with the writes if
>>> they never let up. So along this specific train of thought, does Couch
>>> perform compaction by walking through the database in a forward-only
>>> manner?
>>
>> If I understand correctly the answer is 'yes'. Meanwhile, new writes
>> still hit the old database file as the compactor walks the old tree.
>> If there are new changes when the compactor finishes it will walk the
>> new changes starting from the root. Typically this process quickly
>> gets faster and faster on busy databases until it catches up
>> completely and the switch can be made.
>>
>> That said, you can construct an environment where compaction will
>> never finish, but I haven't seen reports of it happening in the wild.
>>
>

 

Re: Re: Re: Newbie question: compaction and mvcc consistency?

Posted by Robert Newson <ro...@gmail.com>.
It's a good question. I wrote the patch because I saw the problem and
it concerned me. Since it's difficult to induce the problem, and the
patch is not subtle in its actions, it has not been committed to the
project (this was, I think, my first couchdb patch).

It remains theoretically possible, but given the difficulty of inducing
it, it's not being addressed yet.

Storing writes in RAM would violate the durability semantics of
couchdb and would mean you would have to be more careful during
compaction. Clients shouldn't need to know or care about compaction,
which is just a system maintenance task.

B.

On Wed, May 26, 2010 at 10:19 PM, Markus Jelsma
<ma...@buyways.nl> wrote:
> How is it that you couldn't reproduce the scenario from .10 onwards? The patch you supplied for that JIRA ticket you mention in the other post doesn't seem to be incorporated in .10 at all. Are there other useful countermeasures in .10?
>
>
>
> Also, on the subject of your ticket and especially Adam's comment on it, would storing incoming writes in a RAM buffer during the wait help to allow writes during a compaction that can't cope with the write load?
>
> -----Original message-----
> From: Robert Newson <ro...@gmail.com>
> Sent: Wed 26-05-2010 22:56
> To: user@couchdb.apache.org;
> Subject: Re: Re: Newbie question: compaction and mvcc consistency?
>
> I succeeded in preventing compaction from completing back in the 0.9
> days, but I've been unable to reproduce it from 0.10 onwards.
> Compaction retries until it succeeds (or you hit the end of the
> disk). I've not managed to make it retry more than five times before
> it succeeds.
>
> B.
>
> On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <ma...@buyways.nl> wrote:
>> On the subject of a compaction that cannot deal with the volume of writes, can that theory be put to the test (or has it been already)? Does anyone know of a setup that relates machine specifications to the number of writes per second?
>>
>>
>> This is a theoretical obstacle that could use some factual numbers to help everyone avoid it in their specific setup. I would prefer not to run into such a situation in practice, especially if compaction is triggered by some process that monitors available disk space or some other condition.
>> -----Original message-----
>> From: Randall Leeds <ra...@gmail.com>
>> Sent: Wed 26-05-2010 22:36
>> To: user@couchdb.apache.org;
>> Subject: Re: Newbie question: compaction and mvcc consistency?
>>
>> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>>> The switch to the new, compacted database won't happen so long as
>>>> there are references to the old one. (r) will not disappear until (i)
>>>> is done with it.
>>>
>>> Curious, you said "switch to the new [database]". Does this imply that
>>> compaction works by creating a new database file adjacent to the old
>>> one?
>>
>> Yes.
>>
>>>
>>> If this is what you are suggesting, I have another question... I also
>>> read that the compaction process may never catch up with the writes if
>>> they never let up. So along this specific train of thought, does Couch
>>> perform compaction by walking through the database in a forward-only
>>> manner?
>>
>> If I understand correctly the answer is 'yes'. Meanwhile, new writes
>> still hit the old database file as the compactor walks the old tree.
>> If there are new changes when the compactor finishes it will walk the
>> new changes starting from the root. Typically this process quickly
>> gets faster and faster on busy databases until it catches up
>> completely and the switch can be made.
>>
>> That said, you can construct an environment where compaction will
>> never finish, but I haven't seen reports of it happening in the wild.
>>
>

RE: Re: Re: Newbie question: compaction and mvcc consistency?

Posted by Markus Jelsma <ma...@buyways.nl>.
How is it that you couldn't reproduce the scenario from .10 onwards? The patch you supplied for that JIRA ticket you mention in the other post doesn't seem to be incorporated in .10 at all. Are there other useful countermeasures in .10?

 

Also, on the subject of your ticket and especially Adam's comment on it, would storing incoming writes in a RAM buffer during the wait help to allow writes during a compaction that can't cope with the write load?
 
-----Original message-----
From: Robert Newson <ro...@gmail.com>
Sent: Wed 26-05-2010 22:56
To: user@couchdb.apache.org; 
Subject: Re: Re: Newbie question: compaction and mvcc consistency?

I succeeded in preventing compaction from completing back in the 0.9
days, but I've been unable to reproduce it from 0.10 onwards.
Compaction retries until it succeeds (or you hit the end of the
disk). I've not managed to make it retry more than five times before
it succeeds.

B.

On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <ma...@buyways.nl> wrote:
> On the subject of a compaction that cannot deal with the volume of writes, can that theory be put to the test (or has it been already)? Does anyone know of a setup that relates machine specifications to the number of writes per second?
>
>
> This is a theoretical obstacle that could use some factual numbers to help everyone avoid it in their specific setup. I would prefer not to run into such a situation in practice, especially if compaction is triggered by some process that monitors available disk space or some other condition.
> -----Original message-----
> From: Randall Leeds <ra...@gmail.com>
> Sent: Wed 26-05-2010 22:36
> To: user@couchdb.apache.org;
> Subject: Re: Newbie question: compaction and mvcc consistency?
>
> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>> The switch to the new, compacted database won't happen so long as
>>> there are references to the old one. (r) will not disappear until (i)
>>> is done with it.
>>
>> Curious, you said "switch to the new [database]". Does this imply that
>> compaction works by creating a new database file adjacent to the old
>> one?
>
> Yes.
>
>>
>> If this is what you are suggesting, I have another question... I also
>> read that the compaction process may never catch up with the writes if
>> they never let up. So along this specific train of thought, does Couch
>> perform compaction by walking through the database in a forward-only
>> manner?
>
> If I understand correctly the answer is 'yes'. Meanwhile, new writes
> still hit the old database file as the compactor walks the old tree.
> If there are new changes when the compactor finishes it will walk the
> new changes starting from the root. Typically this process quickly
> gets faster and faster on busy databases until it catches up
> completely and the switch can be made.
>
> That said, you can construct an environment where compaction will
> never finish, but I haven't seen reports of it happening in the wild.
>

Re: Re: Newbie question: compaction and mvcc consistency?

Posted by Robert Newson <ro...@gmail.com>.
I succeeded in preventing compaction from completing back in the 0.9
days, but I've been unable to reproduce it from 0.10 onwards.
Compaction retries until it succeeds (or you hit the end of the
disk). I've not managed to make it retry more than five times before
it succeeds.
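
If you want to watch the retry behaviour yourself, the server exposes
running compactions through _active_tasks. A minimal sketch ($HOST is
a placeholder, and the task fields vary a little between releases):

# poll the task list to follow a running compaction
while true; do
  curl -s "$HOST/_active_tasks" | grep -i compact
  sleep 2
done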

B.

On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <ma...@buyways.nl> wrote:
> On the subject of a compaction that cannot deal with the volume of writes, can that theory be put to the test (or has it been already)? Does anyone know of a setup that relates machine specifications to the number of writes per second?
>
>
> This is a theoretical obstacle that could use some factual numbers to help everyone avoid it in their specific setup. I would prefer not to run into such a situation in practice, especially if compaction is triggered by some process that monitors available disk space or some other condition.
> -----Original message-----
> From: Randall Leeds <ra...@gmail.com>
> Sent: Wed 26-05-2010 22:36
> To: user@couchdb.apache.org;
> Subject: Re: Newbie question: compaction and mvcc consistency?
>
> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>> The switch to the new, compacted database won't happen so long as
>>> there are references to the old one. (r) will not disappear until (i)
>>> is done with it.
>>
>> Curious, you said "switch to the new [database]". Does this imply that
>> compaction works by creating a new database file adjacent to the old
>> one?
>
> Yes.
>
>>
>> If this is what you are suggesting, I have another question... I also
>> read that the compaction process may never catch up with the writes if
>> they never let up. So along this specific train of thought, does Couch
>> perform compaction by walking through the database in a forward-only
>> manner?
>
> If I understand correctly the answer is 'yes'. Meanwhile, new writes
> still hit the old database file as the compactor walks the old tree.
> If there are new changes when the compactor finishes it will walk the
> new changes starting from the root. Typically this process quickly
> gets faster and faster on busy databases until it catches up
> completely and the switch can be made.
>
> That said, you can construct an environment where compaction will
> never finish, but I haven't seen reports of it happening in the wild.
>

RE: Re: Newbie question: compaction and mvcc consistency?

Posted by Markus Jelsma <ma...@buyways.nl>.
On the subject of a compaction that cannot deal with the volume of writes, can that theory be put to the test (or has it been already)? Does anyone know of a setup that relates machine specifications to the number of writes per second?
 

This is a theoretical obstacle that could use some factual numbers to help everyone avoid it in their specific setup. I would prefer not to run into such a situation in practice, especially if compaction is triggered by some process that monitors available disk space or some other condition.
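
For illustration only, such a monitor could be as simple as the
sketch below (the mount point and threshold are hypothetical). It
also hints at the danger: compaction writes a whole new file next to
the old one, so it temporarily needs more disk, not less.

#!/bin/sh
# naive disk-space trigger (hypothetical data path and threshold)
USED=$(df -P /var/lib/couchdb | awk 'NR==2 { sub(/%/, ""); print $5 }')
if [ "$USED" -gt 80 ]; then
  # the compactor needs room for the new .compact file, so firing
  # this only when the disk is nearly full can make things worse
  curl -s -X POST "$HOST/db/_compact" -H "Content-Type: application/json"
fi
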
-----Original message-----
From: Randall Leeds <ra...@gmail.com>
Sent: Wed 26-05-2010 22:36
To: user@couchdb.apache.org; 
Subject: Re: Newbie question: compaction and mvcc consistency?

On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>> The switch to the new, compacted database won't happen so long as
>> there are references to the old one. (r) will not disappear until (i)
>> is done with it.
>
> Curious, you said "switch to the new [database]". Does this imply that
> compaction works by creating a new database file adjacent to the old
> one?

Yes.

>
> If this is what you are suggesting, I have another question... I also
> read that the compaction process may never catch up with the writes if
> they never let up. So along this specific train of thought, does Couch
> perform compaction by walking through the database in a forward-only
> manner?

If I understand correctly the answer is 'yes'. Meanwhile, new writes
still hit the old database file as the compactor walks the old tree.
If there are new changes when the compactor finishes it will walk the
new changes starting from the root. Typically this process quickly
gets faster and faster on busy databases until it catches up
completely and the switch can be made.

That said, you can construct an environment where compaction will
never finish, but I haven't seen reports of it happening in the wild.

Re: Newbie question: compaction and mvcc consistency?

Posted by Robert Buck <bu...@gmail.com>.
On Wed, May 26, 2010 at 4:35 PM, Randall Leeds <ra...@gmail.com> wrote:
> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>> The switch to the new, compacted database won't happen so long as
>>> there are references to the old one. (r) will not disappear until (i)
>>> is done with it.
>>
>> Curious, you said "switch to the new [database]". Does this imply that
>> compaction works by creating a new database file adjacent to the old
>> one?
>
> Yes.
>
>>
>> If this is what you are suggesting, I have another question... I also
>> read that the compaction process may never catch up with the writes if
>> they never let up. So along this specific train of thought, does Couch
>> perform compaction by walking through the database in a forward-only
>> manner?
>
> If I understand correctly the answer is 'yes'. Meanwhile, new writes
> still hit the old database file as the compactor walks the old tree.
> If there are new changes when the compactor finishes it will walk the
> new changes starting from the root. Typically this process quickly
> gets faster and faster on busy databases until it catches up
> completely and the switch can be made.
>
> That said, you can construct an environment where compaction will
> never finish, but I haven't seen reports of it happening in the wild.

Thank you. I am just trying to translate my understanding from years
of experience with ObjectStore to the simpler document-orientation in
Couch. Your feedback helps.

Re: Newbie question: compaction and mvcc consistency?

Posted by Robert Newson <ro...@gmail.com>.
The potential inability to complete compaction in write-saturated
environments is captured in
http://issues.apache.org/jira/browse/COUCHDB-487 with a patch.

I think kocolosk has recently written a patch that improves the way
data is read during compaction, which in turn reduces the likelihood
of never flipping over to the new file.

There's also the @dev thread on using a sequence of files (like JE)
instead of a single one, which would also mitigate the problem.

B.

On Wed, May 26, 2010 at 9:35 PM, Randall Leeds <ra...@gmail.com> wrote:
> On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>>> The switch to the new, compacted database won't happen so long as
>>> there are references to the old one. (r) will not disappear until (i)
>>> is done with it.
>>
>> Curious, you said "switch to the new [database]". Does this imply that
>> compaction works by creating a new database file adjacent to the old
>> one?
>
> Yes.
>
>>
>> If this is what you are suggesting, I have another question... I also
>> read that the compaction process may never catch up with the writes if
>> they never let up. So along this specific train of thought, does Couch
>> perform compaction by walking through the database in a forward-only
>> manner?
>
> If I understand correctly the answer is 'yes'. Meanwhile, new writes
> still hit the old database file as the compactor walks the old tree.
> If there are new changes when the compactor finishes it will walk the
> new changes starting from the root. Typically this process quickly
> gets faster and faster on busy databases until it catches up
> completely and the switch can be made.
>
> That said, you can construct an environment where compaction will
> never finish, but I haven't seen reports of it happening in the wild.
>

Re: Newbie question: compaction and mvcc consistency?

Posted by Randall Leeds <ra...@gmail.com>.
On Wed, May 26, 2010 at 13:29, Robert Buck <bu...@gmail.com> wrote:
> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
>> The switch to the new, compacted database won't happen so long as
>> there are references to the old one. (r) will not disappear until (i)
>> is done with it.
>
> Curious, you said "switch to the new [database]". Does this imply that
> compaction works by creating a new database file adjacent to the old
> one?

Yes.

>
> If this is what you are suggesting, I have another question... I also
> read that the compaction process may never catch up with the writes if
> they never let up. So along this specific train of thought, does Couch
> perform compaction by walking through the database in a forward-only
> manner?

If I understand correctly the answer is 'yes'. Meanwhile, new writes
still hit the old database file as the compactor walks the old tree.
If there are new changes when the compactor finishes it will walk the
new changes starting from the root. Typically this process quickly
gets faster and faster on busy databases until it catches up
completely and the switch can be made.

That said, you can construct an environment where compaction will
never finish, but I haven't seen reports of it happening in the wild.
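
You can watch the catch-up from the outside, since GET /db reports
both compact_running and update_seq. A rough sketch ($HOST and the db
name are placeholders; update_seq is an integer in these releases):

# start compaction, then keep writing while it runs; update_seq keeps
# advancing on the old file until compact_running flips to false
curl -s -X POST "$HOST/db/_compact" -H "Content-Type: application/json"
while curl -s "$HOST/db" | grep -q '"compact_running":true'; do
  curl -s -X POST "$HOST/db" -H "Content-Type: application/json" \
       -d '{"written":"during compaction"}' > /dev/null
  curl -s "$HOST/db" | grep -o '"update_seq":[0-9]*'
  sleep 1
done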

Re: Newbie question: compaction and mvcc consistency?

Posted by Robert Buck <bu...@gmail.com>.
On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <ra...@gmail.com> wrote:
> On Wed, May 26, 2010 at 11:37, Robert Buck <bu...@gmail.com> wrote:
>> Here is a newbie question for you that relates to MVCC, compaction, and versioning.
>>
>> I read that old versions of documents are cleaned up upon compaction.
>> I read that every b-tree root will point to a consistent snapshot of
>> the database.
>> I read that there could be readers reading three different versions
>> [of a document] at the same time [due to timing of reads vs writes].
>
> Yay!
>
>>
>> Suppose reader (i) is reading a document at revision (r), an update
>> then changes the document from r to s, and after 's' has been written
>> the database is compacted. What would reader (i) see? Would revision
>> (r) be pulled out from under reader (i), or would it remain until the
>> reader no longer refers to it?
>
> The switch to the new, compacted database won't happen so long as
> there are references to the old one. (r) will not disappear until (i)
> is done with it.

Curious, you said "switch to the new [database]". Does this imply that
compaction works by creating a new database file adjacent to the old
one?

If this is what you are suggesting, I have another question... I also
read that the compaction process may never catch up with the writes if
they never let up. So along this specific train of thought, does Couch
perform compaction by walking through the database in a forward-only
manner?

Thanks so much.

Re: Newbie question: compaction and mvcc consistency?

Posted by Jason Benesch <ja...@realestatetomato.com>.
I can't really speak to the compaction side, but I know you can check for
changes to the database using Change Notifications...

curl -X GET "$HOST/db/_changes?feed=continuous"

http://books.couchdb.org/relax/reference/change-notifications
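
A couple of variations on that are handy in practice (the since value
is just a placeholder): -N stops curl buffering the continuous feed,
since= resumes from a known sequence number, and include_docs=true
inlines each changed document.

curl -N "$HOST/db/_changes?feed=continuous&since=42&include_docs=true"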


> Suppose reader (i) is reading a document at revision (r), an update
> then changes the document from r to s, and after 's' has been written
> the database is compacted. What would reader (i) see?

You can always proactively attack the issue by dynamically updating the
document...


On Wed, May 26, 2010 at 12:00 PM, Randall Leeds <ra...@gmail.com> wrote:

> On Wed, May 26, 2010 at 11:37, Robert Buck <bu...@gmail.com>
> wrote:
> > Here is a newbie question for you that relates to MVCC, compaction,
> > and versioning.
> >
> > I read that old versions of documents are cleaned up upon compaction.
> > I read that every b-tree root will point to a consistent snapshot of
> > the database.
> > I read that there could be readers reading three different versions
> > [of a document] at the same time [due to timing of reads vs writes].
>
> Yay!
>
> >
> > Suppose reader (i) is reading a document at revision (r), an update
> > then changes the document from r to s, and after 's' has been
> > written the database is compacted. What would reader (i) see? Would
> > revision (r) be pulled out from under reader (i), or would it remain
> > until the reader no longer refers to it?
>
> The switch to the new, compacted database won't happen so long as
> there are references to the old one. (r) will not disappear until (i)
> is done with it.
>



-- 
Jason Benesch

We just launched www.TomatoUniversity.com - Join for free!
Technology Training for the Real Estate Industry.

Real Estate Tomato
Co-owner
www.realestatetomato.com
(619) 770-1950
jason@realestatetomato.com

ListingPress
Owner, Founder
www.listingpress.com
(619) 955-7465
jason@listingpress.com

Re: Newbie question: compaction and mvcc consistency?

Posted by Randall Leeds <ra...@gmail.com>.
On Wed, May 26, 2010 at 11:37, Robert Buck <bu...@gmail.com> wrote:
> Here is a newbie question for you that relates to MVCC, compaction, and versioning.
>
> I read that old versions of documents are cleaned up upon compaction.
> I read that every b-tree root will point to a consistent snapshot of
> the database.
> I read that there could be readers reading three different versions
> [of a document] at the same time [due to timing of reads vs writes].

Yay!

>
> Suppose reader (i) is reading a document at revision (r), an update
> then changes the document from r to s, and after 's' has been written
> the database is compacted. What would reader (i) see? Would revision
> (r) be pulled out from under reader (i), or would it remain until the
> reader no longer refers to it?

The switch to the new, compacted database won't happen so long as
there are references to the old one. (r) will not disappear until (i)
is done with it.
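
For concreteness, the r->s scenario looks roughly like this over HTTP
(a sketch; $HOST, the db/doc names, and the rev strings are
placeholders):

# create the doc; the response carries its first rev, call it (r)
curl -X PUT "$HOST/db/doc" -d '{"state":"r"}'
# update it, supplying rev (r); the doc now has rev (s)
curl -X PUT "$HOST/db/doc" -d '{"_rev":"1-abc","state":"s"}'
# compact; a completed compaction drops old revision bodies
curl -X POST "$HOST/db/_compact" -H "Content-Type: application/json"
# a NEW request for rev (r) will now report it missing, but a reader
# that already held the old file keeps its snapshot until it finishes
curl "$HOST/db/doc?rev=1-abc"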