Posted to dev@couchdb.apache.org by "Mike Leddy (JIRA)" <ji...@apache.org> on 2011/01/11 16:34:45 UTC
[jira] Created: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Compacting a database does not preserve the purge_seq
-----------------------------------------------------
Key: COUCHDB-1021
URL: https://issues.apache.org/jira/browse/COUCHDB-1021
Project: CouchDB
Issue Type: Bug
Components: Database Core
Affects Versions: 1.0.1
Environment: All platforms
Reporter: Mike Leddy
Priority: Minor
On compacting a database, the purge_seq becomes zero. As a result, subsequently accessing any view causes it to be rebuilt from scratch. I resolved the issue for myself by patching start_copy_compact, but this only works if you can guarantee that no purging is done during compaction:
--- couchdb-1.0.1/src/couchdb/couch_db_updater.erl
+++ couchdb-1.0.1.new/src/couchdb/couch_db_updater.erl
@@ -857,7 +857,7 @@
 
     commit_data(NewDb4#db{update_seq=Db#db.update_seq}).
 
-start_copy_compact(#db{name=Name,filepath=Filepath}=Db) ->
+start_copy_compact(#db{name=Name,filepath=Filepath,header=#db_header{purge_seq=PurgeSeq}}=Db) ->
     CompactFile = Filepath ++ ".compact",
     ?LOG_DEBUG("Compaction process spawned for db \"~s\"", [Name]),
     case couch_file:open(CompactFile) of
@@ -869,7 +869,7 @@
         couch_task_status:add_task(<<"Database Compaction">>, Name, <<"Starting">>),
         {ok, Fd} = couch_file:open(CompactFile, [create]),
         Retry = false,
-        ok = couch_file:write_header(Fd, Header=#db_header{})
+        ok = couch_file:write_header(Fd, Header=#db_header{purge_seq=PurgeSeq})
     end,
     NewDb = init_db(Name, CompactFile, Fd, Header),
     unlink(Fd),
I am sure that there must be a better way of doing this.....
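The failure mode described above can be sketched with a toy model (Python, purely illustrative; the class names and the exact staleness check are assumptions for the sketch, not CouchDB's actual view-updater code):

```python
# Toy model of why a zeroed purge_seq forces a full view rebuild.
# All names here are hypothetical stand-ins, not CouchDB internals.

class Db:
    def __init__(self, purge_seq=0):
        self.purge_seq = purge_seq

class ViewIndex:
    def __init__(self, purge_seq=0):
        self.purge_seq = purge_seq

def needs_full_rebuild(db, view):
    # Simplified rule: the view updater trusts its index only when the
    # index's recorded purge_seq matches the database's. If compaction
    # resets the db's purge_seq to 0 while the index remembers N > 0,
    # the index looks inconsistent and is discarded.
    return view.purge_seq != db.purge_seq

db = Db(purge_seq=3)            # three purges have happened
view = ViewIndex(purge_seq=3)   # index is fully caught up
assert not needs_full_rebuild(db, view)

db.purge_seq = 0                # buggy compaction resets the counter
assert needs_full_rebuild(db, view)  # index is rebuilt from scratch
```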
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980536#action_12980536 ]
Mike Leddy commented on COUCHDB-1021:
-------------------------------------
Ok, no problem. Strangely, the spacing got mangled when I pasted it here. I am attaching the patch against trunk.
[jira] Commented: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980380#action_12980380 ]
Mike Leddy commented on COUCHDB-1021:
-------------------------------------
Thanks once again, Adam. Hopefully I'm getting closer... and learning along the way:
mike@mike:/usr/src/couchdb$ cat debian/patches/keep_purge_state_on_compaction.patch
--- couchdb-1.0.1/src/couchdb/couch_db_updater.erl	2011-01-11 21:45:32.000000000 +0000
+++ couchdb-1.0.1.new/src/couchdb/couch_db_updater.erl	2011-01-11 22:00:07.000000000 +0000
@@ -847,7 +847,7 @@
 
     commit_data(NewDb4#db{update_seq=Db#db.update_seq}).
 
-start_copy_compact(#db{name=Name,filepath=Filepath}=Db) ->
+start_copy_compact(#db{name=Name,filepath=Filepath,header=#db_header{purge_seq=PurgeSeq}}=Db) ->
     CompactFile = Filepath ++ ".compact",
     ?LOG_DEBUG("Compaction process spawned for db \"~s\"", [Name]),
     case couch_file:open(CompactFile) of
@@ -866,9 +866,18 @@
         Retry = false,
         ok = couch_file:write_header(Fd, Header=#db_header{})
     end,
+
     NewDb = init_db(Name, CompactFile, Fd, Header),
+    NewDb2 = if PurgeSeq > 0 ->
+        {ok, PurgedIdsRevs} = couch_db:get_last_purged(Db),
+        {ok, Pointer} = couch_file:append_term(Fd, PurgedIdsRevs),
+        NewDb#db{header=Header#db_header{purge_seq=PurgeSeq, purged_docs=Pointer}};
+    true ->
+        NewDb
+    end,
     unlink(Fd),
-    NewDb2 = copy_compact(Db, NewDb, Retry),
-    close_db(NewDb2),
+
+    NewDb3 = copy_compact(Db, NewDb2, Retry),
+    close_db(NewDb3),
     gen_server:cast(Db#db.update_pid, {compact_done, CompactFile}).
[jira] Updated: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mike Leddy updated COUCHDB-1021:
--------------------------------
Attachment: (was: keep_purge_state_on_compaction.patch)
[jira] Commented: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Adam Kocoloski (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980152#action_12980152 ]
Adam Kocoloski commented on COUCHDB-1021:
-----------------------------------------
Hi Mike, thanks for the bug report and the patch. It looks like that patch is not quite right, though. If a user purges some docs, compacts the DB, then queries an out-of-date view, the #db_header.purged_docs pointer will be nil and the view updater will probably just crash. I think you'd need to read the purged_docs term from the old file, write it to the new file, and update #db_header.purged_docs for the compacted DB with the new pointer.
I think the decision to do this in start_copy_compact is just fine. If a user just purged a huge block of documents it'll be nice to copy that block to the compacted file outside of the db_updater server loop. Purging during compaction can never happen, so no worries there.
Do you have time to take a crack at updating the patch?
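The fix outlined here can be sketched as a hedged Python model (illustrative only; `append_term`, the dict-based headers, and the list-backed "files" are stand-ins for couch_file terms, not CouchDB's API):

```python
# Sketch of the suggested fix: during compaction, read the purged-docs
# term out of the old file, append a copy to the new file, and point
# the new header at the copy. A pointer into the old file would be
# meaningless in the new one, which is the crash the naive patch risks.

def append_term(storage, term):
    """Append a term to a list-backed 'file'; return its pointer."""
    storage.append(term)
    return len(storage) - 1

def start_copy_compact(old_header, old_storage, new_storage):
    new_header = {"purge_seq": 0, "purged_docs": None}
    if old_header["purge_seq"] > 0:
        # Copy the term itself, not just the pointer.
        purged = old_storage[old_header["purged_docs"]]
        new_header["purged_docs"] = append_term(new_storage, purged)
        new_header["purge_seq"] = old_header["purge_seq"]
    return new_header

old_storage = []
ptr = append_term(old_storage, [("docid", ["1-abc"])])
old_header = {"purge_seq": 1, "purged_docs": ptr}

new_storage = []
new_header = start_copy_compact(old_header, old_storage, new_storage)
assert new_header["purge_seq"] == 1
assert new_storage[new_header["purged_docs"]] == [("docid", ["1-abc"])]
```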
[jira] Resolved: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Adam Kocoloski (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Adam Kocoloski resolved COUCHDB-1021.
-------------------------------------
Resolution: Fixed
Fix Version/s: 1.1
1.0.2
Applied to trunk, 1.1.x and 1.0.x. Extended the purge.js test to confirm that purge_seq is preserved across compactions.
[jira] Commented: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Adam Kocoloski (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980508#action_12980508 ]
Adam Kocoloski commented on COUCHDB-1021:
-----------------------------------------
Hi Mike, that looks like a correct patch to me. I'm having trouble applying it cleanly, though. It appears to be against 1.0.1, but even with a big fuzz factor the second hunk fails for me. Can you provide a patch against trunk? Also, a couple of small coding style things: the bodies of the clauses in the if statement should be indented, and you should try to avoid lines longer than 80 characters when feasible.
Thanks for the help!
[jira] Updated: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mike Leddy updated COUCHDB-1021:
--------------------------------
Attachment: keep_purge_state_on_compaction.patch
Corrected layout
[jira] Commented: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Adam Kocoloski (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980347#action_12980347 ]
Adam Kocoloski commented on COUCHDB-1021:
-----------------------------------------
Hi Mike, one problem I see is that you created a new header to save the purge data rather than modifying Header, which might contain valuable information if the compaction is being retried.
Other than that, I think it looks pretty good. In my opinion you don't need to do a commit_data there, you can let the purge info get saved on the next commit. It's probably sufficient to just update Header with the purge info and make sure that the #db{} sent to copy_compact uses that updated #header{}.
[jira] Commented: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980317#action_12980317 ]
Mike Leddy commented on COUCHDB-1021:
-------------------------------------
Yes I do :-) ..... Thanks for the outline of what is necessary. I ended up with this patch:
--- couchdb-1.0.1/src/couchdb/couch_db_updater.erl	2011-01-11 15:08:15.000000000 -0300
+++ couchdb-1.0.1.new/src/couchdb/couch_db_updater.erl	2011-01-11 15:25:32.000000000 -0300
@@ -847,7 +847,7 @@
 
     commit_data(NewDb4#db{update_seq=Db#db.update_seq}).
 
-start_copy_compact(#db{name=Name,filepath=Filepath}=Db) ->
+start_copy_compact(#db{name=Name,filepath=Filepath,header=#db_header{purge_seq=PurgeSeq}}=Db) ->
     CompactFile = Filepath ++ ".compact",
     ?LOG_DEBUG("Compaction process spawned for db \"~s\"", [Name]),
     case couch_file:open(CompactFile) of
@@ -866,9 +866,19 @@
         Retry = false,
         ok = couch_file:write_header(Fd, Header=#db_header{})
     end,
+
     NewDb = init_db(Name, CompactFile, Fd, Header),
-    unlink(Fd),
-    NewDb2 = copy_compact(Db, NewDb, Retry),
-    close_db(NewDb2),
+    NewDb2 = if PurgeSeq > 0 ->
+        {ok, PurgedIdsRevs} = couch_db:get_last_purged(Db),
+        {ok, Pointer} = couch_file:append_term(Fd, PurgedIdsRevs),
+        unlink(Fd),
+        commit_data(NewDb#db{header=#db_header{purge_seq=PurgeSeq, purged_docs=Pointer}});
+    true ->
+        unlink(Fd),
+        NewDb
+    end,
+
+    NewDb3 = copy_compact(Db, NewDb2, Retry),
+    close_db(NewDb3),
     gen_server:cast(Db#db.update_pid, {compact_done, CompactFile}).
Maybe I'm being paranoid duplicating the unlink, but I wasn't sure whether it needed to be done before the commit_data. Better safe than sorry...
[jira] Updated: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mike Leddy updated COUCHDB-1021:
--------------------------------
Attachment: keep_purge_state_on_compaction.patch
Patch against trunk
[jira] Updated: (COUCHDB-1021) Compacting a database does not preserve the purge_seq
Posted by "Mike Leddy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/COUCHDB-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mike Leddy updated COUCHDB-1021:
--------------------------------
Comment: was deleted
(was: Patch against trunk)