Posted to user@couchdb.apache.org by Rudi Benkovič <ru...@whiletrue.com> on 2012/09/22 11:56:04 UTC

recovering data from an unfinished compaction db

Hi,

I have a .couch file where compaction hasn't finished its job and
we've lost the pre-compaction production DB file (an unfortunate
sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
in disk format version 6, with snappy compression.

I've tried using recover-couchdb
(https://github.com/jhs/recover-couchdb), but it crashes with the
message that disk format 6 isn't supported. I've also tried dropping
in 1.2.0 sources, but that also didn't work.

Anyway, any hints on how to recover the data? 180GB file, lots of attachments.

Many thanks!

--Rudi

Re: recovering data from an unfinished compaction db

Posted by Rudi Benkovic <ru...@whiletrue.com>.
Hello Paul,

Monday, September 24, 2012, 6:45:46 PM, you wrote:

> The compactor is written to flush batches of docs every 5K bytes and
> then write a header out every 5M bytes (assuming default batch sizes).
> It's important to remember that this is judged against #doc_info{} records,
> which don't contain a full doc body. For documents with relatively few
> revisions we're looking at (rough guess) ~100 bytes per record, which
> is going to give us 50K docs per header commit. Seeing as the OP
> mentions lots of attachments, this could give us a relatively large gap
> in the file to search for a header.

FWIW, indeed, the last valid header in this compaction DB was written
around 20GB from the end of the file. The easiest way to get CouchDB
up and running at that revision:

Open up the .couch file in a hex editor and search for "db_header",
starting from the end of the file and working backwards. The header
looks like this, in hex:

01 00 00 00 .... .db_header .... 00 00 03 E8

Truncate the file in place (Linux: truncate -c --size=<bytes>
file.couch) at the last full header, where the new size in bytes is the
position of that final E8 byte + 1. Feed it to CouchDB and it should work.
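
For anyone who would rather script this than poke around in a hex
editor, here is a rough Python sketch of the same recipe. It only knows
the byte pattern described above (the "db_header" marker followed at
some point by 00 00 03 E8), so treat it as a starting point rather than
a format-aware tool; the filename used below is arbitrary.

#!/usr/bin/env python3
# Rough sketch of the manual recipe above: scan a .couch file backwards
# for the last "db_header" marker, look for the 00 00 03 E8 bytes that
# follow it, and print the size to hand to `truncate -c --size=...`.
# The byte layout assumed here is only what this email describes; it is
# not a real parser for the CouchDB file format.
import sys

MARKER = b"db_header"
TRAILER = bytes.fromhex("000003E8")
WINDOW = 64 * 1024 * 1024  # scan the file in 64 MiB chunks, newest first

def find_truncate_size(path):
    with open(path, "rb") as f:
        f.seek(0, 2)
        pos = f.tell()
        while pos > 0:
            start = max(0, pos - WINDOW)
            f.seek(start)
            # read a little past `pos` so a header straddling a chunk
            # boundary is still seen in one piece
            chunk = f.read(pos - start + 4096)
            idx = chunk.rfind(MARKER)
            if idx != -1:
                end = chunk.find(TRAILER, idx)
                if end != -1:
                    # "byte size: position of E8 + 1"
                    return start + end + len(TRAILER)
            pos = start
    return None

if __name__ == "__main__":
    size = find_truncate_size(sys.argv[1])
    if size is None:
        sys.exit("no db_header marker found")
    print("truncate -c --size=%d %s" % (size, sys.argv[1]))

Running it as "python find_last_header.py file.couch" only prints the
truncate command, so the offset can be sanity-checked before anything
on disk is modified.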

This still leaves me with 20GB of data to resurrect from the dead.
Recover-couchdb hacking ahead. :)

-- 
Best regards,
 Rudi                            mailto:rudib@whiletrue.com


Re: recovering data from an unfinished compaction db

Posted by Paul Davis <pa...@gmail.com>.
The compactor is written to flush batches of docs every 5K bytes and
then write a header out every 5M bytes (assuming default batch sizes).
It's important to remember that this is judged against #doc_info{} records,
which don't contain a full doc body. For documents with relatively few
revisions we're looking at (rough guess) ~100 bytes per record, which
is going to give us 50K docs per header commit. Seeing as the OP
mentions lots of attachments, this could give us a relatively large gap
in the file to search for a header.
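
To put rough numbers on that gap, a quick back-of-envelope sketch: the
5M-byte header interval and ~100 bytes per #doc_info{} record are the
estimates above, while the average per-document payload is a made-up
figure purely for illustration.

# Back-of-envelope estimate; only the per-doc payload size is invented
# here, to show how large the header-free span can get with attachments.
header_interval = 5 * 1000 * 1000   # header written every ~5M bytes of #doc_info{} data
doc_info_size = 100                 # rough bytes per #doc_info{} record
docs_per_header = header_interval // doc_info_size   # ~50,000 docs

assumed_payload_per_doc = 400 * 1024   # hypothetical avg doc body + attachments, bytes
gap = docs_per_header * assumed_payload_per_doc
print("%d docs, ~%.1f GiB between headers" % (docs_per_header, gap / 2**30))
# -> 50000 docs, ~19.1 GiB between headers, i.e. the same order of
#    magnitude as the ~20GB gap Rudi reports elsewhere in the thread.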

On Mon, Sep 24, 2012 at 11:17 AM, Tim Tisdall <ti...@gmail.com> wrote:
> Since this is the result of a compaction, shouldn't the header be at
> the beginning of the file?  (just testing my knowledge on how all this
> works...)
>
> -Tim
>
> On Mon, Sep 24, 2012 at 12:09 PM, Robert Newson <rn...@apache.org> wrote:
>> That does imply that the last valid header is a long way back up the
>> file, though.
>>
>> On 24 September 2012 17:00, Paul Davis <pa...@gmail.com> wrote:
>>> I'd ignore the snappy error for now. There's no way this thing ran for
>>> an hour and then suddenly hit an error in that code. If this is like a
>>> bug I've seen before the reason that this runs out of RAM is due to
>>> the code that's searching for a header not releasing binary ref counts
>>> as it should be.
>>>
>>> The quickest way to fix this would probably be to go back and update
>>> recover-couchdb to recognize the new disk format. Although that gets
>>> harder now that snappy compression is involved.
>>>
>>> On Mon, Sep 24, 2012 at 10:32 AM, Dave Cottlehuber <dc...@jsonified.com> wrote:
>>>> On 24 September 2012 15:02, Robert Newson <rn...@apache.org> wrote:
>>>>> {badmatch,{error,snappy_nif_not_loaded} makes me wonder if this 1.2
>>>>> installation is even right.
>>>>>
>>>>> Can someone enlighten me? Is it possible to get this error spuriously?
>>>>
>>>> No. I'd be keen to see a bit of logfiles to understand what's not working.
>>>>
>>>>> Does running out of RAM cause erlang to unload NIF's?
>>>>
>>>> I don't think so on Windows.
>>>>
>>>> There's an R15B01 based build here:
>>>> https://www.dropbox.com/sh/jeifcxpbtpo78ak/GG9fjWOyDt/Snapshots/20120524
>>>> that has a fix for a more recent version of Windows server than I have
>>>> to address one NIF loading error, although there are a number of
>>>> possible causes.
>>>>
>>>> @Rudi can you give this a go & report back?
>>>>
>>>> A+
>>>> Dave

Re: recovering data from an unfinished compaction db

Posted by Tim Tisdall <ti...@gmail.com>.
Since this is the result of a compaction, shouldn't the header be at
the beginning of the file?  (just testing my knowledge on how all this
works...)

-Tim

On Mon, Sep 24, 2012 at 12:09 PM, Robert Newson <rn...@apache.org> wrote:
> That does imply that the last valid header is a long way back up the
> file, though.
>
> On 24 September 2012 17:00, Paul Davis <pa...@gmail.com> wrote:
>> I'd ignore the snappy error for now. There's no way this thing ran for
>> an hour and then suddenly hit an error in that code. If this is like a
>> bug I've seen before the reason that this runs out of RAM is due to
>> the code that's searching for a header not releasing binary ref counts
>> as it should be.
>>
>> The quickest way to fix this would probably be to go back and update
>> recover-couchdb to recognize the new disk format. Although that gets
>> harder now that snappy compression is involved.
>>
>> On Mon, Sep 24, 2012 at 10:32 AM, Dave Cottlehuber <dc...@jsonified.com> wrote:
>>> On 24 September 2012 15:02, Robert Newson <rn...@apache.org> wrote:
>>>> {badmatch,{error,snappy_nif_not_loaded} makes me wonder if this 1.2
>>>> installation is even right.
>>>>
>>>> Can someone enlighten me? Is it possible to get this error spuriously?
>>>
>>> No. I'd be keen to see a bit of logfiles to understand what's not working.
>>>
>>>> Does running out of RAM cause erlang to unload NIF's?
>>>
>>> I don't think so on Windows.
>>>
>>> There's an R15B01 based build here:
>>> https://www.dropbox.com/sh/jeifcxpbtpo78ak/GG9fjWOyDt/Snapshots/20120524
>>> that has a fix for a more recent version of Windows server than I have
>>> to address one NIF loading error, although there are a number of
>>> possible causes.
>>>
>>> @Rudi can you give this a go & report back?
>>>
>>> A+
>>> Dave

Re: recovering data from an unfinished compaction db

Posted by Robert Newson <rn...@apache.org>.
That does imply that the last valid header is a long way back up the
file, though.

On 24 September 2012 17:00, Paul Davis <pa...@gmail.com> wrote:
> I'd ignore the snappy error for now. There's no way this thing ran for
> an hour and then suddenly hit an error in that code. If this is like a
> bug I've seen before the reason that this runs out of RAM is due to
> the code that's searching for a header not releasing binary ref counts
> as it should be.
>
> The quickest way to fix this would probably be to go back and update
> recover-couchdb to recognize the new disk format. Although that gets
> harder now that snappy compression is involved.
>
> On Mon, Sep 24, 2012 at 10:32 AM, Dave Cottlehuber <dc...@jsonified.com> wrote:
>> On 24 September 2012 15:02, Robert Newson <rn...@apache.org> wrote:
>>> {badmatch,{error,snappy_nif_not_loaded} makes me wonder if this 1.2
>>> installation is even right.
>>>
>>> Can someone enlighten me? Is it possible to get this error spuriously?
>>
>> No. I'd be keen to see a bit of logfiles to understand what's not working.
>>
>>> Does running out of RAM cause erlang to unload NIF's?
>>
>> I don't think so on Windows.
>>
>> There's an R15B01 based build here:
>> https://www.dropbox.com/sh/jeifcxpbtpo78ak/GG9fjWOyDt/Snapshots/20120524
>> that has a fix for a more recent version of Windows server than I have
>> to address one NIF loading error, although there are a number of
>> possible causes.
>>
>> @Rudi can you give this a go & report back?
>>
>> A+
>> Dave

Re: recovering data from an unfinished compaction db

Posted by Robert Newson <rn...@apache.org>.
s/main/many/

On 24 September 2012 18:02, Robert Newson <rn...@apache.org> wrote:
> re: Tim, no, the database headers are written at the end of the file
> and a database will therefore contain main database headers over time
> (the compactor will not preserve old headers, though). This also
> implies (correctly) that truncating a couchdb .couch file will give
> you the state of the database at some point in the past.
>
> It used to be true that we overwrote the first 8k of the file with 2
> copies of the 4k header, but that's not been the case for a few years
> now.
>
> B.
>
> On 24 September 2012 17:47, Paul Davis <pa...@gmail.com> wrote:
>> On Mon, Sep 24, 2012 at 11:42 AM, Rudi Benkovič <ru...@whiletrue.com> wrote:
>>> On Mon, Sep 24, 2012 at 6:00 PM, Paul Davis <pa...@gmail.com> wrote:
>>>> The quickest way to fix this would probably be to go back and update
>>>> recover-couchdb to recognize the new disk format. Although that gets
>>>> harder now that snappy compression is involved.
>>>
>>> I've tried upgrading recover-couchdb to 1.2.0 couch codebase, but my
>>> total lack of Erlang experience and CouchDB's internal really isn't
>>> helping. :) I've contacted the maintainer of that project, hopefully
>>> it isn't that big of change. BTW, if anyone else wants to do that, I'm
>>> happy to sponsor the work.
>>>
>>> How hard would it be to just grep the compacted DB, extract data
>>> around kv_node headers and decompress Snappy data with an external,
>>> non-CouchDB-Erlang program? I'm willing to write the thing in C#, just
>>> some basic pointers to the DB structure and what data gets compressed
>>> and where the Document IDs and attachments data gets stored.
>>>
>>> Thanks.
>>>
>>> --Rudi
>>
>> I'm not familiar enough with the project to comment. I wouldn't think
>> it'd be that hard, but its possible something changed enough to
>> increase the difficulty. As to reading the format from C# or some
>> other language its not something I would be all that interested in
>> trying as the bang/buck ratio isn't all that favorable.

Re: recovering data from an unfinished compaction db

Posted by Robert Newson <rn...@apache.org>.
re: Tim, no, the database headers are written at the end of the file
and a database will therefore contain main database headers over time
(the compactor will not preserve old headers, though). This also
implies (correctly) that truncating a couchdb .couch file will give
you the state of the database at some point in the past.

It used to be true that we overwrote the first 8k of the file with 2
copies of the 4k header, but that's not been the case for a few years
now.

B.

On 24 September 2012 17:47, Paul Davis <pa...@gmail.com> wrote:
> On Mon, Sep 24, 2012 at 11:42 AM, Rudi Benkovič <ru...@whiletrue.com> wrote:
>> On Mon, Sep 24, 2012 at 6:00 PM, Paul Davis <pa...@gmail.com> wrote:
>>> The quickest way to fix this would probably be to go back and update
>>> recover-couchdb to recognize the new disk format. Although that gets
>>> harder now that snappy compression is involved.
>>
>> I've tried upgrading recover-couchdb to 1.2.0 couch codebase, but my
>> total lack of Erlang experience and CouchDB's internal really isn't
>> helping. :) I've contacted the maintainer of that project, hopefully
>> it isn't that big of change. BTW, if anyone else wants to do that, I'm
>> happy to sponsor the work.
>>
>> How hard would it be to just grep the compacted DB, extract data
>> around kv_node headers and decompress Snappy data with an external,
>> non-CouchDB-Erlang program? I'm willing to write the thing in C#, just
>> some basic pointers to the DB structure and what data gets compressed
>> and where the Document IDs and attachments data gets stored.
>>
>> Thanks.
>>
>> --Rudi
>
> I'm not familiar enough with the project to comment. I wouldn't think
> it'd be that hard, but its possible something changed enough to
> increase the difficulty. As to reading the format from C# or some
> other language its not something I would be all that interested in
> trying as the bang/buck ratio isn't all that favorable.

Re: recovering data from an unfinished compaction db

Posted by Paul Davis <pa...@gmail.com>.
On Mon, Sep 24, 2012 at 11:42 AM, Rudi Benkovič <ru...@whiletrue.com> wrote:
> On Mon, Sep 24, 2012 at 6:00 PM, Paul Davis <pa...@gmail.com> wrote:
>> The quickest way to fix this would probably be to go back and update
>> recover-couchdb to recognize the new disk format. Although that gets
>> harder now that snappy compression is involved.
>
> I've tried upgrading recover-couchdb to 1.2.0 couch codebase, but my
> total lack of Erlang experience and CouchDB's internal really isn't
> helping. :) I've contacted the maintainer of that project, hopefully
> it isn't that big of change. BTW, if anyone else wants to do that, I'm
> happy to sponsor the work.
>
> How hard would it be to just grep the compacted DB, extract data
> around kv_node headers and decompress Snappy data with an external,
> non-CouchDB-Erlang program? I'm willing to write the thing in C#, just
> some basic pointers to the DB structure and what data gets compressed
> and where the Document IDs and attachments data gets stored.
>
> Thanks.
>
> --Rudi

I'm not familiar enough with the project to comment. I wouldn't think
it'd be that hard, but it's possible something changed enough to
increase the difficulty. As for reading the format from C# or some
other language, it's not something I would be all that interested in
trying, as the bang/buck ratio isn't all that favorable.

Re: recovering data from an unfinished compaction db

Posted by Rudi Benkovič <ru...@whiletrue.com>.
On Mon, Sep 24, 2012 at 6:00 PM, Paul Davis <pa...@gmail.com> wrote:
> The quickest way to fix this would probably be to go back and update
> recover-couchdb to recognize the new disk format. Although that gets
> harder now that snappy compression is involved.

I've tried upgrading recover-couchdb to the 1.2.0 couch codebase, but my
total lack of Erlang experience and knowledge of CouchDB's internals
really isn't helping. :) I've contacted the maintainer of that project;
hopefully it isn't that big of a change. BTW, if anyone else wants to do
that, I'm happy to sponsor the work.

How hard would it be to just grep the compacted DB, extract data around
kv_node headers and decompress the Snappy data with an external,
non-CouchDB-Erlang program? I'm willing to write the thing in C#; I just
need some basic pointers to the DB structure: what data gets compressed,
and where the document IDs and attachment data get stored.
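
On the Snappy half of that question: the decompression step itself is
easy outside Erlang with an existing library (the sketch below assumes
the python-snappy package); the genuinely hard part, locating the
compressed Erlang terms inside the .couch file, is not attempted here.

# Minimal sketch: given a candidate blob of bytes cut out of the .couch
# file, try to Snappy-decompress it. Finding those candidate blobs (the
# actual on-disk layout of kv_nodes and doc bodies) is the hard part and
# is not attempted here.
# Requires the python-snappy package: pip install python-snappy
import snappy

def try_uncompress(blob):
    """Return the decompressed bytes, or None if the blob isn't valid Snappy data."""
    try:
        return snappy.uncompress(blob)
    except Exception:
        return None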

Thanks.

--Rudi

Re: recovering data from an unfinished compaction db

Posted by Paul Davis <pa...@gmail.com>.
I'd ignore the snappy error for now. There's no way this thing ran for
an hour and then suddenly hit an error in that code. If this is like a
bug I've seen before, the reason this runs out of RAM is that the code
searching for a header isn't releasing binary ref counts as it should.

The quickest way to fix this would probably be to go back and update
recover-couchdb to recognize the new disk format. Although that gets
harder now that snappy compression is involved.

On Mon, Sep 24, 2012 at 10:32 AM, Dave Cottlehuber <dc...@jsonified.com> wrote:
> On 24 September 2012 15:02, Robert Newson <rn...@apache.org> wrote:
>> {badmatch,{error,snappy_nif_not_loaded} makes me wonder if this 1.2
>> installation is even right.
>>
>> Can someone enlighten me? Is it possible to get this error spuriously?
>
> No. I'd be keen to see a bit of logfiles to understand what's not working.
>
>> Does running out of RAM cause erlang to unload NIF's?
>
> I don't think so on Windows.
>
> There's an R15B01 based build here:
> https://www.dropbox.com/sh/jeifcxpbtpo78ak/GG9fjWOyDt/Snapshots/20120524
> that has a fix for a more recent version of Windows server than I have
> to address one NIF loading error, although there are a number of
> possible causes.
>
> @Rudi can you give this a go & report back?
>
> A+
> Dave

Re: recovering data from an unfinished compaction db

Posted by Dave Cottlehuber <dc...@jsonified.com>.
On 24 September 2012 15:02, Robert Newson <rn...@apache.org> wrote:
> {badmatch,{error,snappy_nif_not_loaded} makes me wonder if this 1.2
> installation is even right.
>
> Can someone enlighten me? Is it possible to get this error spuriously?

No. I'd be keen to see a bit of the logfiles to understand what's not working.

> Does running out of RAM cause erlang to unload NIF's?

I don't think so on Windows.

There's an R15B01-based build here:
https://www.dropbox.com/sh/jeifcxpbtpo78ak/GG9fjWOyDt/Snapshots/20120524
that has a fix (for a more recent version of Windows Server than I have)
to address one NIF loading error, although there are a number of
possible causes.

@Rudi can you give this a go & report back?

A+
Dave

Re: recovering data from an unfinished compaction db

Posted by Robert Newson <rn...@apache.org>.
{badmatch,{error,snappy_nif_not_loaded} makes me wonder if this 1.2
installation is even right.

Can someone enlighten me? Is it possible to get this error spuriously?
Does running out of RAM cause erlang to unload NIF's?

B.

On 24 September 2012 13:32, Rudi Benkovic <ru...@whiletrue.com> wrote:
> Hello Robert,
>
> Saturday, September 22, 2012, 2:49:16 PM, you wrote:
>
>> Yup, CouchDB starts from the end of the file and looks backwards until
>> it finds a valid footer, it can take some time if that's a long way
>> from the end. It's not so much that CouchDB is skipping over "random
>> binary data", it just doesn't have any pointers to that area of the
>> file.
>
>> How long has couchdb been seeking backward for a footer? When you say
>> "doesn't recognize it", are you getting an error message?
>
> OK, I left CouchDB 1.2.0 (Windows) running for a good hour on the
> incomplete compaction DB. In the process it consumed all available OS
> memory (12GB), then crashed with an exception. An excerpt from the log
> file (debug priority) is attached.
>
> --
> Best regards,
>  Rudi                            mailto:rudib@whiletrue.com

Re: recovering data from an unfinished compaction db

Posted by Rudi Benkovic <ru...@whiletrue.com>.
Hello Robert,

Saturday, September 22, 2012, 2:49:16 PM, you wrote:

> Yup, CouchDB starts from the end of the file and looks backwards until
> it finds a valid footer, it can take some time if that's a long way
> from the end. It's not so much that CouchDB is skipping over "random
> binary data", it just doesn't have any pointers to that area of the
> file.

> How long has couchdb been seeking backward for a footer? When you say
> "doesn't recognize it", are you getting an error message?

OK, I left CouchDB 1.2.0 (Windows) running for a good hour on the
incomplete compaction DB. In the process it consumed all available OS
memory (12GB), then crashed with an exception. An excerpt from the log
file (debug priority) is attached.

-- 
Best regards,
 Rudi                            mailto:rudib@whiletrue.com

Re: recovering data from an unfinished compaction db

Posted by Rudi Benkovič <ru...@whiletrue.com>.
Ok, thanks for the tip. I left it running for about 15 minutes, but
that was on hardware with terribly slow IO (something like 10MB/s).
There was no exception (yet?), but neither Futon nor direct HTTP access
to the DB produced any output.

I'm copying the 180gig file across the Atlantic to try rebuilding it
on some fast SSD drives. Will let you know how it goes.

Thanks!

--Rudi

On Sat, Sep 22, 2012 at 2:49 PM, Robert Newson <rn...@apache.org> wrote:
> Yup, CouchDB starts from the end of the file and looks backwards until
> it finds a valid footer, it can take some time if that's a long way
> from the end. It's not so much that CouchDB is skipping over "random
> binary data", it just doesn't have any pointers to that area of the
> file.
>
> How long has couchdb been seeking backward for a footer? When you say
> "doesn't recognize it", are you getting an error message?
>
> B.
>
> On 22 September 2012 13:46, Rudi Benkovič <ru...@whiletrue.com> wrote:
>> CouchDB doesn't recognize it. It's probably corrupted because the
>> partition ran out of free space during compaction itself. Does CouchDB
>> try to find a valid root node by reading the DB file from the tail and
>> skipping over "random" binary data? In that case I might just have to
>> let it run for some time before it finds it.
>>
>> --Rudi
>>
>> On Sat, Sep 22, 2012 at 2:40 PM, Robert Newson <rn...@apache.org> wrote:
>>> The compacted file is a valid couchdb database, it should not be
>>> "corrupted", simply rename it to .couch.
>>>
>>> Obviously you will have lost any data that didn't make it over to the
>>> .compact file from the original .couch file that you have mistakenly
>>> deleted.
>>>
>>> B.
>>>
>>> On 22 September 2012 10:56, Rudi Benkovič <ru...@whiletrue.com> wrote:
>>>> Hi,
>>>>
>>>> I have a .couch file where compaction hasn't finished its job and
>>>> we've lost the pre-compaction production DB file (an unfortunate
>>>> sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
>>>> in disk format version 6, with snappy compression.
>>>>
>>>> I've tried using recover-couchdb
>>>> (https://github.com/jhs/recover-couchdb), but it crashes with the
>>>> message that disk format 6 isn't supported. I've also tried dropping
>>>> in 1.2.0 sources, but that also didn't work.
>>>>
>>>> Anyway, any hints on how to recover the data? 180GB file, lots of attachments.
>>>>
>>>> Many thanks!
>>>>
>>>> --Rudi

Re: recovering data from an unfinished compaction db

Posted by Robert Newson <rn...@apache.org>.
Yup, CouchDB starts from the end of the file and looks backwards until
it finds a valid footer, it can take some time if that's a long way
from the end. It's not so much that CouchDB is skipping over "random
binary data", it just doesn't have any pointers to that area of the
file.

How long has couchdb been seeking backward for a footer? When you say
"doesn't recognize it", are you getting an error message?

B.

On 22 September 2012 13:46, Rudi Benkovič <ru...@whiletrue.com> wrote:
> CouchDB doesn't recognize it. It's probably corrupted because the
> partition ran out of free space during compaction itself. Does CouchDB
> try to find a valid root node by reading the DB file from the tail and
> skipping over "random" binary data? In that case I might just have to
> let it run for some time before it finds it.
>
> --Rudi
>
> On Sat, Sep 22, 2012 at 2:40 PM, Robert Newson <rn...@apache.org> wrote:
>> The compacted file is a valid couchdb database, it should not be
>> "corrupted", simply rename it to .couch.
>>
>> Obviously you will have lost any data that didn't make it over to the
>> .compact file from the original .couch file that you have mistakenly
>> deleted.
>>
>> B.
>>
>> On 22 September 2012 10:56, Rudi Benkovič <ru...@whiletrue.com> wrote:
>>> Hi,
>>>
>>> I have a .couch file where compaction hasn't finished its job and
>>> we've lost the pre-compaction production DB file (an unfortunate
>>> sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
>>> in disk format version 6, with snappy compression.
>>>
>>> I've tried using recover-couchdb
>>> (https://github.com/jhs/recover-couchdb), but it crashes with the
>>> message that disk format 6 isn't supported. I've also tried dropping
>>> in 1.2.0 sources, but that also didn't work.
>>>
>>> Anyway, any hints on how to recover the data? 180GB file, lots of attachments.
>>>
>>> Many thanks!
>>>
>>> --Rudi

Re: recovering data from an unfinished compaction db

Posted by Rudi Benkovič <ru...@whiletrue.com>.
CouchDB doesn't recognize it. It's probably corrupted because the
partition ran out of free space during compaction itself. Does CouchDB
try to find a valid root node by reading the DB file from the tail and
skipping over "random" binary data? In that case I might just have to
let it run for some time before it finds it.

--Rudi

On Sat, Sep 22, 2012 at 2:40 PM, Robert Newson <rn...@apache.org> wrote:
> The compacted file is a valid couchdb database, it should not be
> "corrupted", simply rename it to .couch.
>
> Obviously you will have lost any data that didn't make it over to the
> .compact file from the original .couch file that you have mistakenly
> deleted.
>
> B.
>
> On 22 September 2012 10:56, Rudi Benkovič <ru...@whiletrue.com> wrote:
>> Hi,
>>
>> I have a .couch file where compaction hasn't finished its job and
>> we've lost the pre-compaction production DB file (an unfortunate
>> sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
>> in disk format version 6, with snappy compression.
>>
>> I've tried using recover-couchdb
>> (https://github.com/jhs/recover-couchdb), but it crashes with the
>> message that disk format 6 isn't supported. I've also tried dropping
>> in 1.2.0 sources, but that also didn't work.
>>
>> Anyway, any hints on how to recover the data? 180GB file, lots of attachments.
>>
>> Many thanks!
>>
>> --Rudi

Re: recovering data from an unfinished compaction db

Posted by Robert Newson <rn...@apache.org>.
The compacted file is a valid couchdb database, it should not be
"corrupted", simply rename it to .couch.

Obviously you will have lost any data that didn't make it over to the
.compact file from the original .couch file that you have mistakenly
deleted.

B.

On 22 September 2012 10:56, Rudi Benkovič <ru...@whiletrue.com> wrote:
> Hi,
>
> I have a .couch file where compaction hasn't finished its job and
> we've lost the pre-compaction production DB file (an unfortunate
> sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
> in disk format version 6, with snappy compression.
>
> I've tried using recover-couchdb
> (https://github.com/jhs/recover-couchdb), but it crashes with the
> message that disk format 6 isn't supported. I've also tried dropping
> in 1.2.0 sources, but that also didn't work.
>
> Anyway, any hints on how to recover the data? 180GB file, lots of attachments.
>
> Many thanks!
>
> --Rudi

Re: recovering data from an unfinished compaction db

Posted by Dave Cottlehuber <dc...@jsonified.com>.
On 22 September 2012 11:56, Rudi Benkovič <ru...@whiletrue.com> wrote:
> Hi,
>
> I have a .couch file where compaction hasn't finished its job and
> we've lost the pre-compaction production DB file (an unfortunate
> sysadmin error). Running CouchDB 1.2.0, so the new, corrupted file is
> in disk format version 6, with snappy compression.
>
> I've tried using recover-couchdb
> (https://github.com/jhs/recover-couchdb), but it crashes with the
> message that disk format 6 isn't supported. I've also tried dropping
> in 1.2.0 sources, but that also didn't work.
>
> Anyway, any hints on how to recover the data? 180GB file, lots of attachments.
>
> Many thanks!
>
> --Rudi

Hi Rudi,

If you can, jump on #couchdb IRC channel on freenode & let's get you
some real-time help on this.

A+
Dave