Posted to user@couchdb.apache.org by Rob Pettefar <rp...@gpslsolutions.com> on 2011/02/08 18:20:08 UTC

Views crash

  Hi guys
I have an issue with views crashing.
This has occurred on both Linux and Windows distributions of CouchDB 1.0.1

Sometimes the views for a particular database will break. In Futon they 
appear unresponsive.
The only way round this that I have seen is to delete and rebuild the 
view file for the database in question.

This seems to happen more often on the Windows version but I don't think 
it is anything to do with the >4Gb file issue.
Any help you could lend would be invaluable.

Thanks
Rob

I have included the error that was logged in the couchdb log file:

[Fri, 21 Jan 2011 12:18:28 GMT] [debug] [<0.738.0>] Exit from linked 
pid: {<0.742.0>,
                        {timeout,
                            {gen_server,call,
                                [couch_query_servers,
                                 {get_proc,<<"javascript">>}]}}}

[Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.738.0>] ** Generic server 
<0.738.0> terminating
** Last message in was {'EXIT',<0.742.0>,
                            {timeout,
                                {gen_server,call,
                                    [couch_query_servers,
                                     {get_proc,<<"javascript">>}]}}}
** When Server state == {group_state,undefined,<<"testdb">>,
                          {"../var/lib/couchdb",<<"testdb">>,
                           {group,
<<166,184,63,42,190,3,207,140,145,79,103,251,0,220,
                              240,226>>,
                            
nil,nil,<<"_design/testdb">>,<<"javascript">>,[],
                            [{view,0,
                              [<<"recent-items">>],
<<"/** \n * View: recent-items\n * A list of recently added items.\n * 
Possibly Obsolete\n */\nfunction(doc) {\n  if (doc.created_at) {\n    
emit(doc.created_at, doc);\n  }\n};">>,
                              nil,[],[]},
                             {view,1,
                              [<<"ReportJobs">>],
<<"/** \r\n * View: ReportJobs\r\n * Reporting function map. \r\n * 
Allow basic customer/job/page display\r\n * Changed 7th Dec 2010 - don't 
believed this is used\r\n */\r\nfunction(doc)\r\n{\r\n\tvar docType = 
doc.type.toLowerCase();\r\n\t\r\n\tif( docType == 
'job'){\r\n\t\temit([doc.customer, doc.jobname, 0, 0], 
doc);\t\r\n\t\t\r\n\t} else if ( docType == 'page'){\r\n\t\t\r\n\t\t// 
Add order job before page\r\n\t\tvar ji = 
doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji) ji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\temit([doc.customer, doc.jobname, ji, 1], 
doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,2,
                              [<<"archive">>],
<<"/** \r\n * View: archive\r\n * Get all documents of a job name chosen 
in the key for archiving and restoring\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() != 
'user'){\r\n\t\t\r\n\t\temit(doc.jobname, doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,3,
                              [<<"clear">>],
<<"/** \r\n * View: clear\r\n * Return a list of all things so we can 
delete them all\r\n */\r\nfunction(doc)\r\n{\t\r\n\tvar docType = 
doc.type.toLowerCase();\r\n\tif(docType == 'job' || docType == 'npc' || 
docType == 'cpc' || docType == 'txt' || docType == 'page' || docType == 
'style' || docType == 'tag')\r\n\t\temit(doc._id, doc._rev);\r\n}">>,
                              nil,[],[]},
                             {view,4,
                              [<<"cpc">>],
<<"/** \r\n * View: cpc\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'cpc'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit([doc.jobname, ji], 
doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,5,
                              [<<"document">>],
<<"/** \r\n * View: document\r\n * Return a list of job documents 
ordered by the job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job'){\r\n\t\t// JN: Changed for test but now this can be extended as 
required so good change\r\n\t\temit(doc.jobname, doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,6,
                              [<<"job">>],
<<"/** \r\n * View: joblist\r\n * Check if a document is of type 'job'. 
If so, The job and order by job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job')\r\n\t\temit(doc.jobname, doc);\r\n}">>,
                              nil,[],[]},
                             {view,7,
                              [<<"joblist">>],
<<"/** \r\n * View: joblist\r\n * Check if a document is of type 'job'. 
If so, return its id and job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job')\r\n\t\temit(doc._id, doc.jobname);\r\n}">>,
                              nil,[],[]},
                             {view,8,
                              [<<"jobrevlevels">>],
<<"/** \r\n * View: jobrevlevels\r\n * Return a list of jobs and their 
revision levels\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job')\r\n\t\temit(doc.jobname, doc.revlevels);\r\n}">>,
                              nil,[],[]},
                             {view,9,
                              [<<"npc">>],
<<"/** \r\n * View: npc\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'npc'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit([doc.jobname, ji], 
doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,10,
                              [<<"page">>],
<<"/** \r\n * View: page\r\n * Select all page documents and output them 
ordered by the job name and page number\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'page'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar 
ji = doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t\r\n\t\t// JN: Changed for test but now 
this can be extended as required so good 
change\r\n\t\temit([doc.jobname, ji], doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,11,
                              [<<"pagearraylist">>],
<<"/** \r\n * View: pagearraylist\r\n * Select all page documents and 
output a summary containing the page number, section ID and status, 
ordered by the job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 'job' || 
doc.type.toLowerCase() == 'page')\r\n\t{\r\n\t\t// JN: Changed for test 
but now this can be extended as required so good 
change\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// Split up job index into an 
array\r\n\t\temit(doc.jobname, {\"jobindex\" : ji, \"sectionID\" : 
doc.sectionID, \"status\" : doc.status });\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,12,
                              [<<"pagelist">>],
<<"/** \r\n * View: pagelist\r\n * Return the job index for all 
documents of type 'page' that have a specific job name. \r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'page')\r\n\t{\r\n\t\tvar batch = ( doc.batch == undefined ) ? \"NONE\" 
: doc.batch;\r\n\t\t\r\n\t\t// JN: Changed for test but now this can be 
extended as required so good change\r\n\t\temit(doc.jobname,{ 
\"jobindex\": doc.jobindex , \"sectionID\" : doc.sectionID , \"status\" 
: doc.status , \"batch\" : batch, \"table\" : doc.table});\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,13,
                              [<<"pages">>],
<<"/** \r\n * View: pages\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 'job' || 
doc.type.toLowerCase() == 'page')\r\n\t{\r\n\t\t// Split up job index 
into an array\r\n\t\tvar ji = doc.jobindex.split('.');\r\n\t\t\r\n\t\t// 
Convert from strings to numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit({\"jobname\" : 
doc.jobname, \"jobindex\" : ji},{ \"jobindex\": doc.jobindex, \"data\" : 
doc.data });\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,14,
                              [<<"pagesarray">>],
<<"/** \r\n * View: pagesarray\r\n * Pages map function but converting 
page index to an array.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 'job' || 
doc.type.toLowerCase() == 'page')\r\n\t{\r\n\t\t// Split up job index 
into an array\r\n\t\tvar ji = doc.jobindex.split('.');\r\n\t\t\r\n\t\t// 
Convert from strings to numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit({\"jobname\" : 
doc.jobname, \"jobindex\" : ji},{ \"jobindex\": doc.jobindex, \"data\" : 
doc.data });\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,15,
                              [<<"style">>],
<<"/** \r\n * View: style\r\n * Return the Style doc that have a 
specific style name job name. \r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'style')\r\n\t{\r\n\t\t// JN: Changed for test but now this can be 
extended as required so good 
change\r\n\t\temit(doc.stylename,doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,16,
                              [<<"stylelist">>],
<<"/** \r\n * View: stylelist\r\n * Check if a document is of type 
'job'. If so, return its id and job name.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'style')\r\n\t\temit(doc._id, doc.stylename);\r\n}">>,
                              nil,[],[]},
                             {view,17,
                              [<<"tag">>],
<<"/** \r\n * View: tag\r\n * Check if a document is of type 'tag'. 
Return the document ordered by the document name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
\"tag\")\r\n\t{\r\n\t\temit(doc.name, doc);\r\n\t}\r\n};">>,
                              nil,[],[]},
                             {view,18,
                              [<<"txt">>],
<<"/** \r\n * View: txt\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'txt'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit([doc.jobname, ji], 
doc);\r\n\t}\r\n}">>,
                              nil,[],[]},
                             {view,19,
                              [<<"user">>],
<<"/** \r\n * View: txt\r\n * Select all documents of type 'user' and 
output the user name, password and user levels ordered by the document 
ID.\r\n */\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'user'){\r\n\t\temit(doc._id, {\"username\" : doc.username, 
\r\n\t\t\t\t\t  \"password\" : doc.password, \r\n\t\t\t\t\t  
\"user_level_user\" : doc.user_level_user, \r\n\t\t\t\t\t  
\"user_level_dev\" : doc.user_level_dev, \r\n\t\t\t\t\t  
\"user_level_admin\" : doc.user_level_admin});\r\n\t}\r\n}">>,
                              nil,[],[]}],
                            nil,0,0,nil,nil}},
                          {group,
<<166,184,63,42,190,3,207,140,145,79,103,251,0,220,
                             240,226>>,
                           
{db,<0.173.0>,<0.174.0>,nil,<<"1295611751291001">>,
<0.171.0>,<0.176.0>,
                            {db_header,5,32387,0,
                             {375680498,{11560,19361}},
                             {375682738,30921},
                             {370499732,[]},
                             0,nil,nil,1000},
                            32387,
                            {btree,<0.171.0>,
                             {375680498,{11560,19361}},
                             #Fun<couch_db_updater.7.69395062>,
                             #Fun<couch_db_updater.8.86519079>,
                             #Fun<couch_btree.5.124754102>,
                             #Fun<couch_db_updater.9.24674233>},
                            {btree,<0.171.0>,
                             {375682738,30921},
                             #Fun<couch_db_updater.10.90337910>,
                             #Fun<couch_db_updater.11.13595824>,
                             #Fun<couch_btree.5.124754102>,
                             #Fun<couch_db_updater.12.34906778>},
                            {btree,<0.171.0>,
                             {370499732,[]},
                             #Fun<couch_btree.0.83553141>,
                             #Fun<couch_btree.1.30790806>,
                             #Fun<couch_btree.2.124754102>,nil},
                            32387,<<"testdb">>,
                            "../var/lib/couchdb/testdb.couch",[],[],nil,
                            {user_ctx,null,[],undefined},
                            nil,1000,
                            [before_header,after_header,on_file_open],
                            false},
<0.740.0>,<<"_design/testdb">>,<<"javascript">>,[],
                           [{view,0,
                             [<<"recent-items">>],
<<"/** \n * View: recent-items\n * A list of recently added items.\n * 
Possibly Obsolete\n */\nfunction(doc) {\n  if (doc.created_at) {\n    
emit(doc.created_at, doc);\n  }\n};">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,1,
                             [<<"ReportJobs">>],
<<"/** \r\n * View: ReportJobs\r\n * Reporting function map. \r\n * 
Allow basic customer/job/page display\r\n * Changed 7th Dec 2010 - don't 
believed this is used\r\n */\r\nfunction(doc)\r\n{\r\n\tvar docType = 
doc.type.toLowerCase();\r\n\t\r\n\tif( docType == 
'job'){\r\n\t\temit([doc.customer, doc.jobname, 0, 0], 
doc);\t\r\n\t\t\r\n\t} else if ( docType == 'page'){\r\n\t\t\r\n\t\t// 
Add order job before page\r\n\t\tvar ji = 
doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji) ji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\temit([doc.customer, doc.jobname, ji, 1], 
doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,2,
                             [<<"archive">>],
<<"/** \r\n * View: archive\r\n * Get all documents of a job name chosen 
in the key for archiving and restoring\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() != 
'user'){\r\n\t\t\r\n\t\temit(doc.jobname, doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,3,
                             [<<"clear">>],
<<"/** \r\n * View: clear\r\n * Return a list of all things so we can 
delete them all\r\n */\r\nfunction(doc)\r\n{\t\r\n\tvar docType = 
doc.type.toLowerCase();\r\n\tif(docType == 'job' || docType == 'npc' || 
docType == 'cpc' || docType == 'txt' || docType == 'page' || docType == 
'style' || docType == 'tag')\r\n\t\temit(doc._id, doc._rev);\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,4,
                             [<<"cpc">>],
<<"/** \r\n * View: cpc\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'cpc'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit([doc.jobname, ji], 
doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,5,
                             [<<"document">>],
<<"/** \r\n * View: document\r\n * Return a list of job documents 
ordered by the job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job'){\r\n\t\t// JN: Changed for test but now this can be extended as 
required so good change\r\n\t\temit(doc.jobname, doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,6,
                             [<<"job">>],
<<"/** \r\n * View: joblist\r\n * Check if a document is of type 'job'. 
If so, The job and order by job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job')\r\n\t\temit(doc.jobname, doc);\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,7,
                             [<<"joblist">>],
<<"/** \r\n * View: joblist\r\n * Check if a document is of type 'job'. 
If so, return its id and job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job')\r\n\t\temit(doc._id, doc.jobname);\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,8,
                             [<<"jobrevlevels">>],
<<"/** \r\n * View: jobrevlevels\r\n * Return a list of jobs and their 
revision levels\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'job')\r\n\t\temit(doc.jobname, doc.revlevels);\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,9,
                             [<<"npc">>],
<<"/** \r\n * View: npc\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'npc'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit([doc.jobname, ji], 
doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,10,
                             [<<"page">>],
<<"/** \r\n * View: page\r\n * Select all page documents and output them 
ordered by the job name and page number\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'page'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar 
ji = doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t\r\n\t\t// JN: Changed for test but now 
this can be extended as required so good 
change\r\n\t\temit([doc.jobname, ji], doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,11,
                             [<<"pagearraylist">>],
<<"/** \r\n * View: pagearraylist\r\n * Select all page documents and 
output a summary containing the page number, section ID and status, 
ordered by the job name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 'job' || 
doc.type.toLowerCase() == 'page')\r\n\t{\r\n\t\t// JN: Changed for test 
but now this can be extended as required so good 
change\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// Split up job index into an 
array\r\n\t\temit(doc.jobname, {\"jobindex\" : ji, \"sectionID\" : 
doc.sectionID, \"status\" : doc.status });\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,12,
                             [<<"pagelist">>],
<<"/** \r\n * View: pagelist\r\n * Return the job index for all 
documents of type 'page' that have a specific job name. \r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'page')\r\n\t{\r\n\t\tvar batch = ( doc.batch == undefined ) ? \"NONE\" 
: doc.batch;\r\n\t\t\r\n\t\t// JN: Changed for test but now this can be 
extended as required so good change\r\n\t\temit(doc.jobname,{ 
\"jobindex\": doc.jobindex , \"sectionID\" : doc.sectionID , \"status\" 
: doc.status , \"batch\" : batch, \"table\" : doc.table});\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,13,
                             [<<"pages">>],
<<"/** \r\n * View: pages\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 'job' || 
doc.type.toLowerCase() == 'page')\r\n\t{\r\n\t\t// Split up job index 
into an array\r\n\t\tvar ji = doc.jobindex.split('.');\r\n\t\t\r\n\t\t// 
Convert from strings to numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit({\"jobname\" : 
doc.jobname, \"jobindex\" : ji},{ \"jobindex\": doc.jobindex, \"data\" : 
doc.data });\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,14,
                             [<<"pagesarray">>],
<<"/** \r\n * View: pagesarray\r\n * Pages map function but converting 
page index to an array.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 'job' || 
doc.type.toLowerCase() == 'page')\r\n\t{\r\n\t\t// Split up job index 
into an array\r\n\t\tvar ji = doc.jobindex.split('.');\r\n\t\t\r\n\t\t// 
Convert from strings to numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit({\"jobname\" : 
doc.jobname, \"jobindex\" : ji},{ \"jobindex\": doc.jobindex, \"data\" : 
doc.data });\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,15,
                             [<<"style">>],
<<"/** \r\n * View: style\r\n * Return the Style doc that have a 
specific style name job name. \r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'style')\r\n\t{\r\n\t\t// JN: Changed for test but now this can be 
extended as required so good 
change\r\n\t\temit(doc.stylename,doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,16,
                             [<<"stylelist">>],
<<"/** \r\n * View: stylelist\r\n * Check if a document is of type 
'job'. If so, return its id and job name.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'style')\r\n\t\temit(doc._id, doc.stylename);\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,17,
                             [<<"tag">>],
<<"/** \r\n * View: tag\r\n * Check if a document is of type 'tag'. 
Return the document ordered by the document name\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
\"tag\")\r\n\t{\r\n\t\temit(doc.name, doc);\r\n\t}\r\n};">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,18,
                             [<<"txt">>],
<<"/** \r\n * View: txt\r\n * For all documents of type 'job' and 
'page', return all the data ordered by job name and job index.\r\n 
*/\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'txt'){\r\n\t\t\r\n\t\t// Split up job index into an array\r\n\t\tvar ji 
= doc.jobindex.split('.');\r\n\t\t\r\n\t\t// Convert from strings to 
numbers\r\n\t\tfor (i in ji)\r\n\t\t\tji[i] = 
parseInt(ji[i]);\r\n\t\t\r\n\t\t// JN: Changed for test but now this can 
be extended as required so good change\r\n\t\temit([doc.jobname, ji], 
doc);\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]},
                            {view,19,
                             [<<"user">>],
<<"/** \r\n * View: txt\r\n * Select all documents of type 'user' and 
output the user name, password and user levels ordered by the document 
ID.\r\n */\r\nfunction(doc)\r\n{\r\n\tif(doc.type.toLowerCase() == 
'user'){\r\n\t\temit(doc._id, {\"username\" : doc.username, 
\r\n\t\t\t\t\t  \"password\" : doc.password, \r\n\t\t\t\t\t  
\"user_level_user\" : doc.user_level_user, \r\n\t\t\t\t\t  
\"user_level_dev\" : doc.user_level_dev, \r\n\t\t\t\t\t  
\"user_level_admin\" : doc.user_level_admin});\r\n\t}\r\n}">>,
                             {btree,<0.740.0>,nil,
                              #Fun<couch_btree.3.83553141>,
                              #Fun<couch_btree.4.30790806>,
                              #Fun<couch_view.less_json_ids.2>,
                              #Fun<couch_view_group.10.120246376>},
                             [],[]}],
                           
{btree,<0.740.0>,nil,#Fun<couch_btree.0.83553141>,
                            #Fun<couch_btree.1.30790806>,
                            #Fun<couch_btree.2.124754102>,nil},
                           0,0,nil,nil},
<0.742.0>,nil,false,
                          [{{<0.728.0>,#Ref<0.0.0.60542>},32387}],
<0.743.0>}
** Reason for termination ==
** {timeout,{gen_server,call,
                         [couch_query_servers,{get_proc,<<"javascript">>}]}}


[Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.738.0>] {error_report,<0.33.0>,
     {<0.738.0>,crash_report,
      [[{initial_call,{couch_view_group,init,['Argument__1']}},
        {pid,<0.738.0>},
        {registered_name,[]},
        {error_info,
            {exit,
                {timeout,
                    {gen_server,call,
                        [couch_query_servers,{get_proc,<<"javascript">>}]}},
                [{gen_server,terminate,6},{proc_lib,init_p_do_apply,3}]}},
        {ancestors,
            
[couch_view,couch_secondary_services,couch_server_sup,<0.34.0>]},
        {messages,[]},
        {links,[<0.740.0>,<0.103.0>]},
        {dictionary,[]},
        {trap_exit,true},
        {status,running},
        {heap_size,2584},
        {stack_size,24},
        {reductions,855}],
       []]}}

[Fri, 21 Jan 2011 12:18:28 GMT] [debug] [<0.728.0>] request_group Error 
{timeout,
                         {gen_server,call,
                             [couch_query_servers,
                              {get_proc,<<"javascript">>}]}}

[Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.740.0>] ** Generic server 
<0.740.0> terminating
** Last message in was {'EXIT',<0.738.0>,
                            {timeout,
                                {gen_server,call,
                                    [couch_query_servers,
                                     {get_proc,<<"javascript">>}]}}}
** When Server state == 
{file,{file_descriptor,prim_file,{#Port<0.4047>,596}},
                               0,51}
** Reason for termination ==
** {timeout,{gen_server,call,
                         [couch_query_servers,{get_proc,<<"javascript">>}]}}


[Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.740.0>] {error_report,<0.33.0>,
     {<0.740.0>,crash_report,
      [[{initial_call,{couch_file,init,['Argument__1']}},
        {pid,<0.740.0>},
        {registered_name,[]},
        {error_info,
            {exit,
                {timeout,
                    {gen_server,call,
                        [couch_query_servers,{get_proc,<<"javascript">>}]}},
                [{gen_server,terminate,6},{proc_lib,init_p_do_apply,3}]}},
        {ancestors,
            [<0.738.0>,couch_view,couch_secondary_services,couch_server_sup,
<0.34.0>]},
        {messages,[]},
        {links,[#Port<0.4047>,<0.743.0>]},
        {dictionary,[]},
        {trap_exit,true},
        {status,running},
        {heap_size,610},
        {stack_size,24},
        {reductions,1435}],
       [{neighbour,
            [{pid,<0.743.0>},
             {registered_name,[]},
             {initial_call,{couch_ref_counter,init,['Argument__1']}},
             {current_function,{gen_server,loop,6}},
             {ancestors,
                 [<0.738.0>,couch_view,couch_secondary_services,
                  couch_server_sup,<0.34.0>]},
             {messages,
                 [{'DOWN',#Ref<0.0.0.60538>,process,<0.738.0>,
                      {timeout,
                          {gen_server,call,
                              [couch_query_servers,
                               {get_proc,<<"javascript">>}]}}}]},
             {links,[<0.740.0>]},
             {dictionary,[]},
             {trap_exit,false},
             {status,runnable},
             {heap_size,233},
             {stack_size,9},
             {reductions,47}]}]]}}

Re: Views crash

Posted by Nils Breunese <N....@vpro.nl>.
Rob Pettefar wrote:

> I shall give that "include_docs=true" thing a try. Is this just appended
> to the URL like a key?

Yes, check out the list of querying options in the CouchDB view API: http://wiki.apache.org/couchdb/HTTP_view_API#Querying_Options
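
For example, with the database and design doc names from your log (testdb,
_design/testdb) and the joblist view, it is just another query parameter
(host and port assumed to be the defaults):

    curl 'http://127.0.0.1:5984/testdb/_design/testdb/_view/joblist?include_docs=true'

The other querying options on that wiki page are appended the same way;
keys and ranges need to be JSON-encoded, include_docs is just true/false.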

Nils.
------------------------------------------------------------------------
 VPRO   www.vpro.nl
------------------------------------------------------------------------

Re: Views crash

Posted by Rob Pettefar <rp...@gpslsolutions.com>.
  Hi again
Here are the map functions we have.
I shall give that "include_docs=true" thing a try. Is this just appended
to the URL like a key?

Thanks for your help guys.
Rob





function(doc)
{
     if(doc.type.toLowerCase() != 'user' && doc.jobname != undefined){

         emit(doc.jobname, doc);
     }
}

function(doc)
{    if(doc.type != undefined && doc._id != undefined && doc._rev != 
undefined){
         var docType = doc.type.toLowerCase();
         if(docType == 'job' || docType == 'npc' || docType == 'cpc' || 
docType == 'txt' || docType == 'page' || docType == 'style' || docType 
== 'tag')
             emit(doc._id, doc._rev);
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'cpc' && doc.jobindex != undefined){

         // Split up job index into an array
         var ji = doc.jobindex.split('.');

         // Convert from strings to numbers
         for (i in ji)
             ji[i] = parseInt(ji[i]);

         emit([doc.jobname, ji], doc);
     }
}

function(doc)
{
     if(doc.type != undefined && doc._id != undefined && doc._rev != 
undefined && doc.jobname != undefined){

         var docType = doc.type.toLowerCase();

         if(docType == 'job' || docType == 'npc' || docType == 'cpc' ||  
docType == 'txt' || docType == 'page' || docType == 'tag')

             emit(doc.jobname, {"_id":doc._id, "_rev":doc._rev, 
"type":doc.type});
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'job' && doc.jobname != undefined){
         emit(doc.jobname, doc);
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'job' && doc._id != undefined && 
doc.jobname != undefined)
         emit(doc._id, doc.jobname);
}

function(doc)
{
     if(doc.type.toLowerCase() == 'job' && doc.jobname != undefined && 
doc.revlevels != undefined)
         emit(doc.jobname, doc.revlevels);
}

function(doc)
{
     if(doc.type.toLowerCase() == 'npc' && doc.jobindex != undefined && 
doc.jobname != undefined){

         // Split up job index into an array
         var ji = doc.jobindex.split('.');

         // Convert from strings to numbers
         for (i in ji)
             ji[i] = parseInt(ji[i]);

         emit([doc.jobname, ji], doc);
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'page' && doc.jobname != undefined && 
doc.jobindex != undefined){

         // Split up job index into an array
         var ji = doc.jobindex.split('.');

         // Convert from strings to numbers
         for (i in ji)
             ji[i] = parseInt(ji[i]);


         emit([doc.jobname, ji], doc);
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'page' && doc.jobname != undefined && 
doc.sectionID != undefined && doc.status != undefined && doc.table != 
undefined)
     {
         var batch = ( doc.batch == undefined ) ? "NONE" : doc.batch;

         emit(doc.jobname,{ "_id":doc._id, "jobindex": doc.jobindex , 
"sectionID" : doc.sectionID , "status" : doc.status , "batch" : batch, 
"table" : doc.table});
     }
}

function(doc)
{
     if(doc.jobindex != undefined && doc.jobname != undefined && 
doc.data != undefined){
         if(doc.type.toLowerCase() == 'job' || doc.type.toLowerCase() == 
'page')
         {
             // Split up job index into an array
             var ji = doc.jobindex.split('.');

             // Convert from strings to numbers
             for (i in ji)
                 ji[i] = parseInt(ji[i]);

             emit({"jobname" : doc.jobname, "jobindex" : ji},{ 
"jobindex": doc.jobindex, "data" : doc.data });
         }
     }
}

function(doc)
{
         var docType = doc.type.toLowerCase();

         if( docType == 'job'){
                 emit([doc.customer, doc.jobname, 0, 0], doc);

         } else if ( docType == 'page'){

                 // Add order job before page
                 var ji = doc.jobindex.split('.');

                 // Convert from strings to numbers
                 for (i in ji) ji[i] = parseInt(ji[i]);

                 emit([doc.customer, doc.jobname, ji, 1], doc);
         }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'style' && doc.stylename != undefined)
     {
         emit(doc.stylename,doc);
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'style')
         emit(doc._id, doc.stylename);
}

function(doc)
{
     if(doc.type.toLowerCase() == 'txt' && doc.jobname != undefined){

         // Split up job index into an array
         var ji = doc.jobindex.split('.');

         // Convert from strings to numbers
         for (i in ji)
             ji[i] = parseInt(ji[i]);

         emit([doc.jobname, ji], doc);
     }
}

function(doc)
{
     if(doc.type.toLowerCase() == 'user' && doc.username != undefined &&
doc.password != undefined && doc.user_level_user != undefined &&
doc.user_level_dev != undefined && doc.user_level_admin != undefined){
         emit(doc._id, {"username" : doc.username,
                       "password" : doc.password,
                       "user_level_user" : doc.user_level_user,
                       "user_level_dev" : doc.user_level_dev,
                       "user_level_admin" : doc.user_level_admin});
     }
}


On 09/02/2011 10:41, Robert Newson wrote:
> One reason I can see for the view update taking so long is you have a
> lot of views that emit the full 'doc' as the value. A lighter, faster
> alternative is to emit null for the value and use ?include_docs=true
> to get the doc at query time from the database file instead of the
> view file.
>
> That shouldn't be necessary though, and receiving this timeout means
> it took a very long time to get a response. I read as much of your
> view code as I could in the form above but didn't see anything
> obviously contentious. If you could post your map/reduce functions in
> a clearer form (i.e, without all the escaping), perhaps something will
> stand out.
>
> B.
>
> On 9 February 2011 09:45, Dave Cottlehuber <da...@muse.net.nz> wrote:
>> On 9 February 2011 06:20, Rob Pettefar <rp...@gpslsolutions.com> wrote:
>>>   Hi guys
>>> I have an issue with views crashing.
>>> This has occurred on both Linux and Windows distributions of CouchDB 1.0.1
>> Has this issue occurred on those platforms, on a previous version?
>>
>>> Sometimes the views for a particular database will break. In Futon they
>>> appear unresponsive.
>>> The only way round this that I have seen is to delete and rebuild the view
>>> file for the database in question.
>>>
>>> This seems to happen more often on the Windows version but I don't think it
>>> is anything to do with the >4Gb file issue.
>>> Any help you could lend would be invaluable.
>>>
>>> Thanks
>>> Rob
>>>
>>> I have included the error that was logged in the couchdb log file:
>>>
>>> [Fri, 21 Jan 2011 12:18:28 GMT] [debug] [<0.738.0>] Exit from linked pid:
>>> {<0.742.0>,
>>>                        {timeout,
>>>                            {gen_server,call,
>>>                                [couch_query_servers,
>>>                                 {get_proc,<<"javascript">>}]}}}
>>>
>>> [Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.738.0>] ** Generic server
>>> <0.738.0>  terminating
>>> ** Last message in was {'EXIT',<0.742.0>,
>>>                            {timeout,
>>>                                {gen_server,call,
>>>                                    [couch_query_servers,
>>>                                     {get_proc,<<"javascript">>}]}}}
>>> ** When Server state == {group_state,undefined,<<"testdb">>,
>>>                          {"../var/lib/couchdb",<<"testdb">>,
>>>                           {group,
>>>                              [<<"npc">>],
>> Hi Rob
>>
>> by no means am I am expert but it looks like this is a "normal"
>> timeout in couch_query_servers.
>>
>>     ProcTimeout = list_to_integer(couch_config:get(
>>                         "couchdb", "os_process_timeout", "5000")),
>>
>> If so you can try upping this from the default 5 seconds in local.ini:
>>
>> [couchdb]
>> os_process_timeout = 5000 ; 5 seconds. for view and external servers.
>>
>> & restart.
>>
>> The more important question is - why should these views take so long to process?
>>
>> A+
>> Dave
>>



Re: Views crash

Posted by Robert Newson <ro...@gmail.com>.
One reason I can see for the view update taking so long is you have a
lot of views that emit the full 'doc' as the value. A lighter, faster
alternative is to emit null for the value and use ?include_docs=true
to get the doc at query time from the database file instead of the
view file.
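
As a rough sketch (untested), using the recent-items view from your log,
the map would become:

    function(doc) {
      if (doc.created_at) {
        // emit null as the value; the doc itself stays in the database file
        emit(doc.created_at, null);
      }
    }

and you'd query it with something like
/testdb/_design/testdb/_view/recent-items?include_docs=true to get the
docs back in each row.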

That shouldn't be necessary though, and receiving this timeout means
it took a very long time to get a response. I read as much of your
view code as I could in the form above but didn't see anything
obviously contentious. If you could post your map/reduce functions in
a clearer form (i.e., without all the escaping), perhaps something will
stand out.

B.

On 9 February 2011 09:45, Dave Cottlehuber <da...@muse.net.nz> wrote:
> On 9 February 2011 06:20, Rob Pettefar <rp...@gpslsolutions.com> wrote:
>>  Hi guys
>> I have an issue with views crashing.
>> This has occurred on both Linux and Windows distributions of CouchDB 1.0.1
>
> Has this issue occurred on those platforms, on a previous version?
>
>> Sometimes the views for a particular database will break. In Futon they
>> appear unresponsive.
>> The only way round this that I have seen is to delete and rebuild the view
>> file for the database in question.
>>
>> This seems to happen more often on the Windows version but I don't think it
>> is anything to do with the >4Gb file issue.
>> Any help you could lend would be invaluable.
>>
>> Thanks
>> Rob
>>
>> I have included the error that was logged in the couchdb log file:
>>
>> [Fri, 21 Jan 2011 12:18:28 GMT] [debug] [<0.738.0>] Exit from linked pid:
>> {<0.742.0>,
>>                       {timeout,
>>                           {gen_server,call,
>>                               [couch_query_servers,
>>                                {get_proc,<<"javascript">>}]}}}
>>
>> [Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.738.0>] ** Generic server
>> <0.738.0> terminating
>> ** Last message in was {'EXIT',<0.742.0>,
>>                           {timeout,
>>                               {gen_server,call,
>>                                   [couch_query_servers,
>>                                    {get_proc,<<"javascript">>}]}}}
>> ** When Server state == {group_state,undefined,<<"testdb">>,
>>                         {"../var/lib/couchdb",<<"testdb">>,
>>                          {group,
>>                             [<<"npc">>],
>
> Hi Rob
>
> by no means am I am expert but it looks like this is a "normal"
> timeout in couch_query_servers.
>
>    ProcTimeout = list_to_integer(couch_config:get(
>                        "couchdb", "os_process_timeout", "5000")),
>
> If so you can try upping this from the default 5 seconds in local.ini:
>
> [couchdb]
> os_process_timeout = 5000 ; 5 seconds. for view and external servers.
>
> & restart.
>
> The more important question is - why should these views take so long to process?
>
> A+
> Dave
>

Re: Views crash

Posted by Dave Cottlehuber <da...@muse.net.nz>.
On 9 February 2011 06:20, Rob Pettefar <rp...@gpslsolutions.com> wrote:
>  Hi guys
> I have an issue with views crashing.
> This has occurred on both Linux and Windows distributions of CouchDB 1.0.1

Has this issue occurred on those platforms, on a previous version?

> Sometimes the views for a particular database will break. In Futon they
> appear unresponsive.
> The only way round this that I have seen is to delete and rebuild the view
> file for the database in question.
>
> This seems to happen more often on the Windows version but I don't think it
> is anything to do with the >4Gb file issue.
> Any help you could lend would be invaluable.
>
> Thanks
> Rob
>
> I have included the error that was logged in the couchdb log file:
>
> [Fri, 21 Jan 2011 12:18:28 GMT] [debug] [<0.738.0>] Exit from linked pid:
> {<0.742.0>,
>                       {timeout,
>                           {gen_server,call,
>                               [couch_query_servers,
>                                {get_proc,<<"javascript">>}]}}}
>
> [Fri, 21 Jan 2011 12:18:28 GMT] [error] [<0.738.0>] ** Generic server
> <0.738.0> terminating
> ** Last message in was {'EXIT',<0.742.0>,
>                           {timeout,
>                               {gen_server,call,
>                                   [couch_query_servers,
>                                    {get_proc,<<"javascript">>}]}}}
> ** When Server state == {group_state,undefined,<<"testdb">>,
>                         {"../var/lib/couchdb",<<"testdb">>,
>                          {group,
>                             [<<"npc">>],

Hi Rob

by no means am I an expert but it looks like this is a "normal"
timeout in couch_query_servers.

    ProcTimeout = list_to_integer(couch_config:get(
                        "couchdb", "os_process_timeout", "5000")),

If so you can try upping this from the default 5 seconds in local.ini:

[couchdb]
os_process_timeout = 5000 ; 5 seconds. for view and external servers.

& restart.
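
For example, to try a 60 second timeout (the value is only a guess, tune
it to your data):

[couchdb]
os_process_timeout = 60000 ; 60 seconds

I believe you can also set it on a running server through the config API, e.g.

curl -X PUT http://127.0.0.1:5984/_config/couchdb/os_process_timeout -d '"60000"'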

The more important question is - why should these views take so long to process?

A+
Dave

Re: Windows Server 2003 4GB file issue not solved

Posted by Rob Pettefar <rp...@gpslsolutions.com>.
  Hi there guys
I did a full clean install of everything and things are working nicely 
again.
I think we were getting false negatives when testing with the 4GB
database file. I didn't know that it would overwrite; I just assumed it
was append-only. That would probably explain why it can't be read properly.

Thanks for your help in resolving this
Rob

On 16/02/2011 09:00, Dave Cottlehuber wrote:
> On 16 February 2011 01:12, Rob Pettefar <rp...@gpslsolutions.com> wrote:
>> Hi guys
>> I am using Windows 7 Ultimate and I managed to get round the 4GB database
>> limit using this installer:
>> https://github.com/dch/couchdb/downloads/setup-couchdb-1.0.2_otp_R14B01_spidermonkey_1.8.5.exe
>> With this I have a 5.8GB database that is happily running, even if I restart
>> the service.
>>
>> However when this was applied to a Windows Server 2003 Standard Edition
>> machine the 4GB limit still seems to exist.
> Hi Rob
>
> If this is still causing errors then it's pretty serious -
> over-written (lost) data at the head of the couch db file is the worst
> case scenario. The 1.0.2 binary was checked for this issue on both 32
> and 64bit platforms & came up clean.
>
> Firstly more detail please -  William of Ockham[3] is betting on the first two
> - was this a clean install of CouchDB into a new folder, or
> overwriting an existing install (not recommended)?
> - are you running on an NTFS volume for /var/lib/couchdb/*?
> - what caused you to see a 4GiB limit? errors, logs, unexpected behaviour?
>
> History -
> The 4GiB error is caused by inability in erlang 13B04 and earlier
> versions to append data to a file that is already > 4 GiB on windows
> only. This means that:
>
> 1) A running CouchDB instance can't grow a db file from < 4 GiB to
> over 4 GiB without crashing.
>
> 2) On restart, the first time that > 4GiB db file has a reason to
> write - e.g. you add a doc - it starts over-writing not from the end
> (append) but from the beginning of the DB = lost header = possible
> lost data (I think depending if compaction is needed or not).
>
> This was first resolved in 14B01 only with Juhani's patch.
>
> 3 checks you can do on a *clean* folder install of CouchDB
>
> 1) confirm what erl/werl version you are using
> 2) confirm there are no issues at erlang level using werl.exe shell
> - create a large file (I use one around 128MiB) & use Juhani's quick
> checker [1] below from the erlang shell to drive it past 4GiB. This
> should be successful. Note the size & the content of the header of the
> file.
> - re-run the quick checker again - each time a segment is written you
> would normally (e.g. in 1 above) see the file size grow. It does not,
> and examining the header shows you have overwritten it.
>
> 3) less direct but more couchy:
> - make a large DB just under 4GiB = 4096 * 1024 * 1024 -1 max size. I
> upload many docs with large attachments for this.
> - stop couchdb & record header and filesize of db.
> - upload more docs & increase the db size past 4GiB -- if you can....
> - watch couch/erlang die if error is present
> http://friendpaste.com/6moY1vyUyIsIX4t5N4hZ1B?rev=626638353230
> - stop couchdb & record header and filesize of db.
> - start couchdb & relaunch the script again.
> - confirm header changed, and filesize has not, even though docs were written.
>
>> Is there something funky with Server 2003 that would cause this?
> Not to my knowledge. It would be an OS bug in this case if the API is
> inconsistent between XP/2003/2008 32 and 64 bit.
>
>> Is there a binary or installer available that would work instead?
> At the moment I suspect
>
>> Also do database files that grow to the magical 4GB limit become corrupt or
>> are they usable on other setups without this issue?
> All fine so long as you don't restart. Best is to stop couch, move the
> db files safely, and upgrade.
>
>> (I have to ask as the 2003 Server machine is over in china and will take an
>> age to transfer it for examination)
>>
>> Any help on this matter would be greatly appreciated.
>> Rob
>>
> Look for me on irc d_ch & we can go through the gory bits. I'm in UTC
> + 12 but very keen to help here.
> A+
> Dave
>
> [3] http://www.phys.ncku.edu.tw/mirrors/physicsfaq/General/occam.html
> [2] http://friendpaste.com/6moY1vyUyIsIX4t5N4hZ1B?rev=626638353230
> [1] filetest.erl
>
> -module(filetest).
> -export([main/0]).
> main() ->
> {ok, Binary}=file:read_file("128MiB"),
> {ok, WriteDescr} = file:open("grow_past_4_GiB.file", [raw, append]),
>
> loop(1000, WriteDescr,Binary),
> file:close(WriteDescr).
>
> loop(0,_NotNeeded,_NotNeeded) ->  ok;
>
> loop(N,WriteDescr,Binary) ->
> file:write(WriteDescr,Binary),
> io:format("wrote ~w\n", [N]),
> loop(N-1,WriteDescr,Binary).


Re: Windows Server 2003 4GB file issue not solved

Posted by Dave Cottlehuber <da...@muse.net.nz>.
On 16 February 2011 01:12, Rob Pettefar <rp...@gpslsolutions.com> wrote:
> Hi guys
> I am using Windows 7 Ultimate and I managed to get round the 4GB database
> limit using this installer:
> https://github.com/dch/couchdb/downloads/setup-couchdb-1.0.2_otp_R14B01_spidermonkey_1.8.5.exe
> With this I have a 5.8GB database that is happily running, even if I restart
> the service.
>
> However when this was applied to a Windows Server 2003 Standard Edition
> machine the 4GB limit still seems to exist.

Hi Rob

If this is still causing errors then it's pretty serious -
over-written (lost) data at the head of the couch db file is the worst
case scenario. The 1.0.2 binary was checked for this issue on both 32
and 64bit platforms & came up clean.

Firstly more detail please -  William of Ockham[3] is betting on the first two
- was this a clean install of CouchDB into a new folder, or
overwriting an existing install (not recommended)?
- are you running on an NTFS volume for /var/lib/couchdb/*?
- what caused you to see a 4GiB limit? errors, logs, unexpected behaviour?

History -
The 4GiB error is caused by inability in erlang 13B04 and earlier
versions to append data to a file that is already > 4 GiB on windows
only. This means that:

1) A running CouchDB instance can't grow a db file from < 4 GiB to
over 4 GiB without crashing.

2) On restart, the first time that  > 4GiB db file has a reason to
write - e.g. you add a doc - it starts over-writing not from the end
(append) but from the beginning of the DB = lost header = possible
lost data (I think depending if compaction is needed or not).

This was first resolved in 14B01 only with Juhani's patch.

3 checks you can do on a *clean* folder install of CouchDB

1) confirm what erl/werl version you are using (see the shell snippet after these steps)
2) confirm there are no issues at erlang level using werl.exe shell
- create a large file (I use one around 128MiB) & use Juhani's quick
checker [1] below from the erlang shell to drive it past 4GiB. This
should be successful. Note the size & the content of the header of the
file.
- re-run the quick checker again - each time a segment is written you
would normally (e.g. in 1 above) see the file size grow. It does not,
and examining the header shows you have overwritten it.

3) less direct but more couchy:
- make a large DB just under 4GiB = 4096 * 1024 * 1024 -1 max size. I
upload many docs with large attachments for this.
- stop couchdb & record header and filesize of db.
- upload more docs & increase the db size past 4GiB -- if you can....
- watch couch/erlang die if error is present
http://friendpaste.com/6moY1vyUyIsIX4t5N4hZ1B?rev=626638353230
- stop couchdb & record header and filesize of db.
- start couchdb & relaunch the script again.
- confirm header changed, and filesize has not, even though docs were written.
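
For check 1, something like this in the werl.exe shell shows the OTP
release (you want R14B01 or later for the append fix):

    1> erlang:system_info(otp_release).
    "R14B01"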

> Is there something funky with Server 2003 that would cause this?

Not to my knowledge. It would be an OS bug in this case if the API is
inconsistent between XP/2003/2008 32 and 64 bit.

> Is there a binary or installer available that would work instead?
At the moment I suspect

> Also do database files that grow to the magical 4GB limit become corrupt or
> are they usable on other setups without this issue?

All fine so long as you don't restart. Best is to stop couch, move the
db files safely, and upgrade.

> (I have to ask as the 2003 Server machine is over in china and will take an
> age to transfer it for examination)
>
> Any help on this matter would be greatly appreciated.
> Rob
>

Look for me on irc d_ch & we can go through the gory bits. I'm in UTC
+ 12 but very keen to help here.
A+
Dave

[3] http://www.phys.ncku.edu.tw/mirrors/physicsfaq/General/occam.html
[2] http://friendpaste.com/6moY1vyUyIsIX4t5N4hZ1B?rev=626638353230
[1] filetest.erl

-module(filetest).
-export([main/0]).

%% Append a ~128 MiB chunk 1000 times to push the target file well past 4 GiB.
main() ->
    {ok, Binary} = file:read_file("128MiB"),
    {ok, WriteDescr} = file:open("grow_past_4_GiB.file", [raw, append]),
    loop(1000, WriteDescr, Binary),
    file:close(WriteDescr).

loop(0, _WriteDescr, _Binary) ->
    ok;
loop(N, WriteDescr, Binary) ->
    file:write(WriteDescr, Binary),
    io:format("wrote ~w\n", [N]),
    loop(N-1, WriteDescr, Binary).
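
To run it from the werl shell (assuming filetest.erl and the 128MiB input
file are in the current directory):

    1> c(filetest).
    {ok,filetest}
    2> filetest:main().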

Re: Windows Server 2003 4GB file issue not solved

Posted by Nikolai Teofilov <n....@gmail.com>.
Rob,

Are you sure you have removed the old version? Sometimes Erlang still has processes running in the background, so my suspicion is that you may still be running the old version.
A reinstall and a reboot could help ... it is "Windows", after all.
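
A quick way to check for leftover Erlang processes from a command prompt
(just a sketch):

    tasklist | findstr /i "erl werl"

If erl.exe or werl.exe still shows up after you think CouchDB is stopped,
kill those processes (or reboot) before reinstalling.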

Another suggestion: don't run CouchDB as a Windows service.

http://support.microsoft.com/kb/283037
Is this somehow related to your problem?

  
Regards
Nikolai


On 15.02.2011, at 13:12, Rob Pettefar wrote:

> Hi guys
> I am using Windows 7 Ultimate and I managed to get round the 4GB database limit using this installer:
> https://github.com/dch/couchdb/downloads/setup-couchdb-1.0.2_otp_R14B01_spidermonkey_1.8.5.exe
> With this I have a 5.8GB database that is happily running, even if I restart the service.
> 
> However when this was applied to a Windows Server 2003 Standard Edition machine the 4GB limit still seems to exist.
> Is there something funky with Server 2003 that would cause this?
> Is there a binary or installer available that would work instead?
> 
> Also do database files that grow to the magical 4GB limit become corrupt or are they usable on other setups without this issue?
> (I have to ask as the 2003 Server machine is over in china and will take an age to transfer it for examination)
> 
> Any help on this matter would be greatly appreciated.
> Rob


Windows Server 2003 4GB file issue not solved

Posted by Rob Pettefar <rp...@gpslsolutions.com>.
Hi guys
I am using Windows 7 Ultimate and I managed to get round the 4GB 
database limit using this installer:
https://github.com/dch/couchdb/downloads/setup-couchdb-1.0.2_otp_R14B01_spidermonkey_1.8.5.exe
With this I have a 5.8GB database that is happily running, even if I 
restart the service.

However when this was applied to a Windows Server 2003 Standard Edition 
machine the 4GB limit still seems to exist.
Is there something funky with Server 2003 that would cause this?
Is there a binary or installer available that would work instead?

Also do database files that grow to the magical 4GB limit become corrupt 
or are they usable on other setups without this issue?
(I have to ask as the 2003 Server machine is over in China and will take 
an age to transfer it for examination)

Any help on this matter would be greatly appreciated.
Rob