Posted to dev@nutch.apache.org by Lewis John Mcgibbney <le...@gmail.com> on 2013/01/11 19:48:21 UTC

Nightly Builds Nearly fixed

Hi All,

We have two rather trivial issues to resolve before the builds are back online.

1) Solaris builds for trunk and 2.x - The failure is the stack trace below.
It seems to come down to increasing the swap partition on the slave, but I
don't know how to configure the script to achieve this. Can anyone help me
out here, please?

BUILD FAILED
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/Nutch-trunk/trunk/build.xml:103:
The following error occurred while executing this line:
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/Nutch-trunk/trunk/src/plugin/build.xml:48:
The following error occurred while executing this line:
/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/Nutch-trunk/trunk/src/plugin/parse-ext/build.xml:30:
Execute failed: java.io.IOException: Cannot run program "chmod" (in
directory "/zonestorage/hudson_solaris/home/hudson/hudson-slave/workspace/Nutch-trunk/trunk/src/plugin/parse-ext"):
error=12, Not enough space
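
For anyone digging into the Solaris one: as far as I understand it, the
<chmod> task in the parse-ext plugin build just forks the system "chmod"
binary from the build JVM, and on Solaris fork() has to reserve swap for a
full copy of the parent JVM, so a large Ant heap plus a small swap partition
surfaces as "error=12, Not enough space". Roughly, the failing step boils
down to something like this (the mode and target file here are illustrative,
not copied from build.xml):

import java.io.File;
import java.io.IOException;

public class ChmodFork {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Equivalent of Ant's <chmod>: fork/exec the system chmod binary.
    // The fork needs swap headroom comparable to the parent JVM's footprint;
    // if the zone's swap is too small, it fails with error=12 (ENOMEM).
    ProcessBuilder pb = new ProcessBuilder("chmod", "ugo+rx", "command"); // illustrative mode/file
    pb.directory(new File("src/plugin/parse-ext"));
    Process p = pb.start(); // the IOException in the trace above is thrown here
    System.out.println("chmod exited with status " + p.waitFor());
  }
}

So either the zone's swap needs to grow, or the build JVM needs to shrink
enough that the fork can be reserved.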

2) Windows builds for trunk and 2.x - The issue is the dreaded permissions
problem, shown in the stack trace below. I suspect that not many of us are on
Windows, but any help getting this one sorted as well would be highly
appreciated.

Failed to set permissions of path:
\tmp\hadoop-hudson\mapred\staging\hudson-302191056\.staging to 0700
java.io.IOException: Failed to set permissions of path:
\tmp\hadoop-hudson\mapred\staging\hudson-302191056\.staging to 0700
	at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
	at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:662)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
	at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:918)
	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
	at org.apache.nutch.crawl.CrawlDbMerger.merge(CrawlDbMerger.java:126)
	at org.apache.nutch.crawl.TestCrawlDbMerger.testMerge(TestCrawlDbMerger.java:104)
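
For reference, the check that throws here is Hadoop's
FileUtil.checkReturnValue(), which aborts as soon as setting the requested
permissions on the local staging directory reports failure, and on Windows
that pretty much always fails. The only workaround I have seen people use (a
local hack against their own Hadoop build, not anything official) is to relax
that check to a warning, roughly along these lines:

import java.io.File;
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch of the relaxed check: warn instead of throwing when the local
// filesystem cannot apply the requested permissions (the usual case on Windows).
public class RelaxedPermissionCheck {
  private static final Log LOG = LogFactory.getLog(RelaxedPermissionCheck.class);

  static void checkReturnValue(boolean rv, File p, FsPermission permission)
      throws IOException {
    if (!rv) {
      LOG.warn("Failed to set permissions of path: " + p + " to "
          + String.format("%04o", permission.toShort()));
    }
  }
}

If anyone knows a cleaner way to get the Windows slave past this, that would
obviously be preferable to patching Hadoop.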


Thanks everyone.

Lewis

-- 
*Lewis*