Posted to dev@commons.apache.org by Bernd Eckenfels <ec...@zusammenkunft.net> on 2015/01/11 03:00:37 UTC

AW: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Yes, it failed with clean as well.

I am currently letting the site build run in a loop, and it seems to be stable.

Regards
Bernd

-- 
http://bernd.eckenfels.net

----- Original Message -----
From: "dlmarion" <dl...@comcast.net>
Sent: 11.01.2015 02:57
To: "Commons Developers List" <de...@commons.apache.org>
Subject: Re: [VFS] Implementing custom hdfs file system using commons-vfs
  2.0

Glad that you were able to make it work. When it failed for you, were you executing Maven's clean lifecycle? It should work in consecutive runs with mvn clean. I did not test consecutive runs without clean.



-------- Original message --------
From: Bernd Eckenfels <ec...@zusammenkunft.net>
Date: 01/10/2015 8:37 PM (GMT-05:00)
To: Commons Developers List <de...@commons.apache.org>
Cc:
Subject: Re: [VFS] Implementing custom hdfs file system using commons-vfs
  2.0

Hello,

With the following commits I added a cleanup of the data dir before the
MiniDFSCluster is started. I also use absolute file names to make
debugging a bit easier, and I moved the initialisation code into the
setUp() method:

http://svn.apache.org/r1650847 & http://svn.apache.org/r1650852
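
For illustration, a minimal sketch of this kind of setUp() cleanup could
look like the following. This is not the actual HdfsFileProviderTestCase
code; it assumes Hadoop 1.x's MiniDFSCluster, that the cluster's base
directory is selected via the test.build.data system property, and that
commons-io's FileUtils is available on the test classpath.

// Hypothetical sketch only, not the code committed in r1650847/r1650852.
import java.io.File;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniClusterSetUpSketch {
    private MiniDFSCluster cluster;

    protected void setUp() throws Exception {
        // Resolve the data dir to an absolute path so log output and
        // debugging are unambiguous.
        File data = new File("target/build/test/data").getAbsoluteFile();
        // Remove leftovers from a previous run before the NameNode is formatted.
        FileUtils.deleteDirectory(data);
        // Assumption: MiniDFSCluster (Hadoop 1.x) derives dfs.name.dir and
        // dfs.data.dir from the test.build.data system property.
        System.setProperty("test.build.data", data.getAbsolutePath());
        cluster = new MiniDFSCluster(new Configuration(), 1, true, null);
        cluster.waitActive();
    }

    protected void tearDown() throws Exception {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}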

This way the tests do not error out anymore. But I have no idea why this
was happening on one machine and not on others (maybe a race; the
failing machine had an SSD?).

So this means I can now concentrate on merging the new version.

Regards
Bernd


On Sun, 11 Jan 2015 01:25:48 +0100, Bernd Eckenfels
<ec...@zusammenkunft.net> wrote:

> Hello,
> 
> On Sat, 10 Jan 2015 03:12:19 +0000 (UTC),
> dlmarion@comcast.net wrote:
> 
> > Bernd, 
> > 
> > Regarding the Hadoop version for VFS 2.1, why not use the latest for
> > the first release of the HDFS provider? Hadoop 1.1.2 was released in
> > Feb 2013. 
> 
> Yes, you are right. We don't need to care about 2.0 as this is a new
> provider. I will make the changes; I just want to fix the current test
> failures I see first.
> 
> 
> > I just built 2.1-SNAPSHOT over the holidays with JDK 6, 7, and 8 on
> > Ubuntu. What type of test errors are you getting? Testing is
> > disabled on Windows unless you decide to pull in the Windows artifacts
> > attached to VFS-530. However, those artifacts are associated with
> > patch 3 and are for Hadoop 2.4.0. Updating to 2.4.0 would also be
> > sufficient in my opinion. 
> 
> Yes, what I mean is: I typically build under Windows, so I would not
> notice if the tests start to fail. However, it seems to pass on the
> integration build:
> 
> https://continuum-ci.apache.org/continuum/projectView.action?projectId=129&projectGroupId=16
> 
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
> Cluster is active
> Cluster is active
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.821 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> Starting DataNode 0 with dfs.data.dir: target/build/test2/data/dfs/data/data1,target/build/test2/data/dfs/data/data2
> Cluster is active
> Cluster is active
> Tests run: 76, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.853 sec - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> 
> Anyway, on Ubuntu, I currently get this exception:
> 
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> Starting DataNode 0 with dfs.data.dir: target/build/test/data/dfs/data/data1,target/build/test/data/dfs/data/data2
> Cluster is active
> Cluster is active
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec <<< FAILURE! - in org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase
> junit.framework.TestSuite@56c77035(org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite)  Time elapsed: 1.479 sec  <<< ERROR!
> java.lang.RuntimeException: Error setting up mini cluster
>     at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:112)
>     at org.apache.commons.vfs2.test.AbstractTestSuite$1.protect(AbstractTestSuite.java:148)
>     at junit.framework.TestResult.runProtected(TestResult.java:142)
>     at org.apache.commons.vfs2.test.AbstractTestSuite.run(AbstractTestSuite.java:154)
>     at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>     at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>     at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>     at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.io.IOException: Cannot lock storage target/build/test/data/dfs/name1. The directory is already locked.
>     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1327)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1345)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1207)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:187)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:268)
>     at org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTestCase$HdfsProviderTestSuite.setUp(HdfsFileProviderTestCase.java:107)
>     ... 11 more
> 
> Running org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.445 sec - in
> 
> When I delete the core/target/build/test/data/dfs/ directory and then
> run the ProviderTest, I can do that multiple times and it works:
> 
>   mvn surefire:test -Dtest=org.apache.commons.vfs2.provider.hdfs.test.HdfsFileProviderTest
> 
> But when I run all tests, or just the HdfsFileProviderTestCase, it
> fails, and afterwards not even the ProviderTest succeeds until I delete
> that dir.
> 
> (I suspect the "locking" message is misleading; it looks more like the
> data pool has some kind of instance ID which it does not have on the
> next run.)
> 
> Looks like the TestCase has a problem and the ProviderTest does not do
> proper pre-cleaning. Will check the source. More generally speaking, it
> should not use a fixed working directory anyway (see the sketch after
> this quote).
> 
> 
> > I started up Hadoop 2.6.0 on my laptop, created a directory and a
> > file, then used the VFS shell to list and view the contents
> > (remember, the HDFS provider is currently read-only). Here is what
> > I did: 
> 
> Looks good. I will shorten it a bit and add it to the wiki. BTW: is the
> warning something we can change?
> 
> Regards
> Bernd
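
A hypothetical sketch of the "no fixed working directory" idea from the
quote above, again assuming Hadoop 1.x and that MiniDFSCluster picks up
its base directory from the test.build.data system property: give each
run its own temporary directory, so stale lock files from a previous run
can never interfere.

// Hypothetical sketch: a per-run base directory instead of a fixed one.
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class PerRunDataDirSketch {
    public static MiniDFSCluster startCluster() throws Exception {
        // A fresh directory per run; nothing left over from an earlier
        // run can hold the dfs.name.dir lock.
        Path base = Files.createTempDirectory("vfs-hdfs-test-");
        // Assumption: this property controls where MiniDFSCluster puts
        // its name and data directories (Hadoop 1.x).
        System.setProperty("test.build.data", base.toAbsolutePath().toString());
        MiniDFSCluster cluster = new MiniDFSCluster(new Configuration(), 1, true, null);
        cluster.waitActive();
        return cluster;
    }
}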


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org


Re: AW: [VFS] Implementing custom hdfs file system using commons-vfs 2.0

Posted by dl...@comcast.net.
Updated to the latest commit and built with 'mvn clean install' and 'mvn clean install site'. Both succeeded; is there anything else you need me to try?
