Posted to common-user@hadoop.apache.org by Brian Forney <bf...@integral7.com> on 2009/03/10 20:08:58 UTC

Extending ClusterMapReduceTestCase

Hi all,

I'm trying to write a JUnit test case that extends ClusterMapReduceTestCase
to test some code I've written to ease job submission and monitoring from
some existing code. Unfortunately, I see the following problem and cannot
find the jetty 5.1.4 code anywhere online. Any ideas about why this is
happening?

    [junit] Testsuite: com.integral7.batch.hadoop.test.TestJobController
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 1.384 sec
    [junit] 
    [junit] ------------- Standard Output ---------------
    [junit] 2009-03-10 12:52:26,303 [main] ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290) - FSNamesystem initialization failed.
    [junit] java.io.IOException: Problem starting http server
    [junit]     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:343)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:288)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
    [junit]     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:275)
    [junit]     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:119)
    [junit]     at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
    [junit]     at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
    [junit]     at com.integral7.batch.hadoop.test.TestJobController.setUp(TestJobController.java:49)
    [junit]     at junit.framework.TestCase.runBare(TestCase.java:132)
    [junit]     at junit.framework.TestResult$1.protect(TestResult.java:110)
    [junit]     at junit.framework.TestResult.runProtected(TestResult.java:128)
    [junit]     at junit.framework.TestResult.run(TestResult.java:113)
    [junit]     at junit.framework.TestCase.run(TestCase.java:124)
    [junit]     at junit.framework.TestSuite.runTest(TestSuite.java:232)
    [junit]     at junit.framework.TestSuite.run(TestSuite.java:227)
    [junit]     at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:81)
    [junit]     at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:36)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:421)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:912)
    [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:766)
    [junit] Caused by: org.mortbay.util.MultiException[org.xml.sax.SAXParseException: The processing instruction target matching "[xX][mM][lL]" is not allowed., org.xml.sax.SAXParseException: The processing instruction target matching "[xX][mM][lL]" is not allowed.]
    [junit]     at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
    [junit]     at org.mortbay.util.Container.start(Container.java:72)
    [junit]     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:321)
    [junit]     ... 23 more
    [junit] ------------- ---------------- ---------------
    [junit] Testcase: testJobSubmission(com.integral7.batch.hadoop.test.TestJobController): Caused an ERROR
    [junit] Problem starting http server
    [junit] java.io.IOException: Problem starting http server
    [junit]     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:343)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:288)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
    [junit]     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
    [junit]     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:275)
    [junit]     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:119)
    [junit]     at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
    [junit]     at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
    [junit]     at com.integral7.batch.hadoop.test.TestJobController.setUp(TestJobController.java:49)
    [junit] Caused by: org.mortbay.util.MultiException[org.xml.sax.SAXParseException: The processing instruction target matching "[xX][mM][lL]" is not allowed., org.xml.sax.SAXParseException: The processing instruction target matching "[xX][mM][lL]" is not allowed.]
    [junit]     at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
    [junit]     at org.mortbay.util.Container.start(Container.java:72)
    [junit]     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:321)
    [junit] 
    [junit] 
    [junit] Test com.integral7.batch.hadoop.test.TestJobController FAILED

Thanks,
Brian


Re: Extending ClusterMapReduceTestCase

Posted by Steve Loughran <st...@apache.org>.
jason hadoop wrote:
> I am having trouble reproducing this one. It happened in a very specific
> environment that pulled in an alternate sax parser.
> 
> The bottom line is that jetty expects a parser with particular capabilities
> and if it doesn't get one, odd things happen.
> 
> In a day or so I will have hopefully worked out the details, but it has been
> half a year since I dealt with this last.
> 
> Unless you are forking to run your junit tests, ant won't let you change
> the class path for your unit tests - much chaos will ensue.

Even if you fork, unless you set includeantruntime=false then you get 
Ant's classpath, as the junit test listeners are in the 
ant-optional-junit.jar and you'd better pull them in somehow.

I can see why AElfred would cause problems for jetty; they need to 
handle web.xml and suchlike, and probably validate them against the 
schema to reduce support calls.
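
A minimal sketch of the forked <junit> setup being described - fork on,
includeantruntime="false", the Hadoop jars and conf/ put on the test classpath
explicitly, and Ant's junit support jar kept reachable for the listeners. The
${hadoop.home} and other property names here are placeholders for your own
build, not taken from the thread:

<junit fork="yes" forkmode="perBatch" includeantruntime="false">
  <classpath>
    <!-- the hadoop core jar, its lib/ and lib/jetty-ext/ jars, and conf/ -->
    <fileset dir="${hadoop.home}" includes="hadoop-*-core.jar"/>
    <fileset dir="${hadoop.home}/lib" includes="**/*.jar"/>
    <pathelement location="${hadoop.home}/conf"/>
    <pathelement location="${build.test.classes}"/>
    <!-- the junit test listeners live in ant-junit.jar
         (ant-optional-junit.jar on older Ants), so pull it in explicitly
         now that the ant runtime is off the classpath -->
    <pathelement location="${ant.home}/lib/ant-junit.jar"/>
  </classpath>
  <!-- a system property, not an environment variable (see below) -->
  <sysproperty key="hadoop.log.dir" value="${build.dir}/test-logs"/>
  <formatter type="plain" usefile="false"/>
  <batchtest>
    <fileset dir="${build.test.classes}" includes="**/Test*.class"/>
  </batchtest>
</junit>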


Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
Finally remembered, we had saxon 6.5.5 in the class path, and the jetty
error was
09/03/11 08:23:20 WARN xml.XmlParser: EXCEPTION
javax.xml.parsers.ParserConfigurationException: AElfred parser is
non-validating
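
A self-contained way to check which parser JAXP is actually resolving to - run
it with the same classpath and system properties as the failing test; the
class name here is made up for illustration:

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

public class WhichSaxParser {
    public static void main(String[] args) throws Exception {
        // JAXP consults the javax.xml.parsers.SAXParserFactory system
        // property, then jaxp.properties, then META-INF/services entries on
        // the classpath, then the JDK default - so the first matching jar
        // on the classpath wins.
        SAXParserFactory factory = SAXParserFactory.newInstance();
        SAXParser parser = factory.newSAXParser();
        System.out.println("factory: " + factory.getClass().getName());
        System.out.println("parser:  " + parser.getClass().getName());
        System.out.println("validating: " + factory.isValidating());
    }
}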

On Wed, Mar 11, 2009 at 8:01 AM, jason hadoop <ja...@gmail.com> wrote:

> I am having trouble reproducing this one. It happened in a very specific
> environment that pulled in an alternate sax parser.
>
> The bottom line is that jetty expects a parser with particular capabilities
> and if it doesn't get one, odd things happen.
>
> In a day or so I will have hopefully worked out the details, but it has
> been half a year since I dealt with this last.
>
> Unless you are forking to run your junit tests, ant won't let you change
> the class path for your unit tests - much chaos will ensue.
>
>
>
>
> On Wed, Mar 11, 2009 at 4:39 AM, Steve Loughran <st...@apache.org> wrote:
>
>> jason hadoop wrote:
>>
>>> The other goofy thing is that the xml parser that is commonly first in the
>>> class path validates xml in a way that is opposite to what jetty wants.
>>>
>>
>> What does ant -diagnostics say? It will list the XML parser at work
>>
>>
>>> This line in the preamble before the ClusterMapReduceTestCase setup takes
>>> care of the xml errors.
>>>
>>>
>>> System.setProperty("javax.xml.parsers.SAXParserFactory","org.apache.xerces.jaxp.SAXParserFactoryImpl");
>>>
>>
>>
>> possibly, though when Ant starts it with the classpath set up for junit
>> runners, I'd expect the xml parser from the ant distro to get in there
>> first, system properties notwithstanding
>>
>
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
I am having trouble reproducing this one. It happened in a very specific
environment that pulled in an alternate sax parser.

The bottom line is that jetty expects a parser with particular capabilities
and if it doesn't get one, odd things happen.

In a day or so I will have hopefully worked out the details, but it has been
half a year since I dealt with this last.

Unless you are forking to run your junit tests, ant won't let you change
the class path for your unit tests - much chaos will ensue.



On Wed, Mar 11, 2009 at 4:39 AM, Steve Loughran <st...@apache.org> wrote:

> jason hadoop wrote:
>
>> The other goofy thing is that the xml parser that is commonly first in the
>> class path validates xml in a way that is opposite to what jetty wants.
>>
>
> What does ant -diagnostics say? It will list the XML parser at work
>
>
>> This line in the preamble before the ClusterMapReduceTestCase setup takes
>> care of the xml errors.
>>
>>
>> System.setProperty("javax.xml.parsers.SAXParserFactory","org.apache.xerces.jaxp.SAXParserFactoryImpl");
>>
>
>
> possibly, though when Ant starts it with the classpath set up for junit
> runners, I'd expect the xml parser from the ant distro to get in there
> first, system properties notwithstanding
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by Steve Loughran <st...@apache.org>.
jason hadoop wrote:
> The other goofy thing is that the xml parser that is commonly first in the
> class path validates xml in a way that is opposite to what jetty wants.

What does ant -diagnostics say? It will list the XML parser at work


> This line in the preamble before the ClusterMapReduceTestCase setup takes
> care of the xml errors.
> 
> System.setProperty("javax.xml.parsers.SAXParserFactory","org.apache.xerces.jaxp.SAXParserFactoryImpl");


possibly, though when Ant starts it with the classpath set up for junit 
runners, I'd expect the xml parser from the ant distro to get in there 
first, system properties notwithstanding

Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
The other goofy thing is that the xml parser that is commonly first in the
class path validates xml in a way that is opposite to what jetty wants.

This line in the preamble before the ClusterMapReduceTestCase setup takes
care of the xml errors.

System.setProperty("javax.xml.parsers.SAXParserFactory","org.apache.xerces.jaxp.SAXParserFactoryImpl");


On Tue, Mar 10, 2009 at 2:28 PM, jason hadoop <ja...@gmail.com> wrote:

> There are a couple of failures that happen in tests derived from
> ClusterMapReduceTestCase that are run outside of the hadoop unit test
> framework.
>
> The basic issue is that the unit test doesn't have the benefit of a runtime
> environment set up by the bin/hadoop script.
>
> The classpath is usually missing the lib/jetty-ext/*.jar files, and doesn't
> get the conf/hadoop-default.xml and conf/hadoop-site.xml.
> The *standard* properties are also unset: hadoop.log.dir,
> hadoop.log.file, hadoop.home.dir, hadoop.id.str, hadoop.root.logger.
>
> I find that I can get away with just defining hadoop.log.dir.
>
> You can read about this in detail in the chapter on unit testing map/reduce
> jobs in my book, out real soon now :)
>
>
>
>
On Tue, Mar 10, 2009 at 12:08 PM, Brian Forney <bf...@integral7.com> wrote:
>
>> Hi all,
>>
>> I'm trying to write a JUnit test case that extends
>> ClusterMapReduceTestCase
>> to test some code I've written to ease job submission and monitoring from
>> some existing code. Unfortunately, I see the following problem and cannot
>> find the jetty 5.1.4 code anywhere online. Any ideas about why this is
>> happening?
>>
>>    [junit] Testsuite: com.integral7.batch.hadoop.test.TestJobController
>>    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 1.384 sec
>>    [...full stack trace snipped; identical to the one in the original message above...]
>>    [junit] Test com.integral7.batch.hadoop.test.TestJobController FAILED
>>
>> Thanks,
>> Brian
>>
>>
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
There are a couple of failures that happen in tests derived from
ClusterMapReduceTestCase that are run outside of the hadoop unit test
framework.

The basic issue is that the unit test doesn't have the benefit of a runtime
environment set up by the bin/hadoop script.

The classpath is usually missing the lib/jetty-ext/*.jar files, and doesn't
get the conf/hadoop-default.xml and conf/hadoop-site.xml.
The *standard* properties are also unset: hadoop.log.dir, hadoop.log.file,
hadoop.home.dir, hadoop.id.str, hadoop.root.logger.

I find that I can get away with just defining hadoop.log.dir.

You can read about this in detail in the chapter on unit testing map/reduce
jobs in my book, out real soon now :)
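
For reference, a minimal preamble that sets the properties listed above before
the test case's setUp() runs - the class name and values are placeholder
assumptions, and as noted, hadoop.log.dir is often the only one strictly
required:

public class HadoopTestPreamble {
    // call before ClusterMapReduceTestCase.setUp(); values are placeholders
    public static void setStandardProperties() {
        System.setProperty("hadoop.log.dir", "/tmp/hadoop-test/logs");
        System.setProperty("hadoop.log.file", "hadoop.log");
        System.setProperty("hadoop.home.dir", "/tmp/hadoop-test");
        System.setProperty("hadoop.id.str", "testuser");
        System.setProperty("hadoop.root.logger", "INFO,console");
    }
}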



On Tue, Mar 10, 2009 at 12:08 PM, Brian Forney <bf...@integral7.com> wrote:

> Hi all,
>
> I'm trying to write a JUnit test case that extends ClusterMapReduceTestCase
> to test some code I've written to ease job submission and monitoring from
> some existing code. Unfortunately, I see the following problem and cannot
> find the jetty 5.1.4 code anywhere online. Any ideas about why this is
> happening?
>
>    [junit] Testsuite: com.integral7.batch.hadoop.test.TestJobController
>    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 1.384 sec
>    [...full stack trace snipped; identical to the one in the original message above...]
>    [junit] Test com.integral7.batch.hadoop.test.TestJobController FAILED
>
> Thanks,
> Brian
>
>


-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
I got it all up and working, thanks for your help - it was an issue with me
not actually setting the log.dir system property before the cluster startup. 
Can't believe I missed that one :)

As a side note (which you might already be aware of), the example class
you're using in Chapter 7 (PiEstimator) has changed in Hadoop 0.19.1 such
that the example code no longer works.  The new one is a little trickier to
test.

I'm looking forward to seeing the rest of the book.  And that delegate test
harness when it's available :)
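
For anyone landing on this thread later, a sketch of the corrected skeleton
described above - the properties move out of the test method and into setUp(),
ahead of super.setUp(), so they exist before the cluster starts. The log
directory is illustrative (note that Java does not expand "~"):

import org.apache.hadoop.mapred.ClusterMapReduceTestCase;

public class TestDoop extends ClusterMapReduceTestCase {

    @Override
    protected void setUp() throws Exception {
        // JUnit 3 calls setUp() before each test method, so set the
        // properties here rather than inside the test itself
        System.setProperty("hadoop.log.dir", "/tmp/test-logs");
        System.setProperty("javax.xml.parsers.SAXParserFactory",
                "org.apache.xerces.jaxp.SAXParserFactoryImpl");
        super.setUp();
    }

    // plain JUnit 3 naming; no @Test annotation on a TestCase subclass
    public void testDoop() throws Exception {
        System.out.println("done.");
    }
}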


jason hadoop wrote:
> 
> btw that stack trace looks like the hadoop.log.dir issue.
> This is the code out of the init method, in JobHistory:
> 
> LOG_DIR = conf.get("hadoop.job.history.location" ,
>         "file:///" + new File(
>         System.getProperty("hadoop.log.dir")).getAbsolutePath()
>         + File.separator + "history");
> 
> looks like the hadoop.log.dir system property is not set, note: not
> environment variable, not configuration parameter, but system property.
> 
> Try a *System.setProperty("hadoop.log.dir","/tmp");* in your code before
> you
> initialize the virtual cluster.
> 
> 
> 
> On Tue, Apr 14, 2009 at 5:56 PM, jason hadoop
> <ja...@gmail.com> wrote:
> 
>>
>> I have actually built an add-on class on top of ClusterMapReduceDelegate
>> that just runs a virtual cluster that persists for running tests on; it is
>> very nice, as you can interact via the web ui.
>> Especially since the virtual cluster stuff is somewhat flaky under
>> windows.
>>
>> I have a question in to the editor about the sample code.
>>
>>
>>
>> On Tue, Apr 14, 2009 at 8:16 AM, czero <br...@gmail.com> wrote:
>>
>>>
>>> I actually picked up the alpha .PDF's of your book, great job.
>>>
>>> I'm following the example in chapter 7 to the letter now and am still
>>> getting the same problem.  2 quick questions (and thanks for your time
>>> in
>>> advance)...
>>>
>>> Is the ClusterMapReduceDelegate class available anywhere yet?
>>>
>>> Adding ~/hadoop/libs/*.jar in its entirety to my pom.xml is a lot of bulk,
>>> so I've avoided it until now.  Are there any libs in there that are
>>> absolutely necessary for this test to work?
>>>
>>> Thanks again,
>>> bc
>>>
>>>
>>>
>>> jason hadoop wrote:
>>> >
>>> > I have a nice variant of this in the ch7 examples section of my book,
>>> > including a standalone wrapper around the virtual cluster for allowing
>>> > multiple test instances to share the virtual cluster - and allow an
>>> easier
>>> > time to poke around with the input and output datasets.
>>> >
>>> > It even works decently under windows - my editor insisting on a version
>>> > of Word too recent for CrossOver.
>>> >
>>> > On Mon, Apr 13, 2009 at 9:16 AM, czero <br...@gmail.com> wrote:
>>> >
>>> >>
>>> >> Sry, I forgot to include the not-IntelliJ-console output :)
>>> >>
>>> >> 09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
>>> >> java.lang.NullPointerException
>>> >>        at java.io.File.<init>(File.java:222)
>>> >>        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
>>> >>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
>>> >>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
>>> >>        at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
>>> >>        at java.lang.Thread.run(Thread.java:637)
>>> >>
>>> >> I managed to pick up the chapter in the Hadoop Book that Jason
>>> mentions
>>> >> that
>>> >> deals with Unit testing (great chapter btw) and it looks like
>>> everything
>>> >> is
>>> >> in order.  He points out that this error is typically caused by a bad
>>> >> hadoop.log.dir or missing log4j.properties, but I verified that my
>>> dir
>>> is
>>> >> ok
>>> >> and my hadoop-0.19.1-core.jar has the log4j.properties in it.
>>> >>
>>> >> I also tried running the same test with hadoop-core/test 0.19.0 -
>>> same
>>> >> thing.
>>> >>
>>> >> Thanks again,
>>> >>
>>> >> bc
>>> >>
>>> >>
>>> >> czero wrote:
>>> >> >
>>> >> > Hey all,
>>> >> >
>>> >> > I'm also extending the ClusterMapReduceTestCase and having a bit of
>>> >> > trouble as well.
>>> >> >
>>> >> > Currently I'm getting :
>>> >> >
>>> >> > Starting DataNode 0 with dfs.data.dir:
>>> >> > build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
>>> >> > Starting DataNode 1 with dfs.data.dir:
>>> >> > build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
>>> >> > Generating rack names for tasktrackers
>>> >> > Generating host names for tasktrackers
>>> >> >
>>> >> > And then nothing... just spins on that forever.  Any ideas?
>>> >> >
>>> >> > I have all the jetty and jetty-ext libs in the classpath and I set
>>> the
>>> >> > hadoop.log.dir and the SAX parser correctly.
>>> >> >
>>> >> > This is all I have for my test class so far, I'm not even doing
>>> >> anything
>>> >> > yet:
>>> >> >
>>> >> > public class TestDoop extends ClusterMapReduceTestCase {
>>> >> >
>>> >> >     @Test
>>> >> >     public void testDoop() throws Exception {
>>> >> >         System.setProperty("hadoop.log.dir", "~/test-logs");
>>> >> >         System.setProperty("javax.xml.parsers.SAXParserFactory",
>>> >> > "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
>>> >> >
>>> >> >         setUp();
>>> >> >
>>> >> >         System.out.println("done.");
>>> >> >     }
>>> >> >
>>> >> > Thanks!
>>> >> >
>>> >> > bc
>>> >> >
>>> >>
>>> >> --
>>> >> View this message in context:
>>> >>
>>> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
>>> >> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>> >>
>>> >>
>>> >
>>> >
>>> > --
>>> > Alpha Chapters of my book on Hadoop are available
>>> > http://www.apress.com/book/view/9781430219422
>>> >
>>> >
>>>
>>> --
>>> View this message in context:
>>> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23041470.html
>>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>>
>>>
>>
>>
>> --
>> Alpha Chapters of my book on Hadoop are available
>> http://www.apress.com/book/view/9781430219422
>>
> 
> 
> 
> -- 
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
> 
> 

-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23065441.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
btw that stack trace looks like the hadoop.log.dir issue.
This is the code out of the init method, in JobHistory:

LOG_DIR = conf.get("hadoop.job.history.location" ,
        "file:///" + new File(
        System.getProperty("hadoop.log.dir")).getAbsolutePath()
        + File.separator + "history");
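// if the hadoop.log.dir system property is unset, System.getProperty()
// returns null and the File constructor throws the NullPointerException
// shown in the stack trace above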

looks like the hadoop.log.dir system property is not set, note: not
environment variable, not configuration parameter, but system property.

Try a *System.setProperty("hadoop.log.dir","/tmp");* in your code before you
initialize the virtual cluster.



On Tue, Apr 14, 2009 at 5:56 PM, jason hadoop <ja...@gmail.com> wrote:

>
> I have actually built an add-on class on top of ClusterMapReduceDelegate
> that just runs a virtual cluster that persists for running tests on; it is
> very nice, as you can interact via the web ui.
> Especially since the virtual cluster stuff is somewhat flaky under windows.
>
> I have a question in to the editor about the sample code.
>
>
>
> On Tue, Apr 14, 2009 at 8:16 AM, czero <br...@gmail.com> wrote:
>
>>
>> I actually picked up the alpha .PDF's of your book, great job.
>>
>> I'm following the example in chapter 7 to the letter now and am still
>> getting the same problem.  2 quick questions (and thanks for your time in
>> advance)...
>>
>> Is the ClusterMapReduceDelegate class available anywhere yet?
>>
>> Adding ~/hadoop/libs/*.jar in its entirety to my pom.xml is a lot of bulk,
>> so I've avoided it until now.  Are there any libs in there that are
>> absolutely necessary for this test to work?
>>
>> Thanks again,
>> bc
>>
>>
>>
>> jason hadoop wrote:
>> >
>> > I have a nice variant of this in the ch7 examples section of my book,
>> > including a standalone wrapper around the virtual cluster for allowing
>> > multiple test instances to share the virtual cluster - and allow an
>> easier
>> > time to poke around with the input and output datasets.
>> >
>> > It even works decently under windows - my editor insisting on a version
>> > of Word too recent for CrossOver.
>> >
>> > On Mon, Apr 13, 2009 at 9:16 AM, czero <br...@gmail.com> wrote:
>> >
>> >>
>> >> Sry, I forgot to include the not-IntelliJ-console output :)
>> >>
>> >> 09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
>> >> java.lang.NullPointerException
>> >>        at java.io.File.<init>(File.java:222)
>> >>        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
>> >>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
>> >>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
>> >>        at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
>> >>        at java.lang.Thread.run(Thread.java:637)
>> >>
>> >> I managed to pick up the chapter in the Hadoop Book that Jason mentions
>> >> that
>> >> deals with Unit testing (great chapter btw) and it looks like
>> everything
>> >> is
>> >> in order.  He points out that this error is typically caused by a bad
>> >> hadoop.log.dir or missing log4j.properties, but I verified that my dir
>> is
>> >> ok
>> >> and my hadoop-0.19.1-core.jar has the log4j.properties in it.
>> >>
>> >> I also tried running the same test with hadoop-core/test 0.19.0 - same
>> >> thing.
>> >>
>> >> Thanks again,
>> >>
>> >> bc
>> >>
>> >>
>> >> czero wrote:
>> >> >
>> >> > Hey all,
>> >> >
>> >> > I'm also extending the ClusterMapReduceTestCase and having a bit of
>> >> > trouble as well.
>> >> >
>> >> > Currently I'm getting :
>> >> >
>> >> > Starting DataNode 0 with dfs.data.dir:
>> >> > build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
>> >> > Starting DataNode 1 with dfs.data.dir:
>> >> > build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
>> >> > Generating rack names for tasktrackers
>> >> > Generating host names for tasktrackers
>> >> >
>> >> > And then nothing... just spins on that forever.  Any ideas?
>> >> >
>> >> > I have all the jetty and jetty-ext libs in the classpath and I set
>> the
>> >> > hadoop.log.dir and the SAX parser correctly.
>> >> >
>> >> > This is all I have for my test class so far, I'm not even doing
>> >> anything
>> >> > yet:
>> >> >
>> >> > public class TestDoop extends ClusterMapReduceTestCase {
>> >> >
>> >> >     @Test
>> >> >     public void testDoop() throws Exception {
>> >> >         System.setProperty("hadoop.log.dir", "~/test-logs");
>> >> >         System.setProperty("javax.xml.parsers.SAXParserFactory",
>> >> > "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
>> >> >
>> >> >         setUp();
>> >> >
>> >> >         System.out.println("done.");
>> >> >     }
>> >> >
>> >> > Thanks!
>> >> >
>> >> > bc
>> >> >
>> >>
>> >> --
>> >> View this message in context:
>> >>
>> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
>> >> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>> >>
>> >>
>> >
>> >
>> > --
>> > Alpha Chapters of my book on Hadoop are available
>> > http://www.apress.com/book/view/9781430219422
>> >
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23041470.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>>
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
I have actually built an add-on class on top of ClusterMapReduceDelegate
that just runs a virtual cluster that persists for running tests on; it is
very nice, as you can interact via the web ui.
Especially since the virtual cluster stuff is somewhat flaky under windows.

I have a question in to the editor about the sample code.
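
A rough standalone sketch of the same idea - not the ClusterMapReduceDelegate
class itself, which isn't public, but the underlying mini cluster classes used
directly, with constructor signatures as they stood around 0.19; the class
name, sizes, and log directory are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.mapred.MiniMRCluster;

public class PersistentMiniCluster {
    public static void main(String[] args) throws Exception {
        // the usual prerequisite, before anything starts
        System.setProperty("hadoop.log.dir", "/tmp/minicluster-logs");
        Configuration conf = new Configuration();
        // 2 datanodes, freshly formatted, default rack layout
        MiniDFSCluster dfs = new MiniDFSCluster(conf, 2, true, null);
        String namenode = dfs.getFileSystem().getUri().toString();
        // 2 tasktrackers pointed at the mini namenode
        MiniMRCluster mr = new MiniMRCluster(2, namenode, 1);
        System.out.println("namenode at " + namenode
                + ", jobtracker port " + mr.getJobTrackerPort());
        Thread.sleep(Long.MAX_VALUE);   // keep the cluster up until killed
    }
}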


On Tue, Apr 14, 2009 at 8:16 AM, czero <br...@gmail.com> wrote:

>
> I actually picked up the alpha .PDF's of your book, great job.
>
> I'm following the example in chapter 7 to the letter now and am still
> getting the same problem.  2 quick questions (and thanks for your time in
> advance)...
>
> Is the ClusterMapReduceDelegate class available anywhere yet?
>
> Adding ~/hadoop/libs/*.jar in its entirety to my pom.xml is a lot of bulk,
> so I've avoided it until now.  Are there any libs in there that are
> absolutely necessary for this test to work?
>
> Thanks again,
> bc
>
>
>
> jason hadoop wrote:
> >
> > I have a nice variant of this in the ch7 examples section of my book,
> > including a standalone wrapper around the virtual cluster for allowing
> > multiple test instances to share the virtual cluster - and allow an
> easier
> > time to poke around with the input and output datasets.
> >
> > It even works decently under windows - my editor insisting on a version
> > of Word too recent for CrossOver.
> >
> > On Mon, Apr 13, 2009 at 9:16 AM, czero <br...@gmail.com> wrote:
> >
> >>
> >> Sry, I forgot to include the not-IntelliJ-console output :)
> >>
> >> 09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
> >> java.lang.NullPointerException
> >>        at java.io.File.<init>(File.java:222)
> >>        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
> >>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
> >>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
> >>        at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
> org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
> >>        at java.lang.Thread.run(Thread.java:637)
> >>
> >> I managed to pick up the chapter in the Hadoop Book that Jason mentions
> >> that
> >> deals with Unit testing (great chapter btw) and it looks like everything
> >> is
> >> in order.  He points out that this error is typically caused by a bad
> >> hadoop.log.dir or missing log4j.properties, but I verified that my dir
> is
> >> ok
> >> and my hadoop-0.19.1-core.jar has the log4j.properties in it.
> >>
> >> I also tried running the same test with hadoop-core/test 0.19.0 - same
> >> thing.
> >>
> >> Thanks again,
> >>
> >> bc
> >>
> >>
> >> czero wrote:
> >> >
> >> > Hey all,
> >> >
> >> > I'm also extending the ClusterMapReduceTestCase and having a bit of
> >> > trouble as well.
> >> >
> >> > Currently I'm getting :
> >> >
> >> > Starting DataNode 0 with dfs.data.dir:
> >> > build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
> >> > Starting DataNode 1 with dfs.data.dir:
> >> > build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
> >> > Generating rack names for tasktrackers
> >> > Generating host names for tasktrackers
> >> >
> >> > And then nothing... just spins on that forever.  Any ideas?
> >> >
> >> > I have all the jetty and jetty-ext libs in the classpath and I set the
> >> > hadoop.log.dir and the SAX parser correctly.
> >> >
> >> > This is all I have for my test class so far, I'm not even doing
> >> anything
> >> > yet:
> >> >
> >> > public class TestDoop extends ClusterMapReduceTestCase {
> >> >
> >> >     @Test
> >> >     public void testDoop() throws Exception {
> >> >         System.setProperty("hadoop.log.dir", "~/test-logs");
> >> >         System.setProperty("javax.xml.parsers.SAXParserFactory",
> >> > "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
> >> >
> >> >         setUp();
> >> >
> >> >         System.out.println("done.");
> >> >     }
> >> >
> >> > Thanks!
> >> >
> >> > bc
> >> >
> >>
> >> --
> >> View this message in context:
> >>
> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
> >> Sent from the Hadoop core-user mailing list archive at Nabble.com.
> >>
> >>
> >
> >
> > --
> > Alpha Chapters of my book on Hadoop are available
> > http://www.apress.com/book/view/9781430219422
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23041470.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
I actually picked up the alpha .PDF's of your book, great job.

I'm following the example in chapter 7 to the letter now and am still
getting the same problem.  2 quick questions (and thanks for your time in
advance)...

Is the ClusterMapReduceDelegate class available anywhere yet?

Adding ~/hadoop/libs/*.jar in its entirety to my pom.xml is a lot of bulk,
so I've avoided it until now.  Are there any libs in there that are
absolutely necessary for this test to work?

Thanks again,
bc



jason hadoop wrote:
> 
> I have a nice variant of this in the ch7 examples section of my book,
> including a standalone wrapper around the virtual cluster for allowing
> multiple test instances to share the virtual cluster - and allow an easier
> time to poke around with the input and output datasets.
> 
> It even works decently under windows - my editor insisting on a version of
> Word too recent for CrossOver.
> 
> On Mon, Apr 13, 2009 at 9:16 AM, czero <br...@gmail.com> wrote:
> 
>>
>> Sry, I forgot to include the not-IntelliJ-console output :)
>>
>> 09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
>> java.lang.NullPointerException
>>        at java.io.File.<init>(File.java:222)
>>        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
>>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
>>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
>>        at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
>>        at java.lang.Thread.run(Thread.java:637)
>>
>> I managed to pick up the chapter in the Hadoop Book that Jason mentions
>> that
>> deals with Unit testing (great chapter btw) and it looks like everything
>> is
>> in order.  He points out that this error is typically caused by a bad
>> hadoop.log.dir or missing log4j.properties, but I verified that my dir is
>> ok
>> and my hadoop-0.19.1-core.jar has the log4j.properties in it.
>>
>> I also tried running the same test with hadoop-core/test 0.19.0 - same
>> thing.
>>
>> Thanks again,
>>
>> bc
>>
>>
>> czero wrote:
>> >
>> > Hey all,
>> >
>> > I'm also extending the ClusterMapReduceTestCase and having a bit of
>> > trouble as well.
>> >
>> > Currently I'm getting :
>> >
>> > Starting DataNode 0 with dfs.data.dir:
>> > build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
>> > Starting DataNode 1 with dfs.data.dir:
>> > build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
>> > Generating rack names for tasktrackers
>> > Generating host names for tasktrackers
>> >
>> > And then nothing... just spins on that forever.  Any ideas?
>> >
>> > I have all the jetty and jetty-ext libs in the classpath and I set the
>> > hadoop.log.dir and the SAX parser correctly.
>> >
>> > This is all I have for my test class so far, I'm not even doing
>> anything
>> > yet:
>> >
>> > public class TestDoop extends ClusterMapReduceTestCase {
>> >
>> >     @Test
>> >     public void testDoop() throws Exception {
>> >         System.setProperty("hadoop.log.dir", "~/test-logs");
>> >         System.setProperty("javax.xml.parsers.SAXParserFactory",
>> > "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
>> >
>> >         setUp();
>> >
>> >         System.out.println("done.");
>> >     }
>> >
>> > Thanks!
>> >
>> > bc
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>>
> 
> 
> -- 
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
> 
> 

-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23041470.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
I have a nice variant of this in the ch7 examples section of my book,
including a standalone wrapper around the virtual cluster for allowing
multiple test instances to share the virtual cluster - and allow an easier
time to poke around with the input and output datasets.

It even works decently under windows - my editor insisting on a version of
Word too recent for CrossOver.

On Mon, Apr 13, 2009 at 9:16 AM, czero <br...@gmail.com> wrote:

>
> Sry, I forgot to include the not-IntelliJ-console output :)
>
> 09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
> java.lang.NullPointerException
>        at java.io.File.<init>(File.java:222)
>        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
>        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
>        at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
>        at java.lang.Thread.run(Thread.java:637)
>
> I managed to pick up the chapter in the Hadoop Book that Jason mentions
> that
> deals with Unit testing (great chapter btw) and it looks like everything is
> in order.  He points out that this error is typically caused by a bad
> hadoop.log.dir or missing log4j.properties, but I verified that my dir is
> ok
> and my hadoop-0.19.1-core.jar has the log4j.properties in it.
>
> I also tried running the same test with hadoop-core/test 0.19.0 - same
> thing.
>
> Thanks again,
>
> bc
>
>
> czero wrote:
> >
> > Hey all,
> >
> > I'm also extending the ClusterMapReduceTestCase and having a bit of
> > trouble as well.
> >
> > Currently I'm getting :
> >
> > Starting DataNode 0 with dfs.data.dir:
> > build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
> > Starting DataNode 1 with dfs.data.dir:
> > build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
> > Generating rack names for tasktrackers
> > Generating host names for tasktrackers
> >
> > And then nothing... just spins on that forever.  Any ideas?
> >
> > I have all the jetty and jetty-ext libs in the classpath and I set the
> > hadoop.log.dir and the SAX parser correctly.
> >
> > This is all I have for my test class so far, I'm not even doing anything
> > yet:
> >
> > public class TestDoop extends ClusterMapReduceTestCase {
> >
> >     @Test
> >     public void testDoop() throws Exception {
> >         System.setProperty("hadoop.log.dir", "~/test-logs");
> >         System.setProperty("javax.xml.parsers.SAXParserFactory",
> > "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
> >
> >         setUp();
> >
> >         System.out.println("done.");
> >     }
> >
> > Thanks!
> >
> > bc
> >
>
> --
> View this message in context:
> http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
Sry, I forgot to include the not-IntelliJ-console output :)

09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
java.lang.NullPointerException
        at java.io.File.<init>(File.java:222)
        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
        at org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
        at java.lang.Thread.run(Thread.java:637)

I managed to pick up the chapter in the Hadoop Book that Jason mentions that
deals with Unit testing (great chapter btw) and it looks like everything is
in order.  He points out that this error is typically caused by a bad
hadoop.log.dir or missing log4j.properties, but I verified that my dir is ok
and my hadoop-0.19.1-core.jar has the log4j.properties in it. 

I also tried running the same test with hadoop-core/test 0.19.0 - same
thing.

Thanks again,

bc 


czero wrote:
> 
> Hey all,
> 
> I'm also extending the ClusterMapReduceTestCase and having a bit of
> trouble as well.
> 
> Currently I'm getting :
> 
> Starting DataNode 0 with dfs.data.dir:
> build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
> Starting DataNode 1 with dfs.data.dir:
> build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
> Generating rack names for tasktrackers
> Generating host names for tasktrackers
> 
> And then nothing... just spins on that forever.  Any ideas?
> 
> I have all the jetty and jetty-ext libs in the classpath and I set the
> hadoop.log.dir and the SAX parser correctly.
> 
> This is all I have for my test class so far, I'm not even doing anything
> yet:
> 
> public class TestDoop extends ClusterMapReduceTestCase {
> 
>     @Test
>     public void testDoop() throws Exception {
>         System.setProperty("hadoop.log.dir", "~/test-logs");
>         System.setProperty("javax.xml.parsers.SAXParserFactory",
> "com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");
> 
>         setUp();
> 
>         System.out.println("done.");
>     }
> 
> Thanks! 
> 
> bc
> 

-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
Hey all,

I'm also extending the ClusterMapReduceTestCase and having a bit of trouble
as well.

Currently I'm getting :

Starting DataNode 0 with dfs.data.dir:
build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
Starting DataNode 1 with dfs.data.dir:
build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
Generating rack names for tasktrackers
Generating host names for tasktrackers

And then nothing... just spins on that forever.  Any ideas?

I have all the jetty and jetty-ext libs in the classpath and I set the
hadoop.log.dir and the SAX parser correctly.

This is all I have for my test class so far, I'm not even doing anything
yet:

public class TestDoop extends ClusterMapReduceTestCase {

    @Test
    public void testDoop() throws Exception {
        System.setProperty("hadoop.log.dir", "~/test-logs");
        System.setProperty("javax.xml.parsers.SAXParserFactory",
"com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");

        setUp();

        System.out.println("done.");
    }
}

Thanks! 

bc
-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024043.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.