Posted to common-user@hadoop.apache.org by czero <br...@gmail.com> on 2009/04/13 17:40:28 UTC

Re: Extending ClusterMapReduceTestCase

Hey all,

I'm also extending the ClusterMapReduceTestCase and having a bit of trouble.

Currently I'm getting:

Starting DataNode 0 with dfs.data.dir:
build/test/data/dfs/data/data1,build/test/data/dfs/data/data2
Starting DataNode 1 with dfs.data.dir:
build/test/data/dfs/data/data3,build/test/data/dfs/data/data4
Generating rack names for tasktrackers
Generating host names for tasktrackers

And then nothing... just spins on that forever.  Any ideas?

I have all the jetty and jetty-ext libs in the classpath and I set the
hadoop.log.dir and the SAX parser correctly.

This is all I have for my test class so far; I'm not even doing anything yet:

public class TestDoop extends ClusterMapReduceTestCase {

    @Test
    public void testDoop() throws Exception {
        System.setProperty("hadoop.log.dir", "~/test-logs");
        System.setProperty("javax.xml.parsers.SAXParserFactory",
"com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl");

        setUp();

        System.out.println("done.");
    }
}
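
One subtle point in the snippet above: Java does not expand `~` in file paths, so "~/test-logs" names a literal directory called `~` relative to the working directory, not anything under the home directory. A JDK-only illustration (the class name here is just for the example):

```java
import java.io.File;

public class TildeDemo {
    public static void main(String[] args) {
        // "~" is not expanded by Java; this path keeps the literal tilde.
        File literal = new File("~/test-logs");
        System.out.println(literal.getPath()); // still contains "~"

        // Build the path from the user.home system property instead.
        File underHome = new File(System.getProperty("user.home"), "test-logs");
        System.out.println(underHome.getAbsolutePath());
    }
}
```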

Thanks! 

bc
-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024043.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
I got it all up and working, thanks for your help. The issue was that I wasn't
actually setting the hadoop.log.dir system property before the cluster startup.
Can't believe I missed that one :)
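
The ordering fix can be sketched with plain JDK calls (the class name and the tmpdir choice are placeholders, not from the thread): the property has to be in place before any startup code reads it, and a static initializer runs before the test framework ever calls setUp().

```java
import java.io.File;

public class LogDirOrderingSketch {
    // A static initializer runs before any test method or setUp() call,
    // so the property is set before the mini cluster starts.
    static {
        System.setProperty("hadoop.log.dir", System.getProperty("java.io.tmpdir"));
    }

    public static void main(String[] args) {
        // Stand-in for the cluster-startup code that reads the property.
        File logDir = new File(System.getProperty("hadoop.log.dir"));
        System.out.println(logDir.getAbsolutePath());
    }
}
```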

As a side note (which you might already be aware of), the example class
you're using in Chapter 7 (PiEstimator) has changed in Hadoop 0.19.1 such
that the example code no longer works.  The new one is a little trickier to
test.

I'm looking forward to seeing the rest of the book.  And that delegate test
harness when it's available :)



-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23065441.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
BTW, that stack trace looks like the hadoop.log.dir issue. This is the code
from the init method in JobHistory:

LOG_DIR = conf.get("hadoop.job.history.location" ,
        "file:///" + new File(
        System.getProperty("hadoop.log.dir")).getAbsolutePath()
        + File.separator + "history");

It looks like the hadoop.log.dir system property is not set. Note: not an
environment variable, not a configuration parameter, but a system property.

Try a *System.setProperty("hadoop.log.dir","/tmp");* in your code before you
initialize the virtual cluster.
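
The crash mechanism is plain JDK behavior: the File(String) constructor throws a NullPointerException for a null pathname, which is exactly what happens when System.getProperty("hadoop.log.dir") returns null in the snippet above. A minimal reproduction (class name is illustrative):

```java
import java.io.File;

public class LogDirNpeRepro {
    public static void main(String[] args) {
        System.clearProperty("hadoop.log.dir");
        try {
            // Mirrors the JobHistory.init() line: getProperty returns null,
            // and File(String) rejects a null pathname.
            new File(System.getProperty("hadoop.log.dir")).getAbsolutePath();
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, matching the JobTracker crash");
        }
    }
}
```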






-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422

Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
I have actually built an add-on class on top of ClusterMapReduceDelegate that
just runs a virtual cluster that persists for running tests against. It is very
nice, as you can interact with it via the web UI, especially since the virtual
cluster stuff is somewhat flaky under Windows.

I have a question in to the editor about the sample code.





Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
I actually picked up the alpha PDFs of your book; great job.

I'm following the example in chapter 7 to the letter now and am still
getting the same problem.  Two quick questions (and thanks in advance for
your time)...

Is the ClusterMapReduceDelegate class available anywhere yet?

Adding ~/hadoop/libs/*.jar in its entirety to my pom.xml is a lot of bulk,
so I've avoided it until now.  Are there any libs in there that are
absolutely necessary for this test to work?

Thanks again,
bc




-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23041470.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Extending ClusterMapReduceTestCase

Posted by jason hadoop <ja...@gmail.com>.
I have a nice variant of this in the ch7 examples section of my book,
including a standalone wrapper around the virtual cluster that allows multiple
test instances to share the virtual cluster and makes it easier to poke around
in the input and output datasets.

It even works decently under Windows; my editor insists on a version of Word
too recent for CrossOver.




Re: Extending ClusterMapReduceTestCase

Posted by czero <br...@gmail.com>.
Sorry, I forgot to include the non-IntelliJ console output :)

09/04/13 12:07:14 ERROR mapred.MiniMRCluster: Job tracker crashed
java.lang.NullPointerException
        at java.io.File.<init>(File.java:222)
        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:143)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1110)
        at
org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:143)
        at
org.apache.hadoop.mapred.MiniMRCluster$JobTrackerRunner.run(MiniMRCluster.java:96)
        at java.lang.Thread.run(Thread.java:637)

I managed to pick up the chapter of the Hadoop book that Jason mentions, the
one that deals with unit testing (great chapter, btw), and it looks like
everything is in order.  He points out that this error is typically caused by
a bad hadoop.log.dir or a missing log4j.properties, but I verified that my
dir is OK and that my hadoop-0.19.1-core.jar has log4j.properties in it.

I also tried running the same test with hadoop-core/test 0.19.0 - same
thing.

Thanks again,

bc 


-- 
View this message in context: http://www.nabble.com/Extending-ClusterMapReduceTestCase-tp22440254p23024597.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.