Posted to common-user@hadoop.apache.org by Guy Doulberg <Gu...@conduit.com> on 2011/04/08 23:38:40 UTC

RE: Developing, Testing, Distributing

Thanks, I think I will try your way of developing (replacing my ant setup).
________________________________________
From: Tsz Wo (Nicholas), Sze [s29752-hadoopgeneral@yahoo.com]
Sent: Friday, April 08, 2011 21:08
To: common-user@hadoop.apache.org
Subject: Re: Developing, Testing, Distributing

First of all, I am a Hadoop contributor and I am familiar with the Hadoop code
base/build mechanism.  Here is what I do:


Q1: What IDE are you using?
Eclipse.

Q2: What plugins for the IDE are you using?
No plugins.

Q3: How do you test your code? Which unit test libraries are you using, and how
do you run your automated tests after finishing development?
I use JUnit.  The tests are executed with ant, the same way we run them in
Hadoop development.
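The thread doesn't show any test code, but the usual pattern is to keep the per-record logic in a pure helper method and unit-test that directly. A minimal, dependency-free sketch in that spirit; the `splitWords` helper and class name are hypothetical, not from the thread:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: the kind of pure logic one unit-tests before
// wiring it into a Hadoop mapper.
public class WordSplitterTest {

    // Pure helper a mapper might delegate to: split a line into lowercase words.
    static List<String> splitWords(String line) {
        return Arrays.asList(line.toLowerCase().split("\\s+"));
    }

    public static void main(String[] args) {
        List<String> words = splitWords("Hadoop Map Reduce");
        if (!words.equals(Arrays.asList("hadoop", "map", "reduce"))) {
            throw new AssertionError("unexpected split: " + words);
        }
        System.out.println("ok");
    }
}
```

With JUnit, as named above, the same check would live in a `@Test` method and be picked up by the ant test target.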

Q4: Do you have test/QA/staging environments besides dev and production? How do
you keep them similar to production?
We, Yahoo!, have test clusters configured similarly to the production
clusters.

Q5: Code reuse: how do you build components that can be used in other jobs? Do
you build generic map or reduce classes?
I have my own framework for running generic computations and generic jobs.
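The thread doesn't describe that framework, but one common way to get this kind of reuse, sketched purely as an assumption, is to factor the per-record logic behind a small generic interface that many jobs share, so a concrete mapper or reducer just delegates to it. All names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a generic per-record transform that several jobs
// can share; a concrete Hadoop mapper would delegate to one of these.
interface RecordTransform<IN, OUT> {
    OUT apply(IN record);
}

public class GenericJobDemo {

    // Reusable driver logic: apply any transform to any list of records.
    static <IN, OUT> List<OUT> runLocal(RecordTransform<IN, OUT> t, List<IN> input) {
        List<OUT> out = new ArrayList<>();
        for (IN rec : input) {
            out.add(t.apply(rec));
        }
        return out;
    }

    public static void main(String[] args) {
        // One job plugs in string-length; another job could plug in parsing, etc.
        RecordTransform<String, Integer> length = s -> s.length();
        List<Integer> result = runLocal(length, List.of("ab", "xyz"));
        if (!result.equals(List.of(2, 3))) {
            throw new AssertionError("unexpected result: " + result);
        }
        System.out.println(result);
    }
}
```

The design point is that the transform carries no Hadoop dependency, so the same component can be exercised in plain unit tests and then wrapped by a mapper or reducer per job.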

Some more details:
1) svn checkout MapReduce trunk (or common/branches/branch-0.20 for 0.20)
2) compile everything using ant
3) setup eclipse
4) remove existing files under ./src/examples
5) develop my code under ./src/examples
6) add unit tests under ./src/test/mapred
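The six steps above, expressed as commands. The svn base URL and the exact ant target names are my assumptions and vary by branch; verify against the branch's build.xml:

```shell
# Sketch of the workflow above; URL and target names are assumptions.
svn checkout http://svn.apache.org/repos/asf/hadoop/mapreduce/trunk hadoop-mapreduce
cd hadoop-mapreduce

ant                      # compile everything
ant eclipse              # generate Eclipse project files (target name varies by branch)

rm -rf src/examples/*    # remove the stock example files
# ...develop your code under src/examples and tests under src/test/mapred...

ant test                 # run the unit tests
ant jar                  # build the jar
```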

I find this very convenient since (i) the build scripts can compile the example
code, run the unit tests, create jars, etc., and (ii) Hadoop contributors
maintain them.

Hope it helps.
Nicholas Sze