Posted to common-user@hadoop.apache.org by Ravi Prakash <ra...@gmail.com> on 2011/08/18 23:19:28 UTC

Help running built artifacts

Hi,

http://wiki.apache.org/hadoop/HowToContribute is a great resource detailing
the steps needed to build jars and tars from the source code. However, I am
still not sure of the best way to run the Hadoop servers (NN, SNN, DNs, JT,
TTs) using those built jars. Could we all please reach consensus that, for
an efficient dev cycle, we should be able to start the Hadoop servers easily
from built source code?

What are the ways people currently do this? Is there a script that gathers
the built artifacts into a single directory, which I can then label
HADOOP_PREFIX and run from? Whatever the best method is, I feel it should be
included in the HowToContribute wiki. It's not really effective testing if,
after making changes, I only run test-patch and the unit tests without
running an actual single-node cluster.
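For what it's worth, here is the rough sequence I have been trying on a
single node. This is only a sketch, not a blessed procedure: it assumes a
Maven-based branch (older branches use "ant tar" instead), and the tarball
name and version below are just examples that will differ per branch and
build.

```shell
# Build the distribution tarball from the source tree (Maven layout).
mvn package -Pdist -DskipTests -Dtar

# Unpack the built tarball somewhere and point HADOOP_PREFIX at it.
# The version in the path is an example; use whatever your build produced.
tar -xzf hadoop-dist/target/hadoop-0.23.0-SNAPSHOT.tar.gz -C /tmp
export HADOOP_PREFIX=/tmp/hadoop-0.23.0-SNAPSHOT

# Format a scratch namenode and bring up a single-node HDFS cluster.
"$HADOOP_PREFIX/bin/hdfs" namenode -format
"$HADOOP_PREFIX/sbin/start-dfs.sh"
```

If something like this is indeed the intended workflow, a short recipe along
these lines in HowToContribute would save new contributors a lot of guessing.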

Thanks
Ravi.

Fwd: Help running built artifacts

Posted by Ravi Prakash <ra...@gmail.com>.
To go some way toward answering my own question, this comment from Alejandro
is very helpful:
https://issues.apache.org/jira/browse/HDFS-2277?focusedCommentId=13089177&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13089177
