Posted to common-issues@hadoop.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2009/09/12 01:21:57 UTC

[jira] Created: (HADOOP-6255) Create an rpm target in the build.xml

Create an rpm target in the build.xml
-------------------------------------

                 Key: HADOOP-6255
                 URL: https://issues.apache.org/jira/browse/HADOOP-6255
             Project: Hadoop Common
          Issue Type: New Feature
            Reporter: Owen O'Malley


We should be able to create RPMs for Hadoop releases.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869272#action_12869272 ] 

Steve Loughran commented on HADOOP-6255:
----------------------------------------

OK.
# Presumably we'd have a hadoop-avro too, or would that start off in hadoop-common.noarch?
# For conf, I think we could pre-generate a few example configurations (e.g. hadoop-conf-standalone), and I'm assuming one conf RPM for everything instead of separate common-conf, hdfs-conf and mapred-conf packages.
# We should also redistribute the .tar file needed for someone on a Unix box with rpmbuild installed to create their own conf RPMs. That's effectively what I do in SmartFrog, where the configuration also tells the runtime what services to deploy on startup. That way, anyone is free to create their own -conf RPM from a local set of files and push it out to the cluster.

How do we start this? Is there a bit of SVN where we can begin to prototype something?




[jira] Commented: (HADOOP-6255) Create an rpm target in the build.xml

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759593#action_12759593 ] 

Steve Loughran commented on HADOOP-6255:
----------------------------------------

# This should be a separate project from the others: it's integration work, and it will soon get big.
# The project would be Linux and OS X (with rpmbuild installed) only. Even on Linux, the right tools need to be installed.
# Basic RPMs are easy; passing rpmlint is harder.
# Testing: that's the fun part.

We test our RPMs by:
# SCPing to configured real/virtual machines. These are CentOS 5.x VMs, usually hosted under VMware. Under VirtualBox, RHEL5 and CentOS spin one CPU at 100% (VirtualBox bug #1233).
# Force-uninstalling any old versions, then installing the new ones.
# SSHing in and walking the shell scripts through their entry points:
{code}
  <target name="rpm-remote-initd"
      depends="rpm-ready-to-remote-install,rpm-remote-install"
      description="check that initd parses">
    <rootssh command="${remote-smartfrogd} start"/>
    <pause/>
    <rootssh command="${remote-smartfrogd} status"/>
    <pause/>
    <rootssh command="${remote-smartfrogd} start"/>
    <rootssh command="${remote-smartfrogd} status"/>
    <rootssh command="${remote-smartfrogd} stop"/>
    <rootssh command="${remote-smartfrogd} stop"/>
    <rootssh command="${remote-smartfrogd} restart"/>
    <pause/>
    <rootssh command="${remote-smartfrogd} status"/>
    <rootssh command="${remote-smartfrogd} restart"/>
    <pause/>
    <rootssh command="${remote-smartfrogd} status"/>
    <rootssh command="${remote-smartfrogd} stop"/>
  </target>
{code}
# Running {{rpm -qf}} against various files to verify that they are owned by a package. The RPM commands, executed remotely over SSH, are no fun to use in tests, as you have to look for certain strings in the response; error codes are not used to signal failures. Ouch.
{code}
    <fail>
      <condition>
        <or>
          <contains string="${rpm.queries.results}"
              substring="is not owned by any package"/>
          <contains string="${rpm.queries.results}"
              substring="No such file or directory"/>
        </or>
      </condition>
      One of the directories/files in the RPM is not declared as being owned by any RPM.
      This file/directory will not be managed correctly, or have the correct permissions
      on a hardened linux.
      ${rpm.queries.results}
    </fail>
{code}

For full functional testing, we also package the test source trees as JAR files published via Ivy, so that the release/ project can retrieve those test files and point them (by way of Java properties) at the remote machine. This is powerful, as you can be sure that the RPM installations really do work as intended; if you only test the local machine, you miss problems.

These tests don't verify all possible upgrades, which can be trouble, as RPM installs the new files before uninstalling the old ones.

The other issue is configuration. You can either mark all configuration files as {{%config(noreplace)}}, meaning people can edit them and upgrades won't stamp on them, or have a more structured process for managing conf files. Cloudera provide a web site to create a new configuration RPM; Apache could provide a .tar.gz file which contains everything needed to create your own configuration RPM.
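For illustration, the two options differ by one directive in the spec file (file names here are hypothetical): {{%config(noreplace)}} keeps a locally edited file on upgrade and writes the new default alongside it as {{.rpmnew}}, while plain {{%config}} installs the new file and saves the edited one as {{.rpmsave}}:

{code}
%files
%config(noreplace) /etc/hadoop/core-site.xml
%config(noreplace) /etc/hadoop/hdfs-site.xml
{code}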

Therefore, +1 to RPMs and debs:
# In a separate package.
# Named apache-hadoop (or similar). People out there are already releasing Hadoop RPMs; we don't want confusion.
# With all config files in the RPM marked as {{%config(noreplace)}} files, which end users can then safely edit, or a separate roll-your-own-config RPM tool.
# Once the tests are designed to run against remote systems, they should be run against the RPM installations.

I don't volunteer to write the spec files or the build files, but all of mine are available to look at, and I will formally release them as Apache-licensed if you want to use them as a starting point:
http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/release/
I could help with some of the functional testing now, provided it uses some of my real/virtual cluster management stuff to pick target hosts.




[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880065#action_12880065 ] 

Allen Wittenauer commented on HADOOP-6255:
------------------------------------------

Are we trying to build a package that fits the local OS, or are we trying to build a package that has Hadoop completely in one directory and is fairly generic?

If the former, trying to dictate where things like config files are installed is going to break. Every OS has fairly specific rules and expectations (Linux has the LSB, Solaris has filesystems(5), NeXTStep... I mean, OS X... has something documented somewhere, I'm sure, etc.). If the latter, then I'd make the following changes:

a) Drop the share level; it doesn't seem to serve a purpose.

b) etc/ should contain configs.

c) lib should contain hadoop-config.sh and *.so in addition to *.jar.

d) I take it hdfs/mapred/etc. is a 0.21 thing? Are we concerned about the number of people that have 'hdfs' aliased to 'hadoop dfs'? Or is it a replacement for that?

e) var/tmp and var/logs should be defined.





[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Konstantin Boudnik (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880058#action_12880058 ] 

Konstantin Boudnik commented on HADOOP-6255:
--------------------------------------------

After a good discussion with Owen and Roman, here's a better proposal:
- new top-level targets to be introduced into the Ant build:
-- package-32
-- package-64
-- package-noarch
-- package, which depends on all of the above

It is up to the build to find out what type of OS it is running on and either locate the appropriate packaging script and schema or fail with appropriate diagnostics.
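As an illustrative sketch (target and property names are mine, not an agreed interface), that detection could look something like this in Ant:

{code}
<!-- assumes <property environment="env"/> earlier in the build file -->
<target name="package-detect-os">
  <!-- look for rpmbuild on the PATH; a DEB build would check for dpkg-buildpackage -->
  <available file="rpmbuild" filepath="${env.PATH}" property="rpmbuild.present"/>
  <fail unless="rpmbuild.present"
      message="Cannot build packages on this OS: rpmbuild was not found on the PATH"/>
</target>
{code}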

The preliminary structure of installed packages is like this:
{noformat}
$root/
  bin/
    hadoop
    hadoop-daemon?.sh
    hdfs
    mapred
    <other user facing scripts>
  share/
    hadoop/
      bin/
        hadoop-config.sh
      lib/
        *.jar
      man/
      include/
        c++/
      sbin/
        jsvc
        taskcontroller
        runAs (for Herriot packages)
{noformat}
Some notes:
- jar files in {{share/hadoop/lib/}} have to have their owners set based on the components they come from (e.g. hdfs-client, hdfs-server, etc.)
- packages required by Hadoop but not included in its source code (LZO is a good example) shall be delivered via inter-package dependencies.
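As a sketch, both notes could be expressed in spec terms, with subpackages owning their own jars and a {{Requires:}} line pulling in the external bits (all names here are illustrative):

{code}
%package hdfs-client
Requires: hadoop-common = %{version}
Requires: lzo

%files hdfs-client
%{_datadir}/hadoop/lib/hadoop-hdfs-client-*.jar
{code}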

Something has to be done about configs. Shall they be placed under {{/etc/hadoop}}, perhaps?

The Herriot package (tests for a real cluster, as in HADOOP-6332) shall be created separately, because it requires bytecode instrumentation.



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Konstantin Boudnik (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12879263#action_12879263 ] 

Konstantin Boudnik commented on HADOOP-6255:
--------------------------------------------

It seems like people here are fond of having a project separate from Hadoop itself. While that seems like a good idea, I can see a slightly different approach. Here it is:

- native packaging specs (DEBs, RPMs, whatnot) are placed within the current build system, i.e. under {{$project.root/packaging/specs}}
- the existing {{build.xml}} is extended with an extra target, {{create-packages}}, which depends on (perhaps) {{tar}}
- an execution of
  {{ant create-packages -Dpackage.type=RPM|DEB -Dpackage.arch=noarch|x32|x64|ARM}}
  will exec a package-creation script from {{$project.root/packaging/scripts}} using the spec specified by {{package.type}}
- similarly, test packages can be produced by
  {{ant create-packages -Dpackage.type=RPM|DEB -Dpackage.class=test}}
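A sketch of what that extra target might look like (the script name and layout are hypothetical, filled in around the properties proposed above):

{code}
<target name="create-packages" depends="tar"
    description="build native packages from the tar output">
  <!-- package.type=RPM|DEB, package.arch=noarch|x32|x64|ARM -->
  <exec executable="${basedir}/packaging/scripts/create-package.sh"
      failonerror="true">
    <arg value="${package.type}"/>
    <arg value="${package.arch}"/>
    <arg value="${basedir}/packaging/specs"/>
  </exec>
</target>
{code}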

IMO this approach should allow for reuse of some of the existing build functionality without adding any extra hassle to the current build system. Besides, the packaging will be kept as part of the project itself while being physically separated from the build system.

Any dependencies resolvable as packages of a certain type need to be listed as external dependencies. Non-resolvable ones (like Avro above) will have to be included.

bq. We also need to separate the client from the server rpms
+1 on this one.




[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869242#action_12869242 ] 

Allen Wittenauer commented on HADOOP-6255:
------------------------------------------

> native binaries. I've never done native RPMs, don't know where to begin.

We'll basically need an RPM per architecture:

- noarch for the pure Java, shell, etc. bits
- i586 for the 32-bit compiled stuff
- x86_64 for the 64-bit compiled stuff

I suspect we'll also need this broken up by project, i.e.:

hadoop-X.X.X-common-X.X.X.noarch, hadoop-X.X.X-hdfs-X.X.X.noarch, hadoop-X.X.X-mapred-X.X.X.noarch,
hadoop-X.X.X-hdfs-X.X.X.i586, hadoop-X.X.X-hdfs-X.X.X.x86_64,
hadoop-X.X.X-mapred-X.X.X.i586, hadoop-X.X.X-mapred-X.X.X.x86_64,

where X.X.X is the version number. [We need to package per-version RPMs so that upgrades are tolerable. :( ]
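As a sketch, a release script could generate the per-project, per-arch file names mechanically; the naming scheme and version below are illustrative, not an agreed convention:

```shell
# Hypothetical name generator: one noarch RPM per project, plus
# arch-specific RPMs for the projects that ship native code.
version="0.21.0"
names=""
for project in common hdfs mapred; do
  for arch in noarch i586 x86_64; do
    # common is pure Java/shell, so it only gets a noarch package
    if [ "$project" = "common" ] && [ "$arch" != "noarch" ]; then
      continue
    fi
    names="$names hadoop-${project}-${version}.${arch}.rpm"
  done
done
echo "$names"
```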



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880179#action_12880179 ] 

Steve Loughran commented on HADOOP-6255:
----------------------------------------

I've been pushing for this to be downstream of the initial tar process, as you may want to let people build their own RPMs with their own config files. The tar file used to create the RPMs is a redistributable all of its own. If I understood source RPMs, they would probably fit into the story somehow too.





[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869091#action_12869091 ] 

Steve Loughran commented on HADOOP-6255:
----------------------------------------

Here is how I propose doing this:
# Have a subproject that uses Ivy to pull in what is needed.
# Start with the spec files of HADOOP-5615, updated with any changes needed for 0.21.
# Remove the requirement for Sun Java; maybe add an RPM to make that dependency explicit, as an option for people who need it and don't want OpenJDK/JRockit JVMs.
# Have the build file create the RPMs of all the JARs etc., with configuration as a separate RPM.

Trouble spots:
# Configuration. As an initial cut, take the conf dir and create an RPM from it.
# Native binaries. I've never done native RPMs; I don't know where to begin.
# Upgrades, and testing thereof. Where it gets really fun is when you think about FS upgrades combined with RPM upgrades.
# Automated testing. Help needed; we'd like this to work under Hudson too.

The goal here is to have a basic Apache Hadoop RPM set, something we can start off with in the 0.21 beta tests to see how it works.



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869453#action_12869453 ] 

Owen O'Malley commented on HADOOP-6255:
---------------------------------------

Avro is just another dependency. Until it starts generating RPMs, I suggest that we just include all of the dependencies in the RPM for hadoop-common.noarch.

We also need to separate the client from the server RPMs, since it would be really nice to be able to install the client code without root, but the server-side RPMs require root (to install the setuid task controller...).
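That root requirement shows up directly in the spec: the setuid bit can only be applied at install time by root. A hypothetical fragment (the path, mode and group here are illustrative, not an agreed layout):

{code}
%files -n hadoop-mapred-server
%attr(4750, root, hadoop) %{_libexecdir}/hadoop/task-controller
{code}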



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880266#action_12880266 ] 

Allen Wittenauer commented on HADOOP-6255:
------------------------------------------

After a night of sleep, the following occurred to me:

a) The stuff in sbin should actually be in libexec, along with hadoop-config.sh.
b) hdfs, mapred, and hadoop-daemon.sh (assuming those are all admin tools) should be in sbin.
c) Why include/c++? Shouldn't it just be include?



[jira] Commented: (HADOOP-6255) Create an rpm target in the build.xml

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759604#action_12759604 ] 

Steve Loughran commented on HADOOP-6255:
----------------------------------------

I should add that I do include my own Hadoop JARs in my RPMs, and that these RPMs are what get installed in the machine images (real or virtual) that are then used for all the cluster-based testing. If you are going to distribute your artifacts as RPMs, that's how you should test your code. Once you've automated RPM installation and the creation of gold VM images for your target VM infrastructure (VMware, Xen, EC2, etc.), then you can worry about cluster-scale testing of the artifacts.



[jira] Commented: (HADOOP-6255) Create an rpm target in the build.xml

Posted by "FROHNER Ákos (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12755949#action_12755949 ] 

FROHNER Ákos commented on HADOOP-6255:
--------------------------------------

Hi,

I would suggest the other way around: create an RPM spec file which uses a distribution tarball and calls the generic build.xml to build the Hadoop packages.

This eases adoption by upstream distributions, as they already have the framework to build packages from tarball+spec files (source RPMs).

And the same pattern can be used for Debian/Ubuntu packaging.
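A minimal sketch of that tarball+spec pattern, with the spec driving the project's existing Ant build (the version, paths and target names are illustrative):

{code}
Name:          hadoop
Version:       0.21.0
Release:       1
Summary:       Apache Hadoop
License:       Apache-2.0
Source0:       hadoop-%{version}.tar.gz
BuildRequires: ant

%description
Hadoop built from the release tarball via the generic build.xml.

%prep
%setup -q

%build
ant tar

%install
# copy the Ant output into %{buildroot} here

%files
# list the installed files here
{code}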



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Roman Shaposhnik (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880359#action_12880359 ] 

Roman Shaposhnik commented on HADOOP-6255:
------------------------------------------

I have two points to add to the discussion:

1. I'm wondering whether it would be useful to slice it a bit more thinly, IOW introducing the notion of these extra top-level targets available for packaging:
- hadoop-core
- hadoop-client
- hadoop-daemon
- hadoop-devel
- hadoop-javadoc

2. As for configs, I'd like to point out the example Debian has established with their packaging of 0.20. Basically, they created one package per node type (http://packages.qa.debian.org/h/hadoop.html) plus one package common among all the daemons:
- hadoop-daemons-common
- hadoop-jobtrackerd
- hadoop-tasktrackerd
- hadoop-datanoded
- hadoop-namenoded
- hadoop-secondarynamenoded

The packages themselves are pretty slim, containing only the hooks to make the daemons plug into the service management system (init.d in Debian's case, but one would imagine Solaris/SMF or anything like that also being an option for us). I also tend to believe that these could be reasonable packages to use for splitting the configs appropriately.
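A minimal sketch of the kind of dispatch shim such a slim per-daemon package might install as its init hook (the daemon name is illustrative; a real script would delegate to hadoop-daemon.sh rather than echo):

```shell
# Hypothetical /etc/init.d/hadoop-namenoded wrapper: the package ships
# only this dispatch shim, not the daemon code itself.
hadoop_namenoded() {
  case "$1" in
    start|stop|restart|status)
      # A real package would run something like: hadoop-daemon.sh "$1" namenode
      echo "hadoop-namenoded: $1"
      ;;
    *)
      echo "Usage: hadoop-namenoded {start|stop|restart|status}" >&2
      return 2
      ;;
  esac
}

hadoop_namenoded start
# prints "hadoop-namenoded: start"
```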



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880939#action_12880939 ] 

Allen Wittenauer commented on HADOOP-6255:
------------------------------------------

I might be wrong, but I'm pretty certain that hadoop-config.sh isn't meant to be directly user-runnable.  It falls into the same category as the taskController.




[jira] Updated: (HADOOP-6255) Create an rpm integration project

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-6255:
-----------------------------------

    Summary: Create an rpm integration project  (was: Create an rpm target in the build.xml)

changing the title



[jira] Commented: (HADOOP-6255) Create an rpm integration project

Posted by "Konstantin Boudnik (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880656#action_12880656 ] 

Konstantin Boudnik commented on HADOOP-6255:
--------------------------------------------

bq. a) the stuff in sbin should actually be in libexec, along with hadoop-config.sh

Perhaps libexec is a better place for the current {{sbin/}} stuff. But why {{hadoop-config.sh}}? I can see the latter being in {{sbin/}}, though.

bq. b) hdfs, mapred, and hadoop-daemon.sh (assuming those are all admin tools) should be in sbin
In a sense I agree: they are admin tools, and therefore need to be in {{sbin}}. They also aren't likely to be invoked directly, but rather through distro-specific start/stop facilities, e.g. {{init.d}} or {{service}}. So perhaps they do indeed belong in {{sbin/}}.

bq. c) why include/c++ ? shouldn't it just be include?
Agreed. I guess c++ was an example of the stuff we need to put into {{include/}}.

