Posted to common-issues@hadoop.apache.org by "Vinod K V (JIRA)" <ji...@apache.org> on 2009/10/13 11:41:31 UTC

[jira] Commented: (HADOOP-5107) split the core, hdfs, and mapred jars from each other and publish them independently to the Maven repository

    [ https://issues.apache.org/jira/browse/HADOOP-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12765011#action_12765011 ] 

Vinod K V commented on HADOOP-5107:
-----------------------------------

The patch works overall.

bq. Ivy doesn't work offline. Every time we do a build, it goes and verifies the repo, whether the dependencies are present in the cache or not. If the dependencies are present locally, it doesn't download them. The same is the case with mvn-ant-task.jar: it doesn't download the jar every time, as usetimestamp is set to true.
It works like that on trunk. After the first run, I can go offline and still do my work. I think it works this way because we specify particular versioned jars, so Ivy doesn't actually go to the repo every time. This might change if we wish to use snapshot jars of common/mapred/hdfs.
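To illustrate the fixed-revision vs. snapshot distinction, here is a minimal ivysettings.xml sketch; the resolver names and patterns are illustrative, not taken from the actual Hadoop build files:

```xml
<!-- Sketch: fixed-revision artifacts are served from the local cache after
     the first download, so subsequent builds work offline. changingPattern
     marks SNAPSHOT revisions as "changing", which forces Ivy to re-check
     the repository on every resolve. -->
<ivysettings>
  <settings defaultResolver="default"/>
  <caches defaultCacheDir="${user.home}/.ivy2/cache"/>
  <resolvers>
    <ibiblio name="default" m2compatible="true"
             changingPattern=".*-SNAPSHOT"/>
  </resolvers>
</ivysettings>
```

So as long as we depend on fixed versions, the repo check after the first resolve is cheap or skipped entirely; switching to snapshot jars of common/mapred/hdfs would bring the network round-trips back.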

{quote}
> common project: Should we take this as an opportunity and rename the core jar to the common jar before publishing? It looks odd that the project name is common while the jar's name refers to core.
>>>> That would be quite a bit of work, and I would definitely want that to be in a different JIRA.
{quote}
Created MAPREDUCE-1101 for this.

{quote}
> Should `ant clean` delete maven-ant-tasks.jar every time? I guess not.
>>>> When I call ant clean, I would definitely expect a clean workspace.
There is also a different reason: I've seen people hit Ctrl-C halfway through while the ivy/maven-ant-tasks jar is downloading, leaving a partially downloaded jar. The next time a user runs the build, it fails because the jar file is corrupt, and they have to go delete it manually.
{quote}
Then we may wish to clean up ivy.jar too when we do ant clean.
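A minimal sketch of such a target (the target name, property names, and paths here are hypothetical, not from the actual build.xml):

```xml
<!-- Delete both bootstrap jars so an interrupted download cannot leave a
     corrupt jar behind; the next build re-fetches them cleanly. -->
<target name="clean-bootstrap"
        description="Remove the downloaded ivy and maven-ant-tasks jars">
  <delete file="${ivy.dir}/ivy-${ivy.version}.jar"/>
  <delete file="${ivy.dir}/maven-ant-tasks-${mvn-ant-tasks.version}.jar"/>
</target>
```

Making the regular clean target depend on this one would give the fully clean workspace described above, at the cost of re-downloading both jars on the next build.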

Also, as Giri has already mentioned, we will need a follow-up issue to clean up the list of dependencies, particularly for the contrib projects.
In any case, this issue is still blocked on the whole set of common/hdfs/mapred dependency issues. I'm just putting these comments in so we are ready.

> split the core, hdfs, and mapred jars from each other and publish them independently to the Maven repository
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5107
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5107
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 0.20.0
>            Reporter: Owen O'Malley
>            Assignee: Giridharan Kesavan
>         Attachments: common-trunk-v1.patch, common-trunk-v4.patch, common-trunk-v6.patch, common-trunk-v7.patch, common-trunk-v8.patch, common-trunk.patch, hadoop-hdfsd-v4.patch, hdfs-trunk-v1.patch, hdfs-trunk-v2.patch, hdfs-trunk-v6.patch, hdfs-trunk.patch, mapred-trunk-v1.patch, mapred-trunk-v2.patch, mapred-trunk-v3.patch, mapred-trunk-v4.patch, mapred-trunk-v5.patch, mapred-trunk-v6.patch, mapreduce-trunk.patch
>
>
> I think to support splitting the projects, we should publish the jars for 0.20.0 as independent jars to the Maven repository.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.