Posted to user@hadoop.apache.org by Christian Schneider <cs...@gmail.com> on 2013/09/05 23:29:29 UTC

How to support the (HDFS) FileSystem API of various Hadoop Distributions?

Hi,
I have started writing a small ncdu clone for browsing HDFS from the CLI
(http://nchadoop.org/). Currently I'm testing it against CDH4, but I'd like
to make it available to a wider group of users (Hortonworks, etc.).

Is it enough to build against different vanilla versions (for IPC 5 and 7)?

Best Regards,
Christian.

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

Posted by Harsh J <ha...@cloudera.com>.
Hi,

I think the simpler way is to distribute multiple pre-built packages,
each targeting one version/distribution, instead of trying to detect
which one to load at runtime.
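
One way to sketch that, assuming a Maven build (nchadoop's actual build setup may differ; the profile ids and version strings below are only illustrative), is a build profile per target distribution:

```xml
<!-- Illustrative Maven profiles: build one artifact per distribution with
     e.g. `mvn package -P hadoop1` or `mvn package -P cdh4`.
     Profile ids and versions are examples, not nchadoop's actual build. -->
<profiles>
  <profile>
    <id>hadoop1</id>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>cdh4</id>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.0.0-cdh4.3.0</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

Note that the CDH artifacts additionally require Cloudera's Maven repository to be configured in the POM or settings.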

On Sat, Sep 7, 2013 at 4:14 PM, Christian Schneider
<cs...@gmail.com> wrote:
> Hi Harsh,
> Thanks for the suggestion. And yes, the .isRoot() issue led to this question :).
> Also I wasn't sure about the ideal way of packaging this utility.
>
> My "dream" is to have a single binary that can deal with different versions
> (IPC 3, 5, 7, ...).
> Users would download the binary package and it would be compatible with a
> wide range of versions.
>
> Following your suggestion, I tried running 0.20.2 against CDH4.3, but it is
> not compatible with IPC v7.
>
> So I think I need to do the same as HBase's compat modules do: "somehow" the
> tool needs to check the version of the cluster and then load the correct
> implementation for that.
>
> But how to check the IPC version?
>
> Best Regards,
> Christian.
>
>
> P.S.: Thanks, that motivates me to continue :)
>
>
> 2013/9/6 Harsh J <ha...@cloudera.com>
>>
>> Oh and btw, nice utility! :)
>>
>> On Fri, Sep 6, 2013 at 7:50 AM, Harsh J <ha...@cloudera.com> wrote:
>> > Hello,
>> >
>> > There are a few additions to the FileSystem API that may bite you across
>> > versions, but if you pick an old, stable version such as Apache Hadoop
>> > 0.20.2 and stick to only its offered APIs, it will work better across
>> > different versions, as we try to keep FileSystem a stable interface as
>> > much as we can (there has also been more recent stabilization work). I
>> > looked over your current code and it seems to use fairly stable calls
>> > that have existed across several versions and exist today, but I did
>> > notice you had to remove an isRoot call in a previous commit, which may
>> > have led to this question?
>> >
>> > If that doesn't work for you, you can also switch out to using
>> > sub-modules carrying code specific to a build version type (such as
>> > what HBase does at https://github.com/apache/hbase/tree/trunk/ (see
>> > the hbase-hadoop-compat directories)).
>> >
>> > On Fri, Sep 6, 2013 at 2:59 AM, Christian Schneider
>> > <cs...@gmail.com> wrote:
>> >> Hi,
>> >> I have started writing a small ncdu clone for browsing HDFS from the CLI
>> >> (http://nchadoop.org/). Currently I'm testing it against CDH4, but I'd
>> >> like to make it available to a wider group of users (Hortonworks, etc.).
>> >>
>> >> Is it enough to build against different vanilla versions (for IPC 5 and 7)?
>> >>
>> >> Best Regards,
>> >> Christian.
>> >>
>> >
>> >
>> >
>> > --
>> > Harsh J
>>
>>
>>
>> --
>> Harsh J
>
>



-- 
Harsh J

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

Posted by Christian Schneider <cs...@gmail.com>.
Hi Harsh,
Thanks for the suggestion. And yes, the .isRoot() issue led to this question :).
Also I wasn't sure about the ideal way of packaging this utility.

My "dream" is to have a single binary that can deal with different versions
(IPC 3, 5, 7, ...).
Users would download the binary package and it would be compatible with a wide
range of versions.

Following your suggestion, I tried running 0.20.2 against CDH4.3, but it is
not compatible with IPC v7.

So I think I need to do the same as HBase's compat modules do: "somehow" the
tool needs to check the version of the cluster and then load the correct
implementation for that.

But how to check the IPC version?
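
The dispatch step described above could look roughly like this. This is a self-contained sketch only: the shim names and version table are hypothetical, and a real tool would obtain the cluster version over the wire (for example by catching the server's RPC version-mismatch error) rather than being handed a string:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical compat dispatch: map a detected cluster version string to the
// name of a version-specific shim, similar in spirit to HBase's compat modules.
public class CompatLoader {
    // Ordered prefix -> shim table; the first matching prefix wins.
    // Entries are illustrative, not a real compatibility matrix.
    private static final Map<String, String> SHIMS = new LinkedHashMap<>();
    static {
        SHIMS.put("0.20.", "IpcV3Compat");      // e.g. vanilla 0.20.x
        SHIMS.put("1.", "IpcV5Compat");         // e.g. Hadoop 1.x
        SHIMS.put("2.0.0-cdh4", "IpcV7Compat"); // e.g. CDH 4.x
    }

    /** Returns the shim name for a cluster version, or null if unsupported. */
    public static String implFor(String clusterVersion) {
        for (Map.Entry<String, String> e : SHIMS.entrySet()) {
            if (clusterVersion.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(implFor("2.0.0-cdh4.3.0")); // IpcV7Compat
        System.out.println(implFor("0.20.2"));         // IpcV3Compat
    }
}
```

In a real build the chosen shim name would then be resolved with `Class.forName` from whichever compat jar is on the classpath.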

Best Regards,
Christian.


P.S.: Thanks, that motivates me to continue :)


2013/9/6 Harsh J <ha...@cloudera.com>

> Oh and btw, nice utility! :)
>
> On Fri, Sep 6, 2013 at 7:50 AM, Harsh J <ha...@cloudera.com> wrote:
> > Hello,
> >
> > There are a few additions to the FileSystem API that may bite you across
> > versions, but if you pick an old, stable version such as Apache Hadoop
> > 0.20.2 and stick to only its offered APIs, it will work better across
> > different versions, as we try to keep FileSystem a stable interface as
> > much as we can (there has also been more recent stabilization work). I
> > looked over your current code and it seems to use fairly stable calls
> > that have existed across several versions and exist today, but I did
> > notice you had to remove an isRoot call in a previous commit, which may
> > have led to this question?
> >
> > If that doesn't work for you, you can also switch out to using
> > sub-modules carrying code specific to a build version type (such as
> > what HBase does at https://github.com/apache/hbase/tree/trunk/ (see
> > the hbase-hadoop-compat directories)).
> >
> > On Fri, Sep 6, 2013 at 2:59 AM, Christian Schneider
> > <cs...@gmail.com> wrote:
> >> Hi,
> >> I have started writing a small ncdu clone for browsing HDFS from the CLI
> >> (http://nchadoop.org/). Currently I'm testing it against CDH4, but I'd
> >> like to make it available to a wider group of users (Hortonworks, etc.).
> >>
> >> Is it enough to build against different vanilla versions (for IPC 5 and 7)?
> >>
> >> Best Regards,
> >> Christian.
> >>
> >
> >
> >
> > --
> > Harsh J
>
>
>
> --
> Harsh J
>

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

Posted by Harsh J <ha...@cloudera.com>.
Oh and btw, nice utility! :)

On Fri, Sep 6, 2013 at 7:50 AM, Harsh J <ha...@cloudera.com> wrote:
> Hello,
>
> There are a few additions to the FileSystem API that may bite you across
> versions, but if you pick an old, stable version such as Apache Hadoop
> 0.20.2 and stick to only its offered APIs, it will work better across
> different versions, as we try to keep FileSystem a stable interface as
> much as we can (there has also been more recent stabilization work). I
> looked over your current code and it seems to use fairly stable calls
> that have existed across several versions and exist today, but I did
> notice you had to remove an isRoot call in a previous commit, which may
> have led to this question?
>
> If that doesn't work for you, you can also switch out to using
> sub-modules carrying code specific to a build version type (such as
> what HBase does at https://github.com/apache/hbase/tree/trunk/ (see
> the hbase-hadoop-compat directories)).
>
> On Fri, Sep 6, 2013 at 2:59 AM, Christian Schneider
> <cs...@gmail.com> wrote:
>> Hi,
>> I have started writing a small ncdu clone for browsing HDFS from the CLI
>> (http://nchadoop.org/). Currently I'm testing it against CDH4, but I'd like
>> to make it available to a wider group of users (Hortonworks, etc.).
>>
>> Is it enough to build against different vanilla versions (for IPC 5 and 7)?
>>
>> Best Regards,
>> Christian.
>>
>
>
>
> --
> Harsh J



-- 
Harsh J

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

Posted by Harsh J <ha...@cloudera.com>.
Hello,

There are a few additions to the FileSystem API that may bite you across
versions, but if you pick an old, stable version such as Apache Hadoop
0.20.2 and stick to only its offered APIs, it will work better across
different versions, as we try to keep FileSystem a stable interface as
much as we can (there has also been more recent stabilization work). I
looked over your current code and it seems to use fairly stable calls
that have existed across several versions and exist today, but I did
notice you had to remove an isRoot call in a previous commit, which may
have led to this question?

If that doesn't work for you, you can also switch to sub-modules that
carry code specific to each build version, as HBase does at
https://github.com/apache/hbase/tree/trunk/ (see the
hbase-hadoop-compat directories).
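
The "stick to stable calls" advice can also be captured structurally: put the handful of operations the browser needs behind one narrow interface and let each version-specific module implement it. A minimal, self-contained sketch (the interface and stub names are hypothetical; the stub stands in for real FileSystem/FileStatus calls):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical narrow facade: the CLI browser calls only these operations,
// which have equivalents across Hadoop versions; per-version sub-modules
// would each implement this interface against their own Hadoop dependency.
interface DirScanner {
    List<String> list(String path); // analogous to FileSystem#listStatus
    long sizeOf(String path);       // analogous to FileStatus#getLen
}

// Stand-in implementation, taking the place of a version-specific module.
class StubScanner implements DirScanner {
    public List<String> list(String path) {
        return Arrays.asList(path + "/a", path + "/b");
    }
    public long sizeOf(String path) {
        return 42L; // fixed size, for illustration only
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        DirScanner scanner = new StubScanner();
        System.out.println(scanner.list("/user")); // [/user/a, /user/b]
    }
}
```

The tool's UI code then never touches version-specific classes directly, so only the small scanner modules need to change per distribution.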

On Fri, Sep 6, 2013 at 2:59 AM, Christian Schneider
<cs...@gmail.com> wrote:
> Hi,
> I have started writing a small ncdu clone for browsing HDFS from the CLI
> (http://nchadoop.org/). Currently I'm testing it against CDH4, but I'd like
> to make it available to a wider group of users (Hortonworks, etc.).
>
> Is it enough to build against different vanilla versions (for IPC 5 and 7)?
>
> Best Regards,
> Christian.
>



-- 
Harsh J

Harsh J