Posted to hdfs-user@hadoop.apache.org by javaLee <wu...@gmail.com> on 2013/01/08 12:35:03 UTC

Why are the official Hadoop documents so messy?

For example, look at the documentation for the HDFS shell guide:

In 0.17, the prefix of the HDFS shell is hadoop dfs:
http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html

In 0.19, the prefix of the HDFS shell is hadoop fs:
http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr

In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls

Reading the official Hadoop documentation is such an ordeal.
As an end user, I am confused...
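
For the record, all three prefixes launch the same FsShell, so the
subcommands behave the same across them; only the launcher differs. A
minimal sketch, using an illustrative HDFS path:

    hadoop dfs -ls /user/hadoop   # 0.17-era form; later deprecated in favor of hdfs dfs
    hadoop fs -ls /user/hadoop    # generic filesystem shell; works with any Hadoop filesystem
    hdfs dfs -ls /user/hadoop     # HDFS-specific launcher documented from the 1.x line

hadoop fs operates on whatever filesystem the path or the default
configuration points at, while hdfs dfs targets HDFS specifically.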

Re: Why are the official Hadoop documents so messy?

Posted by Oleg Zhurakousky <ol...@gmail.com>.
No, I was not talking about wrappers of ASF projects. I was referring to non-ASF Open Source projects altogether (e.g., those hosted on GitHub, SourceForge, Google Code, etc.).

Oleg

On Jan 8, 2013, at 8:20 AM, Glen Mazza <gm...@talend.com> wrote:

> quote: "Obviously in the second there is a vested interested by such individual or company to promote the product therefore things like documentation tend to be much crispier then its ASF counterparts." -- I'm not so sure about that; in cases where companies provide commercial wraps of products but pool their resources with other companies in maintaining the open-souce product they're wrapping, their financial incentive would be in keeping their commercial wrap documentation top-notch to lure people to their wraps but less so the Apache website documentation.
> 
> I think the original poster just needs to help out with the documentation, check it out from SVN and submit patches to improve it (or at least submit a JIRA as Mohammad mentioned).  I cleaned up much of the Hadoop Wiki as I was learning from it.
> 
> Glen
> 
> On 01/08/2013 07:13 AM, Oleg Zhurakousky wrote:
>> Just a little clarification:
>> This is NOT "how open source works" by any means, as there are many Open Source projects with well-written and well-maintained documentation.
>> It all comes down to the two Open Source models:
>> 1. ASF Open Source - a pure democracy, or maybe even anarchy, with no governing body (individual or corporate) other than the ASF procedures/guidelines themselves
>> 2. Stewardship-based Open Source - controlled and managed by an individual or company
>> 
>> Obviously in the second there is a vested interest by such an individual or company to promote the product, so things like documentation tend to be much crisper than their ASF counterparts. However, the Stewardship-based Open Source model is much tighter about controlling what goes in, code quality, etc., than its ASF counterpart, which allows a freer flow of ideas from the community. So both are valid, both are open source, and both need to exist; we developers just need to deal with it. After all, it's Open Source, and the code is always a good source of documentation.
>> 
>> Cheers
>> Oleg
>> 
>> On Jan 8, 2013, at 6:59 AM, Mohammad Tariq <do...@gmail.com> wrote:
>> 
>>> Hello there,
>>> 
>>>      Thank you for the comments. But, just to let you know,
>>> it's community work, and no one in particular can be held
>>> responsible for these kinds of small things. This is how open
>>> source works. The people working on Hadoop have a lot
>>> of things to do; in spite of that, they are giving their best, and
>>> in the process these kinds of things sometimes happen.
>>> 
>>> I really appreciate your effort, but rather than posting here, you
>>> can raise a JIRA if you find something wrong and fix it yourself,
>>> or let somebody else fix it.
>>> 
>>> Many thanks.
>>> 
>>> 
>>> P.S. : Don't take it otherwise.
>>> 
>>> 
>>> Best Regards,
>>> Tariq
>>> +91-9741563634
>>> https://mtariq.jux.com/
>>> 
>>> 
>>> On Tue, Jan 8, 2013 at 5:05 PM, javaLee <wu...@gmail.com> wrote:
>>> For example, look at the documentation for the HDFS shell guide:
>>> 
>>> In 0.17, the prefix of the HDFS shell is hadoop dfs:
>>> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
>>> 
>>> In 0.19, the prefix of the HDFS shell is hadoop fs:
>>> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
>>> 
>>> In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
>>> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
>>> 
>>> Reading the official Hadoop documentation is such an ordeal.
>>> As an end user, I am confused...
>>> 
>> 
> 
> 
> -- 
> Glen Mazza
> Talend Community Coders - coders.talend.com
> blog: www.jroller.com/gmazza


Re: Why are the official Hadoop documents so messy?

Posted by Glen Mazza <gm...@talend.com>.
quote: "Obviously in the second there is a vested interested by such 
individual or company to promote the product therefore things like 
documentation tend to be much crispier then its ASF counterparts." -- 
I'm not so sure about that; in cases where companies provide commercial 
wraps of products but pool their resources with other companies in 
maintaining the open-souce product they're wrapping, their financial 
incentive would be in keeping their commercial wrap documentation 
top-notch to lure people to their wraps but less so the Apache website 
documentation.

I think the original poster just needs to help out with the 
documentation, check it out from SVN and submit patches to improve it 
(or at least submit a JIRA as Mohammad mentioned).  I cleaned up much of 
the Hadoop Wiki as I was learning from it.
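
A rough sketch of that workflow, assuming the Apache SVN layout of the
time (the JIRA key below is only a placeholder):

    svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk/ hadoop-trunk
    cd hadoop-trunk
    # ... edit the documentation sources ...
    svn diff > HADOOP-NNNN.patch   # attach this patch to the corresponding JIRA issue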

Glen

On 01/08/2013 07:13 AM, Oleg Zhurakousky wrote:
> Just a little clarification:
> This is NOT "how open source works" by any means, as there are many
> Open Source projects with well-written and well-maintained documentation.
> It all comes down to the two Open Source models:
> 1. ASF Open Source - a pure democracy, or maybe even anarchy, with no
> governing body (individual or corporate) other than the ASF
> procedures/guidelines themselves
> 2. Stewardship-based Open Source - controlled and managed by an
> individual or company
>
> Obviously in the second there is a vested interest by such an individual
> or company to promote the product, so things like documentation tend to
> be much crisper than their ASF counterparts. However, the
> Stewardship-based Open Source model is much tighter about controlling
> what goes in, code quality, etc., than its ASF counterpart, which allows
> a freer flow of ideas from the community. So both are valid, both are
> open source, and both need to exist; we developers just need to deal
> with it. After all, it's Open Source, and the code is always a good
> source of documentation.
>
> Cheers
> Oleg
>
> On Jan 8, 2013, at 6:59 AM, Mohammad Tariq <dontariq@gmail.com 
> <ma...@gmail.com>> wrote:
>
>> Hello there,
>>
>>      Thank you for the comments. But, just to let you know,
>> it's community work, and no one in particular can be held
>> responsible for these kinds of small things. This is how open
>> source works. The people working on Hadoop have a lot
>> of things to do; in spite of that, they are giving their best, and
>> in the process these kinds of things sometimes happen.
>>
>> I really appreciate your effort, but rather than posting here, you
>> can raise a JIRA if you find something wrong and fix it yourself,
>> or let somebody else fix it.
>>
>> Many thanks.
>>
>>
>> P.S. : Don't take it otherwise.
>>
>>
>> Best Regards,
>> Tariq
>> +91-9741563634
>> https://mtariq.jux.com/
>>
>>
>> On Tue, Jan 8, 2013 at 5:05 PM, javaLee <wuaner@gmail.com 
>> <ma...@gmail.com>> wrote:
>>
>>     For example, look at the documentation for the HDFS shell guide:
>>
>>     In 0.17, the prefix of the HDFS shell is hadoop dfs:
>>     http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
>>
>>     In 0.19, the prefix of the HDFS shell is hadoop fs:
>>     http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
>>
>>     In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
>>     http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
>>
>>     Reading the official Hadoop documentation is such an ordeal.
>>     As an end user, I am confused...
>>
>>
>


-- 
Glen Mazza
Talend Community Coders - coders.talend.com
blog: www.jroller.com/gmazza


Re: Why are the official Hadoop documents so messy?

Posted by Oleg Zhurakousky <ol...@gmail.com>.
Just a little clarification:
This is NOT "how open source works" by any means, as there are many Open Source projects with well-written and well-maintained documentation.
It all comes down to the two Open Source models:
1. ASF Open Source - a pure democracy, or maybe even anarchy, with no governing body (individual or corporate) other than the ASF procedures/guidelines themselves
2. Stewardship-based Open Source - controlled and managed by an individual or company

Obviously in the second there is a vested interest by such an individual or company to promote the product, so things like documentation tend to be much crisper than their ASF counterparts. However, the Stewardship-based Open Source model is much tighter about controlling what goes in, code quality, etc., than its ASF counterpart, which allows a freer flow of ideas from the community. So both are valid, both are open source, and both need to exist; we developers just need to deal with it. After all, it's Open Source, and the code is always a good source of documentation.

Cheers
Oleg

On Jan 8, 2013, at 6:59 AM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello there,
> 
>      Thank you for the comments. But, just to let you know,
> it's community work, and no one in particular can be held
> responsible for these kinds of small things. This is how open
> source works. The people working on Hadoop have a lot
> of things to do; in spite of that, they are giving their best, and
> in the process these kinds of things sometimes happen.
> 
> I really appreciate your effort, but rather than posting here, you
> can raise a JIRA if you find something wrong and fix it yourself,
> or let somebody else fix it.
> 
> Many thanks.
> 
> 
> P.S. : Don't take it otherwise.
> 
> 
> Best Regards,
> Tariq
> +91-9741563634
> https://mtariq.jux.com/
> 
> 
> On Tue, Jan 8, 2013 at 5:05 PM, javaLee <wu...@gmail.com> wrote:
> For example, look at the documentation for the HDFS shell guide:
> 
> In 0.17, the prefix of the HDFS shell is hadoop dfs:
> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
> 
> In 0.19, the prefix of the HDFS shell is hadoop fs:
> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
> 
> In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
> 
> Reading the official Hadoop documentation is such an ordeal.
> As an end user, I am confused...
> 


Re: Why are the official Hadoop documents so messy?

Posted by Mohammad Tariq <do...@gmail.com>.
Hello there,

     Thank you for the comments. But, just to let you know,
it's community work, and no one in particular can be held
responsible for these kinds of small things. This is how open
source works. The people working on Hadoop have a lot
of things to do; in spite of that, they are giving their best, and
in the process these kinds of things sometimes happen.

I really appreciate your effort, but rather than posting here, you
can raise a JIRA if you find something wrong and fix it yourself,
or let somebody else fix it.

Many thanks.


P.S. : Don't take it otherwise.


Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/


On Tue, Jan 8, 2013 at 5:05 PM, javaLee <wu...@gmail.com> wrote:

> For example, look at the documentation for the HDFS shell guide:
>
> In 0.17, the prefix of the HDFS shell is hadoop dfs:
> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
>
> In 0.19, the prefix of the HDFS shell is hadoop fs:
> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
>
> In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
>
> Reading the official Hadoop documentation is such an ordeal.
> As an end user, I am confused...
>

Re: Why are the official Hadoop documents so messy?

Posted by Jason Lee <wu...@gmail.com>.
Thanks, everyone; your replies are deeply appreciated.

I am just a newbie to Hadoop. I noticed this problem while looking for
documentation on the official site. No offense intended; I just think the
official documentation may confuse beginners like me.

I'd be glad to post this to the Hadoop JIRA.

Regards.
--Jason Lee

Re: Why are the official Hadoop documents so messy?

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Hi,

I am not sure if your complaint is as much about the changing interfaces
as it is about the documentation.

Please note that versions prior to 1.0 did not have stable interfaces as a
major requirement. Not by choice, but because the focus was on seemingly
more important things: functionality, stability, performance, etc. The
shell commands you refer to were going through the same evolution. From
now on, 1.x releases will not change these kinds of public interfaces and
APIs.

I don't mean to say that documentation is unimportant, just that this
might be less of an issue now, after the 1.x release. As others have
mentioned, it would be great if you could help improve the documentation
by filing or fixing JIRAs.

Thanks
Hemanth

On Tuesday, January 8, 2013, javaLee wrote:

> For example, look at the documentation for the HDFS shell guide:
>
> In 0.17, the prefix of the HDFS shell is hadoop dfs:
> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
>
> In 0.19, the prefix of the HDFS shell is hadoop fs:
> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
>
> In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
>
> Reading the official Hadoop documentation is such an ordeal.
> As an end user, I am confused...
>
