Posted to common-user@hadoop.apache.org by John Meza <j_...@hotmail.com> on 2013/03/04 20:00:55 UTC
dfs.datanode.du.reserved
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
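For context, the parameter as it exists today applies a single value to every configured volume. It is set in hdfs-site.xml roughly as below (the value shown is illustrative, not a recommendation):

```xml
<!-- hdfs-site.xml: reserve ~10 GB of non-HDFS space on EACH datanode volume -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```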
RE: dfs.datanode.du.reserved
Posted by John Meza <j_...@hotmail.com>.
Ok. I'll add my ideas and comments.
From: harsh@cloudera.com
Date: Thu, 7 Mar 2013 06:37:20 +0530
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
Hey John,
Ideas, comments and patches are welcome on https://issues.apache.org/jira/browse/HDFS-1564 for achieving this!
On Wed, Mar 6, 2013 at 9:56 PM, John Meza <j_...@hotmail.com> wrote:
Thanks for the reply. This sounds like it has potential, but it also seems to be a rather duct-tape type of workaround. It would be nice if there were a mod to dfs.datanode.du.reserved that worked within Hadoop, which would imply that Hadoop was a little more certain to adhere to it.
I understand that dfs.datanode.du.reserved defines reserved storage on each volume. I would like to give each volume a different reserved value.
An example:
<name>dfs.datanode.du.reserved</name>
<value>///hstore1/dfs/dn:161061273600,///hstore2/dfs/dn:53687091200</value>
Or something similar. thanks
John
Date: Wed, 6 Mar 2013 10:25:17 +0100
Subject: Re: dfs.datanode.du.reserved
From: dechouxb@gmail.com
To: user@hadoop.apache.org
Not that I know of. For that you would need to be able to identify each volume, and as of now that isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, by using different partitions/mounts for the datanode and jobtracker storage. That should solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
I'm probably not being clear. This seems to describe it: dfs.datanode.du.reserved configured per-volume.
https://issues.apache.org/jira/browse/HDFS-1564
thanks
John
From: outlawdba@gmail.com
Date: Mon, 4 Mar 2013 15:37:36 -0500
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
From various testing I have done it is possible to reserve 0, though that could cause the obvious side effect of reserving zero disk space :) I have only tested this in a development environment, however. There are various tuning white papers and other benchmarks where the same has been tested.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
--
Ellis R. Miller
937.829.2380
Mundo Nulla Fides
--
Harsh J
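The per-volume syntax John sketches above ("path:bytes" pairs, comma-separated) could be interpreted along these lines. This is a hypothetical illustration only; HDFS-1564 had not settled on a real format at the time of this thread, and the function name is invented for the sketch:

```python
# Hypothetical parser for a per-volume reservation value of the form
# "path:bytes,path:bytes", as proposed (not implemented) in this thread.

def parse_per_volume_reserved(value):
    """Parse 'path:bytes,path:bytes' into a dict mapping path -> reserved bytes."""
    reserved = {}
    for entry in value.split(","):
        # rpartition splits on the LAST colon, so colons inside paths are tolerated
        path, _, size = entry.rpartition(":")
        reserved[path] = int(size)
    return reserved

example = "/hstore1/dfs/dn:161061273600,/hstore2/dfs/dn:53687091200"
print(parse_per_volume_reserved(example))
# -> {'/hstore1/dfs/dn': 161061273600, '/hstore2/dfs/dn': 53687091200}
```

The paths here use single leading slashes for clarity; the triple-slash form in the quoted email is reproduced there as written.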
Re: dfs.datanode.du.reserved
Posted by Harsh J <ha...@cloudera.com>.
Hey John,
Ideas, comments and patches are welcome on
https://issues.apache.org/jira/browse/HDFS-1564 for achieving this!
On Wed, Mar 6, 2013 at 9:56 PM, John Meza <j_...@hotmail.com> wrote:
> Thanks for the reply. This sounds like it has potential, but it also seems
> to be a rather duct-tape type of workaround. It would be nice if there were
> a mod to dfs.datanode.du.reserved that worked within Hadoop, which would
> imply that Hadoop was a little more certain to adhere to it.
>
> I understand that dfs.datanode.du.reserved defines reserved storage on each
> volume. I would like to give each volume a different reserved value.
>
> An example:
>
> <name>dfs.datanode.du.reserved</name>
> <value>///hstore1/dfs/dn:161061273600,///hstore2/dfs/dn:53687091200</value>
>
> Or something similar.
> thanks
> John
> ------------------------------
> Date: Wed, 6 Mar 2013 10:25:17 +0100
> Subject: Re: dfs.datanode.du.reserved
> From: dechouxb@gmail.com
> To: user@hadoop.apache.org
>
>
> Not that I know of. For that you would need to be able to identify each
> volume, and as of now that isn't the case.
> BUT it can be done without Hadoop knowing about it, at the OS level, using
> different partitions/mounts for datanode and jobtracker stuff. That should
> solve your problem.
>
> Regards
>
> Bertrand
>
> On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
>
> I'm probably not being clear.
> this seems to describe it: dfs.datanode.du.reserved configured
> per-volume.
> https://issues.apache.org/jira/browse/HDFS-1564
>
> thanks
> John
> ------------------------------
> From: outlawdba@gmail.com
> Date: Mon, 4 Mar 2013 15:37:36 -0500
> Subject: Re: dfs.datanode.du.reserved
> To: user@hadoop.apache.org
>
>
> From various testing I have done it is possible to reserve 0, though that
> could cause the obvious side effect of reserving zero disk space :) I have
> only tested this in a development environment, however. There are various
> tuning white papers and other benchmarks where the same has been tested.
>
> Thanks.
>
> On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
>
> the parameter: dfs.datanode.du.reserved is used to reserve disk space PER
> datanode. Is it possible to reserve a different amount of disk space per
> DISK?
>
> thanks
> John
>
>
>
>
> --
> Ellis R. Miller
> 937.829.2380
>
>
> Mundo Nulla Fides
>
>
>
>
>
>
>
>
>
--
Harsh J
RE: dfs.datanode.du.reserved
Posted by John Meza <j_...@hotmail.com>.
Thanks for the reply. This sounds like it has potential, but it also seems to be a rather duct-tape type of workaround. It would be nice if there were a mod to dfs.datanode.du.reserved that worked within Hadoop, which would imply that Hadoop was a little more certain to adhere to it.
I understand that dfs.datanode.du.reserved defines reserved storage on each volume. I would like to give each volume a different reserved value.
An example:
<name>dfs.datanode.du.reserved</name>
<value>///hstore1/dfs/dn:161061273600,///hstore2/dfs/dn:53687091200</value>
Or something similar. thanks
John
Date: Wed, 6 Mar 2013 10:25:17 +0100
Subject: Re: dfs.datanode.du.reserved
From: dechouxb@gmail.com
To: user@hadoop.apache.org
Not that I know of. For that you would need to be able to identify each volume, and as of now that isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, by using different partitions/mounts for the datanode and jobtracker storage. That should solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
I'm probably not being clear. This seems to describe it: dfs.datanode.du.reserved configured per-volume.
https://issues.apache.org/jira/browse/HDFS-1564
thanks
John
From: outlawdba@gmail.com
Date: Mon, 4 Mar 2013 15:37:36 -0500
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
From various testing I have done it is possible to reserve 0, though that could cause the obvious side effect of reserving zero disk space :) I have only tested this in a development environment, however. There are various tuning white papers and other benchmarks where the same has been tested.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
--
Ellis R. Miller
937.829.2380
Mundo Nulla Fides
RE: dfs.datanode.du.reserved
Posted by John Meza <j_...@hotmail.com>.
Thanks for the reply. This sounds like it has potential, but also seems to be a rather duct-tape type of work around. It would be nice if there was a mod to dfs.datanode.du.reserved that worked within Hadoop, so that would imply that hadoop was a little more certain to adhere to it.
I understand that dfs.datanode.du.reserved defines reservd storage on each volume. I would like to give each volume a different reserved value.
An example: <name>dfs.datanode.du.reserved</name>
<value>///hstore1/dfs/dn:161061273600,///hstore2/dfs/dn:53687091200</value>Or something similiar. thanks
John
Date: Wed, 6 Mar 2013 10:25:17 +0100
Subject: Re: dfs.datanode.du.reserved
From: dechouxb@gmail.com
To: user@hadoop.apache.org
Not that I know. If so you should be able to identify each volume and as of now this isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, using different partitions/mounts for datanode and jobtracker stuff. That should solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
I'm probably not being clear.this seems to describe it: dfs.datanode.du.reserved configured per-volume.
https://issues.apache.org/jira/browse/HDFS-1564
thanksJohn
From: outlawdba@gmail.com
Date: Mon, 4 Mar 2013 15:37:36 -0500
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
Possible to reserve 0 from various testing I have done yet that could cause the obvious side effect of achieving zero disk space:) Have only tested in development environment, however. Yet there are various tuning white papers and other benchmarks where the very same has been tested.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
--
Ellis R. Miller937.829.2380
Mundo Nulla Fides
RE: dfs.datanode.du.reserved
Posted by John Meza <j_...@hotmail.com>.
Thanks for the reply. This sounds like it has potential, but also seems to be a rather duct-tape type of work around. It would be nice if there was a mod to dfs.datanode.du.reserved that worked within Hadoop, so that would imply that hadoop was a little more certain to adhere to it.
I understand that dfs.datanode.du.reserved defines reservd storage on each volume. I would like to give each volume a different reserved value.
An example: <name>dfs.datanode.du.reserved</name>
<value>///hstore1/dfs/dn:161061273600,///hstore2/dfs/dn:53687091200</value>Or something similiar. thanks
John
Date: Wed, 6 Mar 2013 10:25:17 +0100
Subject: Re: dfs.datanode.du.reserved
From: dechouxb@gmail.com
To: user@hadoop.apache.org
Not that I know. If so you should be able to identify each volume and as of now this isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, using different partitions/mounts for datanode and jobtracker stuff. That should solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
I'm probably not being clear.this seems to describe it: dfs.datanode.du.reserved configured per-volume.
https://issues.apache.org/jira/browse/HDFS-1564
thanksJohn
From: outlawdba@gmail.com
Date: Mon, 4 Mar 2013 15:37:36 -0500
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
Possible to reserve 0 from various testing I have done yet that could cause the obvious side effect of achieving zero disk space:) Have only tested in development environment, however. Yet there are various tuning white papers and other benchmarks where the very same has been tested.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
--
Ellis R. Miller937.829.2380
Mundo Nulla Fides
RE: dfs.datanode.du.reserved
Posted by John Meza <j_...@hotmail.com>.
Thanks for the reply. This sounds like it has potential, but also seems to be a rather duct-tape type of work around. It would be nice if there was a mod to dfs.datanode.du.reserved that worked within Hadoop, so that would imply that hadoop was a little more certain to adhere to it.
I understand that dfs.datanode.du.reserved defines reservd storage on each volume. I would like to give each volume a different reserved value.
An example: <name>dfs.datanode.du.reserved</name>
<value>///hstore1/dfs/dn:161061273600,///hstore2/dfs/dn:53687091200</value>Or something similiar. thanks
John
Date: Wed, 6 Mar 2013 10:25:17 +0100
Subject: Re: dfs.datanode.du.reserved
From: dechouxb@gmail.com
To: user@hadoop.apache.org
Not that I know. If so you should be able to identify each volume and as of now this isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, using different partitions/mounts for datanode and jobtracker stuff. That should solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
I'm probably not being clear.this seems to describe it: dfs.datanode.du.reserved configured per-volume.
https://issues.apache.org/jira/browse/HDFS-1564
thanksJohn
From: outlawdba@gmail.com
Date: Mon, 4 Mar 2013 15:37:36 -0500
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
Possible to reserve 0 from various testing I have done yet that could cause the obvious side effect of achieving zero disk space:) Have only tested in development environment, however. Yet there are various tuning white papers and other benchmarks where the very same has been tested.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
--
Ellis R. Miller937.829.2380
Mundo Nulla Fides
Re: dfs.datanode.du.reserved
Posted by Bertrand Dechoux <de...@gmail.com>.
Not that I know of. For that you would need to be able to identify each
volume, and as of now that isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, by using
different partitions/mounts for the datanode and jobtracker storage. That
should solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
> I'm probably not being clear.
> this seems to describe it: dfs.datanode.du.reserved configured
> per-volume.
> https://issues.apache.org/jira/browse/HDFS-1564
>
> thanks
> John
> ------------------------------
> From: outlawdba@gmail.com
> Date: Mon, 4 Mar 2013 15:37:36 -0500
> Subject: Re: dfs.datanode.du.reserved
> To: user@hadoop.apache.org
>
>
> From various testing I have done it is possible to reserve 0, though that
> could cause the obvious side effect of reserving zero disk space :) I have
> only tested this in a development environment, however. There are various
> tuning white papers and other benchmarks where the same has been tested.
>
> Thanks.
>
> On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
>
> the parameter: dfs.datanode.du.reserved is used to reserve disk space PER
> datanode. Is it possible to reserve a different amount of disk space per
> DISK?
>
> thanks
> John
>
>
>
>
> --
> Ellis R. Miller
> 937.829.2380
>
>
>
> Mundo Nulla Fides
>
>
>
>
>
>
>
>
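Bertrand's OS-level approach can be sketched as follows: give each data directory its own partition sized to leave the headroom you want on that disk, then list the mounts as usual. The paths below are hypothetical, echoing the mount names used in the thread:

```xml
<!-- hdfs-site.xml: each directory sits on its own partition/mount, so the
     partition size itself caps how much HDFS can consume on that disk,
     without Hadoop knowing anything about the per-disk limits -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hstore1/dfs/dn,/hstore2/dfs/dn</value>
</property>
```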
Re: dfs.datanode.du.reserved
Posted by Bertrand Dechoux <de...@gmail.com>.
Not that I know. If so you should be able to identify each volume and as of
now this isn't the case.
BUT it can be done without Hadoop knowing about it, at the OS level, using
different partitions/mounts for datanode and jobtracker stuff. That should
solve your problem.
Regards
Bertrand
On Mon, Mar 4, 2013 at 10:26 PM, John Meza <j_...@hotmail.com> wrote:
> I'm probably not being clear.
> this seems to describe it: dfs.datanode.du.reserved configured
> per-volume.
> https://issues.apache.org/jira/browse/HDFS-1564
>
> thanks
> John
> ------------------------------
> From: outlawdba@gmail.com
> Date: Mon, 4 Mar 2013 15:37:36 -0500
> Subject: Re: dfs.datanode.du.reserved
> To: user@hadoop.apache.org
>
>
> Possible to reserve 0 from various testing I have done yet that could
> cause the obvious side effect of achieving zero disk space:) Have only
> tested in development environment, however. Yet there are various tuning
> white papers and other benchmarks where the very same has been tested.
>
> Thanks.
>
> On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
>
> the parameter: dfs.datanode.du.reserved is used to reserve disk space PER
> datanode. Is it possible to reserve a different amount of disk space per
> DISK?
>
> thanks
> John
>
>
>
>
> --
> Ellis R. Miller
> 937.829.2380
>
>
> <http://my.wisestamp.com/link?u=2hxhdfd4p76bkhcm&site=www.wisestamp.com/email-install>
>
> Mundo Nulla Fides
>
>
>
> <http://my.wisestamp.com/link?u=gfbmwhzrwxzcrjqx&site=www.wisestamp.com/email-install>
>
>
>
>
>
RE: dfs.datanode.du.reserved
Posted by John Meza <j_...@hotmail.com>.
I'm probably not being clear. This seems to describe it: dfs.datanode.du.reserved configured per-volume.
https://issues.apache.org/jira/browse/HDFS-1564
thanks
John
From: outlawdba@gmail.com
Date: Mon, 4 Mar 2013 15:37:36 -0500
Subject: Re: dfs.datanode.du.reserved
To: user@hadoop.apache.org
From various testing I have done, it is possible to set the reservation to 0, though that has the obvious side effect of leaving zero disk space in reserve. :) I have only tested this in a development environment, but various tuning white papers and other benchmarks have tested the same.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
the parameter: dfs.datanode.du.reserved is used to reserve disk space PER datanode. Is it possible to reserve a different amount of disk space per DISK?
thanks
John
--
Ellis R. Miller
937.829.2380
Mundo Nulla Fides
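HDFS-1564 tracks exactly this per-volume configuration. As a rough sketch of how a per-volume value string like the one suggested in this thread might be parsed (the "path:bytes,path:bytes" syntax is hypothetical; it is not a supported Hadoop configuration format):

```python
# Hypothetical per-volume reserved-space syntax, modeled on the format
# suggested in this thread: "path:bytes,path:bytes". This is NOT a
# supported Hadoop configuration format, only an illustration.

def parse_per_volume_reserved(value):
    """Parse 'path:bytes,path:bytes' into {path: reserved_bytes}."""
    reserved = {}
    for entry in value.split(","):
        # rpartition tolerates ':' characters inside the path itself.
        path, _, amount = entry.rpartition(":")
        reserved[path] = int(amount)
    return reserved

spec = "/hstore1/dfs/dn:161061273600,/hstore2/dfs/dn:53687091200"
print(parse_per_volume_reserved(spec))
# {'/hstore1/dfs/dn': 161061273600, '/hstore2/dfs/dn': 53687091200}
```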
Re: dfs.datanode.du.reserved
Posted by Ellis Miller <ou...@gmail.com>.
From various testing I have done, it is possible to set the reservation to
0, though that has the obvious side effect of leaving zero disk space in
reserve. :) I have only tested this in a development environment, but
various tuning white papers and other benchmarks have tested the same.
Thanks.
On Mon, Mar 4, 2013 at 2:00 PM, John Meza <j_...@hotmail.com> wrote:
> the parameter: dfs.datanode.du.reserved is used to reserve disk space PER
> datanode. Is it possible to reserve a different amount of disk space per
> DISK?
>
> thanks
> John
>
--
Ellis R. Miller
937.829.2380
Mundo Nulla Fides
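The side effect mentioned above is worth spelling out: with a reservation of 0, HDFS may fill a disk completely, leaving nothing for logs, shuffle output, and other non-DFS consumers on the same filesystem. A small sketch of that arithmetic, with hypothetical numbers:

```python
# Sketch: the side effect of dfs.datanode.du.reserved = 0.
# All numbers below are hypothetical.

GiB = 1024**3

def hdfs_fill_limit(capacity, reserved):
    """Bytes HDFS is allowed to consume on one volume."""
    return capacity - reserved

capacity = 1000 * GiB  # hypothetical disk size
non_dfs = 50 * GiB     # logs, shuffle spill, etc. sharing the disk

for reserved in (0, 100 * GiB):
    limit = hdfs_fill_limit(capacity, reserved)
    left_over = capacity - limit  # equals the reservation
    status = "enough" if left_over >= non_dfs else "disk can fill completely"
    print(f"reserved={reserved // GiB:>3} GiB: HDFS may use up to "
          f"{limit // GiB} GiB, leaving {left_over // GiB} GiB ({status})")
```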