Posted to user@spark.apache.org by Tomer Benyamini <to...@gmail.com> on 2014/09/07 14:27:18 UTC

Adding quota to the ephemeral hdfs on a standalone spark cluster on ec2

Hi,

I would like to make sure I'm not exceeding the quota on the local
cluster's hdfs. I have a couple of questions:

1. How do I know the quota? Here's the output of hadoop fs -count -q,
which doesn't tell me much:

[root@ip-172-31-7-49 ~]$ hadoop fs -count -q /
  2147483647      2147482006            none             inf            4         1637      25412205559 /
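For reference, the columns of hadoop fs -count -q are QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, and PATHNAME. A small sketch, using the sample numbers from the output above, showing how the name quota relates to the object counts:

```shell
# Columns of `hadoop fs -count -q` (values canned from the output above):
#   QUOTA  REMAINING_QUOTA  SPACE_QUOTA  REMAINING_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
set -- 2147483647 2147482006 none inf 4 1637 25412205559 /
quota=$1; remaining=$2; dirs=$5; files=$6

# Sanity check: the name quota counts directories + files, so
# QUOTA - REMAINING_QUOTA should equal DIR_COUNT + FILE_COUNT.
echo "objects counted against name quota: $(( quota - remaining ))"
echo "dirs + files:                       $(( dirs + files ))"
```

So here SPACE_QUOTA is "none" and REMAINING_SPACE_QUOTA is "inf", i.e. no disk-space quota is set on / at all; the only limit shown is the root directory's default name quota of 2^31-1 objects, which this cluster is nowhere near.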

2. What should I do to increase the quota? Should I bring down the
existing slaves and upgrade to ones with more storage? Is there a way
to add disks to existing slaves? I'm using the default m1.large slaves
set up using the spark-ec2 script.
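For what it's worth, HDFS quotas are per-directory admin settings, independent of how much disk the slaves have. A sketch of the relevant commands (echoed rather than executed, since they need a live cluster; /user/tomer and the numbers are illustrative, and on newer Hadoop the tool is "hdfs dfsadmin" rather than "hadoop dfsadmin"):

```shell
# Sketch only: print the admin commands that manage HDFS quotas.
# The directory and limits below are hypothetical examples.
dir=/user/tomer
echo "hadoop dfsadmin -setQuota 1000000 $dir"    # name quota: max dirs + files
echo "hadoop dfsadmin -setSpaceQuota 2t $dir"    # space quota: max raw (replicated) bytes
echo "hadoop dfsadmin -clrQuota $dir"            # remove the name quota
echo "hadoop dfsadmin -clrSpaceQuota $dir"       # remove the space quota
```

Note that the space quota is charged for raw bytes, so with replication 3 a 2t space quota holds roughly 2/3 TB of user data.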

Thanks,
Tomer

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Adding quota to the ephemeral hdfs on a standalone spark cluster on ec2

Posted by Tomer Benyamini <to...@gmail.com>.
Thanks! I found the hdfs ui via this port - http://[master-ip]:50070/.
It shows only a 1-node hdfs, though, even though I have 4 slaves in my
cluster. Any idea why?
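A quick way to confirm what the namenode sees is "hadoop dfsadmin -report" on the master, which lists the live datanodes; if slaves are missing, their datanode daemons are probably not running or can't reach the namenode. A sketch of pulling the live count out of the report (the $report line is a canned sample; on a real cluster you'd use report=$(hadoop dfsadmin -report)):

```shell
# Sketch: extract the live-datanode count from `hadoop dfsadmin -report` output.
# $report is canned sample text standing in for the real command's output.
report="Datanodes available: 1 (1 total, 0 dead)"
live=$(printf '%s\n' "$report" | sed -n 's/.*Datanodes available: \([0-9]*\).*/\1/p')
echo "live datanodes: $live"
```

If slaves are indeed missing, restarting HDFS from the master with the stop-dfs.sh/start-dfs.sh scripts that spark-ec2 installs (under the ephemeral-hdfs directory on the master, if I recall correctly) usually brings them back.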

On Sun, Sep 7, 2014 at 4:29 PM, Ognen Duzlevski
<og...@gmail.com> wrote:
>
> On 9/7/2014 7:27 AM, Tomer Benyamini wrote:
>>
>> 2. What should I do to increase the quota? Should I bring down the
>> existing slaves and upgrade to ones with more storage? Is there a way
>> to add disks to existing slaves? I'm using the default m1.large slaves
>> set up using the spark-ec2 script.
>
> Take a look at: http://www.ec2instances.info/
>
> There you will find the available EC2 instances with their associated costs
> and how much ephemeral space they come with. Once you pick an instance you
> get only so much ephemeral space. You can always add drives but they will be
> EBS and not physically attached to the instance.
>
> Ognen
>



Re: Adding quota to the ephemeral hdfs on a standalone spark cluster on ec2

Posted by Ognen Duzlevski <og...@gmail.com>.
On 9/7/2014 7:27 AM, Tomer Benyamini wrote:
> 2. What should I do to increase the quota? Should I bring down the
> existing slaves and upgrade to ones with more storage? Is there a way
> to add disks to existing slaves? I'm using the default m1.large slaves
> set up using the spark-ec2 script.
Take a look at: http://www.ec2instances.info/

There you will find the available EC2 instances with their associated 
costs and how much ephemeral space they come with. Once you pick an 
instance you get only so much ephemeral space. You can always add drives 
but they will be EBS and not physically attached to the instance.
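As a back-of-the-envelope example (assuming m1.large's listed 2 x 420 GB instance-store volumes, per that site, and the 4 slaves mentioned earlier in the thread):

```shell
# Rough aggregate ephemeral capacity for the cluster in this thread.
# Assumes m1.large = 2 x 420 GB instance store (per ec2instances.info)
# and 4 slaves, as described above.
slaves=4
vols_per_slave=2
gb_per_vol=420
raw=$(( slaves * vols_per_slave * gb_per_vol ))
echo "raw ephemeral capacity:        $raw GB"
echo "usable at HDFS replication 3:  $(( raw / 3 )) GB"
```

That replication factor of 3 is the HDFS default; spark-ec2 may configure a different value, so check dfs.replication on the actual cluster.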

Ognen
