Posted to common-user@hadoop.apache.org by Tomás Fernández Pena <tf...@usc.es> on 2014/11/20 11:11:14 UTC
Change the blocksize in 2.5.1
Hello everyone,
I've just installed Hadoop 2.5.1 from source code, and I'm having trouble
changing the default block size. In my hdfs-site.xml file I've set the property
<property>
  <name>dfs.blocksize</name>
  <value>67108864</value>
</property>
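For reference, 67108864 is exactly 64 MB; the value can be checked with the same shell arithmetic used in the command below:

```shell
# 64 MB expressed in bytes, as expected by dfs.blocksize
echo $((64 * 1024 * 1024))   # prints 67108864
```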
to get blocks of 64 MB, but the system seems to ignore this setting. When
I copy a new file, it uses a block size of 128 MB. Only if I specify the
block size when the file is created (i.e. hdfs dfs
-Ddfs.blocksize=$((64*1024*1024)) -put file .) does it use a block size
of 64 MB.
Any idea?
Best regards
Tomas
--
Tomás Fernández Pena
Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
Santiago de Compostela
Tel: +34 881816439, Fax: +34 881814112,
https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
81F6 435A
Re: Change the blocksize in 2.5.1
Posted by hadoop hive <ha...@gmail.com>.
Make it final and bounce the NameNode.
On Nov 20, 2014 3:42 PM, "Tomás Fernández Pena" <tf...@usc.es> wrote:
> Hello everyone,
>
> I've just installed Hadoop 2.5.1 from source code, and I have problems
> changing the default block size. My hdfs-site.xml file I've set the
> property
>
> <property>
> <name>dfs.blocksize</name>
> <value>67108864</value>
> </property>
>
> to have blocks of 64 MB, but it seems that the system ignore this
> setting. When I copy a new file, it uses a block size of 128M. Only if I
> specify the block size when the file is created (ie hdfs dfs
> -Ddfs.blocksize=$((64*1024*1024)) -put file .) it uses a block size of
> 64 MB.
>
> Any idea?
>
> Best regards
>
> Tomas
> --
> Tomás Fernández Pena
> Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
> Santiago de Compostela
> Tel: +34 881816439, Fax: +34 881814112,
> https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
> Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
> 81F6 435A
>
>
Re: Change the blocksize in 2.5.1
Posted by Tomás Fernández Pena <tf...@usc.es>.
Hi
Thanks for your kind answers. I've found the problem: I had only specified
the dfs.blocksize parameter in the hdfs-site.xml of the NameNode and
DataNodes, but not on the client.
My question now is, how can I prevent the client from changing the block
size? I've tried marking the dfs.blocksize property as final, but it
doesn't work:
$ cat etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>64m</value>
    <final>true</final>
  </property>
</configuration>
$ hdfs dfs -Ddfs.blocksize=$((32*1024*1024)) -put foo .
and foo still ends up with 32 MB blocks.
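To confirm what actually got recorded for the file, a couple of sketches may help (hedged: the hdfs commands assume a running HDFS and are shown commented out; the -stat "%o" format specifier may not exist in every version):

```shell
# Print the block size recorded for the uploaded file, in bytes:
# hdfs dfs -stat "%o" foo
# Or list the file's blocks and their sizes explicitly:
# hdfs fsck /user/$USER/foo -files -blocks
# The two sizes in play above, in bytes:
echo $((64 * 1024 * 1024))   # 67108864 -- the "final" 64m in hdfs-site.xml
echo $((32 * 1024 * 1024))   # 33554432 -- the -D override that won
```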
Best regards
Tomas
On 20/11/14 11:32, Rohith Sharma K S wrote:
> It seems HADOOP_CONF_DIR is poiniting different location!!?
> May be you can check hdfs-site.xml is in classpath when you execute hdfs command.
>
>
> Thanks & Regards
> Rohith Sharma K S
>
> -----Original Message-----
> From: Tomás Fernández Pena [mailto:tf.pena@gmail.com] On Behalf Of Tomás Fernández Pena
> Sent: 20 November 2014 15:41
> To: user@hadoop.apache.org
> Subject: Change the blocksize in 2.5.1
>
> Hello everyone,
>
> I've just installed Hadoop 2.5.1 from source code, and I have problems changing the default block size. My hdfs-site.xml file I've set the property
>
> <property>
> <name>dfs.blocksize</name>
> <value>67108864</value>
> </property>
>
> to have blocks of 64 MB, but it seems that the system ignore this setting. When I copy a new file, it uses a block size of 128M. Only if I specify the block size when the file is created (ie hdfs dfs
> -Ddfs.blocksize=$((64*1024*1024)) -put file .) it uses a block size of
> 64 MB.
>
> Any idea?
>
> Best regards
>
> Tomas
> --
> Tomás Fernández Pena
> Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
> Santiago de Compostela
> Tel: +34 881816439, Fax: +34 881814112,
> https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
> Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
> 81F6 435A
>
--
Tomás Fernández Pena
Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
Santiago de Compostela
Tel: +34 881816439, Fax: +34 881814112,
https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
81F6 435A
RE: Change the blocksize in 2.5.1
Posted by Rohith Sharma K S <ro...@huawei.com>.
It seems HADOOP_CONF_DIR is pointing to a different location.
Maybe you can check that hdfs-site.xml is on the classpath when you execute the hdfs command.
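A quick way to check both things (a sketch: the hdfs/hadoop invocations assume a working Hadoop install and are shown commented out):

```shell
# Where does the client look for *-site.xml?
echo "${HADOOP_CONF_DIR:-<unset>}"
# Effective value of dfs.blocksize as the client resolves it:
# hdfs getconf -confKey dfs.blocksize
# Confirm the conf directory is actually on the client classpath:
# hadoop classpath | tr ':' '\n' | grep -i conf
```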
Thanks & Regards
Rohith Sharma K S
-----Original Message-----
From: Tomás Fernández Pena [mailto:tf.pena@gmail.com] On Behalf Of Tomás Fernández Pena
Sent: 20 November 2014 15:41
To: user@hadoop.apache.org
Subject: Change the blocksize in 2.5.1
Hello everyone,
I've just installed Hadoop 2.5.1 from source code, and I have problems changing the default block size. My hdfs-site.xml file I've set the property
<property>
<name>dfs.blocksize</name>
<value>67108864</value>
</property>
to have blocks of 64 MB, but it seems that the system ignore this setting. When I copy a new file, it uses a block size of 128M. Only if I specify the block size when the file is created (ie hdfs dfs
-Ddfs.blocksize=$((64*1024*1024)) -put file .) it uses a block size of
64 MB.
Any idea?
Best regards
Tomas
--
Tomás Fernández Pena
Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
Santiago de Compostela
Tel: +34 881816439, Fax: +34 881814112,
https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
81F6 435A