Posted to hdfs-user@hadoop.apache.org by Charles Robertson <ch...@gmail.com> on 2014/09/16 11:39:42 UTC

Cannot start DataNode after adding new volume

Hi all,

I am running out of space on a data node, so I added a new volume to the
host, mounted it and made sure the permissions were set OK. Then I updated
the 'DataNode Directories' property in Ambari to include the new path
(comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I restarted
the components with stale configs for that host, but the DataNode wouldn't
come back up, reporting 'connection refused'. When I remove the new data
directory path from the property and restart, it starts fine.

What am I doing wrong?

Thanks,
Charles
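
For reference, the volume-prep steps described above look roughly like the
sketch below. The device name /dev/sdb1 and the ext4 filesystem are
assumptions; the mount point and data directory are the ones from this
thread, and hdfs:hadoop is the usual owner on an Ambari-managed host.

    # Assumed device; substitute the actual disk
    mkfs -t ext4 /dev/sdb1
    mkdir -p /data
    mount /dev/sdb1 /data
    # Create the new DataNode directory and hand it to the HDFS user
    mkdir -p /data/hdfs
    chown hdfs:hadoop /data/hdfs
    chmod 755 /data/hdfs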

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
Hi Samir,

That was it - I changed ownership of the /usr/lib/hadoop dir to hdfs:hadoop
and tried again, and the DataNode started successfully.

Thank you!
Charles
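
The fix described above as a command (run as root; add -R only if files
under the directory also need the new ownership, which this thread does not
specify):

    chown hdfs:hadoop /usr/lib/hadoop
    ls -ld /usr/lib/hadoop   # verify the owner and group now read hdfs hadoop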

On 16 September 2014 13:47, Samir Ahmic <ah...@gmail.com> wrote:

> Hi Charles,
>
> From the log it looks like the DataNode process doesn't have permission to
> write to the "/usr/lib/hadoop" dir. Can you check the permissions on
> "/usr/lib/hadoop" for the user under which the DataNode process is started?
> (Probably the hdfs user, but I'm not sure.)
>
> Cheers
> Samir
>
> On Tue, Sep 16, 2014 at 2:40 PM, Charles Robertson <
> charles.robertson@gmail.com> wrote:
>
>> I've found this in the logs:
>>
>> 2014-09-16 11:00:31,287 INFO  datanode.DataNode
>> (SignalLogger.java:register(91)) - registered UNIX signal handlers for
>> [TERM, HUP, INT]
>> 2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
>> Path /hadoop/hdfs/data should be specified as a URI in configuration files.
>> Please update hdfs configuration.
>> 2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
>> Path  should be specified as a URI in configuration files. Please update
>> hdfs configuration.
>> 2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
>> Path /data/hdfs should be specified as a URI in configuration files. Please
>> update hdfs configuration.
>> 2014-09-16 11:00:32,277 WARN  datanode.DataNode
>> (DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
>> /usr/lib/hadoop :
>> EPERM: Operation not permitted
>> at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
>> at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
>> at
>> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
>> at
>> org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
>> at
>> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
>> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)
>>
>> /hadoop/hdfs/data is the original default value. /data/hdfs is the
>> path I have added. All the documentation says it can be a comma-delimited
>> list of paths but this log is complaining it's not a URI? When it's
>> /hadoop/hdfs/data on its own it starts fine...?
>>
>> Regards,
>> Charles
>>
>> On 16 September 2014 12:08, Charles Robertson <
>> charles.robertson@gmail.com> wrote:
>>
>>> Hi Susheel,
>>>
>>> Tried that - same result. DataNode still not starting.
>>>
>>> Thanks,
>>> Charles
>>>
>>> On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
>>> wrote:
>>>
>>>> The VERSION file has to be the same across all the data node directories.
>>>>
>>>> So I suggested copying it as-is with an OS command and starting the DataNode.
>>>>
>>>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>>>> > Hi Susheel,
>>>> >
>>>> > Thanks for the reply. I'm not entirely sure what you mean.
>>>> >
>>>> > When I created the new directory on the new volume I simply created an
>>>> > empty directory. I see from the existing data node directory that it
>>>> > has a sub-directory called current containing a file called VERSION.
>>>> >
>>>> > Your advice is to create the 'current' sub-directory and copy the
>>>> > VERSION file across to it without changes? I see it has various GUIDs,
>>>> > and so I'm worried about it clashing with the VERSION file in the other
>>>> > data directory.
>>>> >
>>>> > Thanks,
>>>> > Charles
>>>> >
>>>> > On 16 September 2014 10:57, Susheel Kumar Gadalay <skgadalay@gmail.com>
>>>> > wrote:
>>>> >
>>>> >> Is it something to do with the current/VERSION file in the data node
>>>> >> directory?
>>>> >>
>>>> >> Just copy it from the existing directory and start.
>>>> >>
>>>> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>>>> >> > Hi all,
>>>> >> >
>>>> >> > I am running out of space on a data node, so I added a new volume to
>>>> >> > the host, mounted it and made sure the permissions were set OK. Then
>>>> >> > I updated the 'DataNode Directories' property in Ambari to include
>>>> >> > the new path (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs').
>>>> >> > Next I restarted the components with stale configs for that host,
>>>> >> > but the DataNode wouldn't come back up, reporting 'connection
>>>> >> > refused'. When I remove the new data directory path from the
>>>> >> > property and restart, it starts fine.
>>>> >> >
>>>> >> > What am I doing wrong?
>>>> >> >
>>>> >> > Thanks,
>>>> >> > Charles
>>>> >> >
>>>> >>
>>>> >
>>>>
>>>
>>>
>>
>

Re: Cannot start DataNode after adding new volume

Posted by Samir Ahmic <ah...@gmail.com>.
Hi Charles,

From the log it looks like the DataNode process doesn't have permission to
write to the "/usr/lib/hadoop" dir. Can you check the permissions on
"/usr/lib/hadoop" for the user under which the DataNode process is started?
(Probably the hdfs user, but I'm not sure.)

Cheers
Samir
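
A quick way to run the check suggested above (assumes a root shell; hdfs is
the user named in this thread):

    ls -ld /usr/lib/hadoop                               # owner, group and mode
    sudo -u hdfs touch /usr/lib/hadoop/.dn_write_test \
      && echo writable \
      && sudo -u hdfs rm /usr/lib/hadoop/.dn_write_test  # remove the probe file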

On Tue, Sep 16, 2014 at 2:40 PM, Charles Robertson <
charles.robertson@gmail.com> wrote:

> I've found this in the logs:
>
> 2014-09-16 11:00:31,287 INFO  datanode.DataNode
> (SignalLogger.java:register(91)) - registered UNIX signal handlers for
> [TERM, HUP, INT]
> 2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
> Path /hadoop/hdfs/data should be specified as a URI in configuration files.
> Please update hdfs configuration.
> 2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
> Path  should be specified as a URI in configuration files. Please update
> hdfs configuration.
> 2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
> Path /data/hdfs should be specified as a URI in configuration files. Please
> update hdfs configuration.
> 2014-09-16 11:00:32,277 WARN  datanode.DataNode
> (DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
> /usr/lib/hadoop :
> EPERM: Operation not permitted
> at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
> at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
> at
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
> at
> org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
> at
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)
>
> /hadoop/hdfs/data is the original default value. /data/hdfs is the
> path I have added. All the documentation says it can be a comma-delimited
> list of paths but this log is complaining it's not a URI? When it's
> /hadoop/hdfs/data on its own it starts fine...?
>
> Regards,
> Charles
>
> On 16 September 2014 12:08, Charles Robertson <charles.robertson@gmail.com>
> wrote:
>
>> Hi Susheel,
>>
>> Tried that - same result. DataNode still not starting.
>>
>> Thanks,
>> Charles
>>
>> On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
>> wrote:
>>
>>> The VERSION file has to be the same across all the data node directories.
>>>
>>> So I suggested copying it as-is with an OS command and starting the DataNode.
>>>
>>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>>> > Hi Susheel,
>>> >
>>> > Thanks for the reply. I'm not entirely sure what you mean.
>>> >
>>> > When I created the new directory on the new volume I simply created an
>>> > empty directory. I see from the existing data node directory that it
>>> has a
>>> > sub-directory called current containing a file called VERSION.
>>> >
>>> > Your advice is to create the 'current' sub-directory and copy the
>>> > VERSION file across to it without changes? I see it has various GUIDs,
>>> > and so I'm worried about it clashing with the VERSION file in the other
>>> > data directory.
>>> >
>>> > Thanks,
>>> > Charles
>>> >
>>> > On 16 September 2014 10:57, Susheel Kumar Gadalay <skgadalay@gmail.com>
>>> > wrote:
>>> >
>>> >> Is it something to do with the current/VERSION file in the data node
>>> >> directory?
>>> >>
>>> >> Just copy it from the existing directory and start.
>>> >>
>>> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>>> >> > Hi all,
>>> >> >
>>> >> > I am running out of space on a data node, so I added a new volume to
>>> >> > the host, mounted it and made sure the permissions were set OK. Then
>>> >> > I updated the 'DataNode Directories' property in Ambari to include
>>> >> > the new path (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs').
>>> >> > Next I restarted the components with stale configs for that host,
>>> >> > but the DataNode wouldn't come back up, reporting 'connection
>>> >> > refused'. When I remove the new data directory path from the
>>> >> > property and restart, it starts fine.
>>> >> >
>>> >> > What am I doing wrong?
>>> >> >
>>> >> > Thanks,
>>> >> > Charles
>>> >> >
>>> >>
>>> >
>>>
>>
>>
>

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
I've found this in the logs:

2014-09-16 11:00:31,287 INFO  datanode.DataNode
(SignalLogger.java:register(91)) - registered UNIX signal handlers for
[TERM, HUP, INT]
2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /hadoop/hdfs/data should be specified as a URI in configuration files.
Please update hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path  should be specified as a URI in configuration files. Please update
hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /data/hdfs should be specified as a URI in configuration files. Please
update hdfs configuration.
2014-09-16 11:00:32,277 WARN  datanode.DataNode
(DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
/usr/lib/hadoop :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
at
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
at
org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
at
org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)

/hadoop/hdfs/data is the original default value. /data/hdfs is the path
I have added. All the documentation says it can be a comma-delimited list
of paths but this log is complaining it's not a URI? When it's
/hadoop/hdfs/data on its own it starts fine...?

Regards,
Charles
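
For reference, Ambari's 'DataNode Directories' field maps to the
dfs.datanode.data.dir property in hdfs-site.xml, and the URI warnings above
are only advisory: plain paths still work, while the file:// form silences
them. A sketch of both forms with the paths from this thread (the second
warning, with the blank path, usually points to a stray comma or space in
the configured list):

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- comma-delimited, with no spaces or empty entries between commas -->
      <value>/hadoop/hdfs/data,/data/hdfs</value>
      <!-- equivalent URI form that avoids the warning:
           <value>file:///hadoop/hdfs/data,file:///data/hdfs</value> -->
    </property>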

On 16 September 2014 12:08, Charles Robertson <ch...@gmail.com>
wrote:

> Hi Susheel,
>
> Tried that - same result. DataNode still not starting.
>
> Thanks,
> Charles
>
> On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
> wrote:
>
>> The VERSION file has to be the same across all the data node directories.
>>
>> So I suggested copying it as-is with an OS command and starting the DataNode.
>>
>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> > Hi Susheel,
>> >
>> > Thanks for the reply. I'm not entirely sure what you mean.
>> >
>> > When I created the new directory on the new volume I simply created an
>> > empty directory. I see from the existing data node directory that it
>> > has a sub-directory called current containing a file called VERSION.
>> >
>> > Your advice is to create the 'current' sub-directory and copy the
>> > VERSION file across to it without changes? I see it has various GUIDs,
>> > and so I'm worried about it clashing with the VERSION file in the other
>> > data directory.
>> >
>> > Thanks,
>> > Charles
>> >
>> > On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
>> > wrote:
>> >
>> >> Is it something to do with the current/VERSION file in the data node
>> >> directory?
>> >>
>> >> Just copy it from the existing directory and start.
>> >>
>> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> >> > Hi all,
>> >> >
>> >> > I am running out of space on a data node, so I added a new volume to
>> >> > the host, mounted it and made sure the permissions were set OK. Then
>> >> > I updated the 'DataNode Directories' property in Ambari to include
>> >> > the new path (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs').
>> >> > Next I restarted the components with stale configs for that host,
>> >> > but the DataNode wouldn't come back up, reporting 'connection
>> >> > refused'. When I remove the new data directory path from the
>> >> > property and restart, it starts fine.
>> >> >
>> >> > What am I doing wrong?
>> >> >
>> >> > Thanks,
>> >> > Charles
>> >> >
>> >>
>> >
>>
>
>

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
I've found this in the logs:

014-09-16 11:00:31,287 INFO  datanode.DataNode
(SignalLogger.java:register(91)) - registered UNIX signal handlers for
[TERM, HUP, INT]
2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /hadoop/hdfs/data should be specified as a URI in configuration files.
Please update hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path  should be specified as a URI in configuration files. Please update
hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /data/hdfs should be specified as a URI in configuration files. Please
update hdfs configuration.
2014-09-16 11:00:32,277 WARN  datanode.DataNode
(DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
/usr/lib/hadoop :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
at
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
at
org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
at
org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)

/hadoop/hdfs/data is the original default value. /data/hdfs is the the path
I have added. All the documentation says it can be a comma-delimited list
of paths but this log is complaining it's not a URI? When it's
/hadoop/hdfs/data on its own it starts fine...?

Regards,
Charles

On 16 September 2014 12:08, Charles Robertson <ch...@gmail.com>
wrote:

> Hi Susheel,
>
> Tried that - same result. DataNode still not starting.
>
> Thanks,
> Charles
>
> On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
> wrote:
>
>> The VERSION file has to be same across all the data nodes directories.
>>
>> So I suggested to copy it as it is using OS command and start data node.
>>
>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> > Hi Susheel,
>> >
>> > Thanks for the reply. I'm not entirely sure what you mean.
>> >
>> > When I created the new directory on the new volume I simply created an
>> > empty directory. I see from the existing data node directory that it
>> has a
>> > sub-directory called current containing a file called VERSION.
>> >
>> > Your advice is to create the 'current' sub-directory and copy the
>> VERSION
>> > file across to it without changes? I see it has various guids, and so
>> I'm
>> > worried about it clashing with the VERSION file in the other data
>> > directory.
>> >
>> > Thanks,
>> > Charles
>> >
>> > On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
>> > wrote:
>> >
>> >> Is it something to do current/VERSION file in data node directory.
>> >>
>> >> Just copy from the existing directory and start.
>> >>
>> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> >> > Hi all,
>> >> >
>> >> > I am running out of space on a data node, so added a new volume to
>> the
>> >> > host, mounted it and made sure the permissions were set OK. Then I
>> >> updated
>> >> > the 'DataNode Directories' property in Ambari to include the new path
>> >> > (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
>> >> > restarted
>> >> > the components with stale configs for that host, but the DataNode
>> >> wouldn't
>> >> > come back up, reporting 'connection refused'. When I remove the new
>> >> > data
>> >> > directory path from the property and restart, it starts fine.
>> >> >
>> >> > What am I doing wrong?
>> >> >
>> >> > Thanks,
>> >> > Charles
>> >> >
>> >>
>> >
>>
>
>

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
I've found this in the logs:

014-09-16 11:00:31,287 INFO  datanode.DataNode
(SignalLogger.java:register(91)) - registered UNIX signal handlers for
[TERM, HUP, INT]
2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /hadoop/hdfs/data should be specified as a URI in configuration files.
Please update hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path  should be specified as a URI in configuration files. Please update
hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /data/hdfs should be specified as a URI in configuration files. Please
update hdfs configuration.
2014-09-16 11:00:32,277 WARN  datanode.DataNode
(DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
/usr/lib/hadoop :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
at
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
at
org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
at
org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)

/hadoop/hdfs/data is the original default value. /data/hdfs is the the path
I have added. All the documentation says it can be a comma-delimited list
of paths but this log is complaining it's not a URI? When it's
/hadoop/hdfs/data on its own it starts fine...?

Regards,
Charles

On 16 September 2014 12:08, Charles Robertson <ch...@gmail.com>
wrote:

> Hi Susheel,
>
> Tried that - same result. DataNode still not starting.
>
> Thanks,
> Charles
>
> On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
> wrote:
>
>> The VERSION file has to be same across all the data nodes directories.
>>
>> So I suggested to copy it as it is using OS command and start data node.
>>
>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> > Hi Susheel,
>> >
>> > Thanks for the reply. I'm not entirely sure what you mean.
>> >
>> > When I created the new directory on the new volume I simply created an
>> > empty directory. I see from the existing data node directory that it
>> has a
>> > sub-directory called current containing a file called VERSION.
>> >
>> > Your advice is to create the 'current' sub-directory and copy the
>> VERSION
>> > file across to it without changes? I see it has various guids, and so
>> I'm
>> > worried about it clashing with the VERSION file in the other data
>> > directory.
>> >
>> > Thanks,
>> > Charles
>> >
>> > On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
>> > wrote:
>> >
>> >> Is it something to do current/VERSION file in data node directory.
>> >>
>> >> Just copy from the existing directory and start.
>> >>
>> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> >> > Hi all,
>> >> >
>> >> > I am running out of space on a data node, so added a new volume to
>> the
>> >> > host, mounted it and made sure the permissions were set OK. Then I
>> >> updated
>> >> > the 'DataNode Directories' property in Ambari to include the new path
>> >> > (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
>> >> > restarted
>> >> > the components with stale configs for that host, but the DataNode
>> >> wouldn't
>> >> > come back up, reporting 'connection refused'. When I remove the new
>> >> > data
>> >> > directory path from the property and restart, it starts fine.
>> >> >
>> >> > What am I doing wrong?
>> >> >
>> >> > Thanks,
>> >> > Charles
>> >> >
>> >>
>> >
>>
>
>

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
I've found this in the logs:

2014-09-16 11:00:31,287 INFO  datanode.DataNode (SignalLogger.java:register(91)) - registered UNIX signal handlers for [TERM, HUP, INT]
2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) - Path  should be specified as a URI in configuration files. Please update hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) - Path /data/hdfs should be specified as a URI in configuration files. Please update hdfs configuration.
2014-09-16 11:00:32,277 WARN  datanode.DataNode (DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir /usr/lib/hadoop :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)

/hadoop/hdfs/data is the original default value; /data/hdfs is the path
I have added. All the documentation says it can be a comma-delimited list
of paths, but this log is complaining it's not a URI? When it's
/hadoop/hdfs/data on its own it starts fine...?
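
If the URI form matters, I would guess the property value should look
something like this (a sketch - I have not confirmed this is the form
Ambari expects):

    file:///hadoop/hdfs/data,file:///data/hdfs

There is also a warning about an empty path, which makes me wonder if a
stray comma or space crept into the property value. And the EPERM line
suggests the DataNode user cannot write to /usr/lib/hadoop, which it is
treating as a storage location for some reason. A quick check, assuming
the process runs as the hdfs user:

    # does the DataNode user own, or at least have write access to, it?
    ls -ld /usr/lib/hadoop
    sudo -u hdfs touch /usr/lib/hadoop/.dn-write-test \
      && rm /usr/lib/hadoop/.dn-write-test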

Regards,
Charles

On 16 September 2014 12:08, Charles Robertson <ch...@gmail.com>
wrote:

> Hi Susheel,
>
> Tried that - same result. DataNode still not starting.
>
> Thanks,
> Charles
>
> On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
> wrote:
>
>> The VERSION file has to be same across all the data nodes directories.
>>
>> So I suggested to copy it as it is using OS command and start data node.
>>
>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> > Hi Susheel,
>> >
>> > Thanks for the reply. I'm not entirely sure what you mean.
>> >
>> > When I created the new directory on the new volume I simply created an
>> > empty directory. I see from the existing data node directory that it
>> has a
>> > sub-directory called current containing a file called VERSION.
>> >
>> > Your advice is to create the 'current' sub-directory and copy the
>> VERSION
>> > file across to it without changes? I see it has various guids, and so
>> I'm
>> > worried about it clashing with the VERSION file in the other data
>> > directory.
>> >
>> > Thanks,
>> > Charles
>> >
>> > On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
>> > wrote:
>> >
>> >> Is it something to do current/VERSION file in data node directory.
>> >>
>> >> Just copy from the existing directory and start.
>> >>
>> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> >> > Hi all,
>> >> >
>> >> > I am running out of space on a data node, so added a new volume to
>> the
>> >> > host, mounted it and made sure the permissions were set OK. Then I
>> >> updated
>> >> > the 'DataNode Directories' property in Ambari to include the new path
>> >> > (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
>> >> > restarted
>> >> > the components with stale configs for that host, but the DataNode
>> >> wouldn't
>> >> > come back up, reporting 'connection refused'. When I remove the new
>> >> > data
>> >> > directory path from the property and restart, it starts fine.
>> >> >
>> >> > What am I doing wrong?
>> >> >
>> >> > Thanks,
>> >> > Charles
>> >> >
>> >>
>> >
>>
>
>

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
Hi Susheel,

Tried that - same result. DataNode still not starting.

Thanks,
Charles

On 16 September 2014 11:49, Susheel Kumar Gadalay <sk...@gmail.com>
wrote:

> The VERSION file has to be same across all the data nodes directories.
>
> So I suggested to copy it as it is using OS command and start data node.
>
> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
> > Hi Susheel,
> >
> > Thanks for the reply. I'm not entirely sure what you mean.
> >
> > When I created the new directory on the new volume I simply created an
> > empty directory. I see from the existing data node directory that it has
> a
> > sub-directory called current containing a file called VERSION.
> >
> > Your advice is to create the 'current' sub-directory and copy the VERSION
> > file across to it without changes? I see it has various guids, and so I'm
> > worried about it clashing with the VERSION file in the other data
> > directory.
> >
> > Thanks,
> > Charles
> >
> > On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
> > wrote:
> >
> >> Is it something to do current/VERSION file in data node directory.
> >>
> >> Just copy from the existing directory and start.
> >>
> >> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
> >> > Hi all,
> >> >
> >> > I am running out of space on a data node, so added a new volume to the
> >> > host, mounted it and made sure the permissions were set OK. Then I
> >> updated
> >> > the 'DataNode Directories' property in Ambari to include the new path
> >> > (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
> >> > restarted
> >> > the components with stale configs for that host, but the DataNode
> >> wouldn't
> >> > come back up, reporting 'connection refused'. When I remove the new
> >> > data
> >> > directory path from the property and restart, it starts fine.
> >> >
> >> > What am I doing wrong?
> >> >
> >> > Thanks,
> >> > Charles
> >> >
> >>
> >
>

Re: Cannot start DataNode after adding new volume

Posted by Susheel Kumar Gadalay <sk...@gmail.com>.
The VERSION file has to be the same across all the data node directories.

So I suggested copying it as-is with an OS command and then starting the DataNode.
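
Something like this, using the paths from your earlier mail (a sketch -
adjust the ownership to whichever user your DataNode runs as; hdfs:hadoop
is an assumption):

    # create the new storage layout and copy the metadata file verbatim
    mkdir -p /data/hdfs/current
    cp /hadoop/hdfs/data/current/VERSION /data/hdfs/current/VERSION
    # the DataNode user must own the new tree
    chown -R hdfs:hadoop /data/hdfs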

On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
> Hi Susheel,
>
> Thanks for the reply. I'm not entirely sure what you mean.
>
> When I created the new directory on the new volume I simply created an
> empty directory. I see from the existing data node directory that it has a
> sub-directory called current containing a file called VERSION.
>
> Your advice is to create the 'current' sub-directory and copy the VERSION
> file across to it without changes? I see it has various guids, and so I'm
> worried about it clashing with the VERSION file in the other data
> directory.
>
> Thanks,
> Charles
>
> On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
> wrote:
>
>> Is it something to do current/VERSION file in data node directory.
>>
>> Just copy from the existing directory and start.
>>
>> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
>> > Hi all,
>> >
>> > I am running out of space on a data node, so added a new volume to the
>> > host, mounted it and made sure the permissions were set OK. Then I
>> updated
>> > the 'DataNode Directories' property in Ambari to include the new path
>> > (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
>> > restarted
>> > the components with stale configs for that host, but the DataNode
>> wouldn't
>> > come back up, reporting 'connection refused'. When I remove the new
>> > data
>> > directory path from the property and restart, it starts fine.
>> >
>> > What am I doing wrong?
>> >
>> > Thanks,
>> > Charles
>> >
>>
>

Re: Cannot start DataNode after adding new volume

Posted by Charles Robertson <ch...@gmail.com>.
Hi Susheel,

Thanks for the reply. I'm not entirely sure what you mean.

When I created the new directory on the new volume I simply created an
empty directory. I see from the existing data node directory that it has a
sub-directory called current containing a file called VERSION.

Your advice is to create the 'current' sub-directory and copy the VERSION
file across to it without changes? I see it contains various GUIDs, so I'm
worried about them clashing with the VERSION file in the other data
directory.
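
For reference, the VERSION file in a data directory has roughly this
shape (values here are invented, and I believe the exact fields vary by
Hadoop version):

    #Tue Sep 16 10:57:00 BST 2014
    storageID=DS-1073741824-192.168.0.10-50010-1410861420000
    clusterID=CID-f1e2d3c4-b5a6-4c3d-9e8f-1234567890ab
    cTime=0
    storageType=DATA_NODE
    layoutVersion=-55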

Thanks,
Charles

On 16 September 2014 10:57, Susheel Kumar Gadalay <sk...@gmail.com>
wrote:

> Is it something to do current/VERSION file in data node directory.
>
> Just copy from the existing directory and start.
>
> On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
> > Hi all,
> >
> > I am running out of space on a data node, so added a new volume to the
> > host, mounted it and made sure the permissions were set OK. Then I
> updated
> > the 'DataNode Directories' property in Ambari to include the new path
> > (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I restarted
> > the components with stale configs for that host, but the DataNode
> wouldn't
> > come back up, reporting 'connection refused'. When I remove the new data
> > directory path from the property and restart, it starts fine.
> >
> > What am I doing wrong?
> >
> > Thanks,
> > Charles
> >
>

Re: Cannot start DataNode after adding new volume

Posted by Susheel Kumar Gadalay <sk...@gmail.com>.
Is it something to do with the current/VERSION file in the data node directory?

Just copy it from the existing directory and start the DataNode.

On 9/16/14, Charles Robertson <ch...@gmail.com> wrote:
> Hi all,
>
> I am running out of space on a data node, so added a new volume to the
> host, mounted it and made sure the permissions were set OK. Then I updated
> the 'DataNode Directories' property in Ambari to include the new path
> (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I restarted
> the components with stale configs for that host, but the DataNode wouldn't
> come back up, reporting 'connection refused'. When I remove the new data
> directory path from the property and restart, it starts fine.
>
> What am I doing wrong?
>
> Thanks,
> Charles
>
