Posted to user@ambari.apache.org by Dustine Rene Bernasor <du...@thecyberguardian.com> on 2013/03/14 08:13:07 UTC

NameNode is failing to start

Hello,

I was installing Ambari 1.2.1. When I reached step 9, after the services
were installed, the NameNode could not be started.

The following exception appeared in the log:

2013-03-14 10:58:00,426 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
2013-03-14 10:58:00,427 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)

2013-03-14 10:58:00,428 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Crawler51.localdomain.com/192.168.3.51
************************************************************/

Thanks.

Dustine
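For anyone hitting the same "NameNode is not formatted" error, the check behind it can be sketched as follows. This is a sketch, not Ambari's code: a Hadoop 1.x storage directory counts as formatted once it contains current/VERSION, and the directory argument is whatever dfs.name.dir points to, which varies by install.

```shell
# Sketch: decide whether a NameNode storage directory has been formatted.
# Hadoop 1.x writes current/VERSION into dfs.name.dir when the NameNode is
# formatted; the argument is whatever dfs.name.dir points to on your host.
is_formatted() {
  if [ -f "$1/current/VERSION" ]; then
    echo "formatted"
  else
    echo "not formatted"
  fi
}

# e.g. is_formatted /hadoop/hdfs/namenode   (path is illustrative)
```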



Re: NameNode is failing to start

Posted by Dustine Rene Bernasor <du...@thecyberguardian.com>.
Jira opened.

Here's the link: https://issues.apache.org/jira/browse/AMBARI-1643

On 3/15/2013 10:49 AM, Mahadev Konar wrote:
> Dustine,
>   Can you please open a jira and attach ambari-agent logs to it
> (including ambari-agent.out and ambari-agent.log on the host where
> namenode is running) ?
>
> thanks
> mahadev
>
> On Thu, Mar 14, 2013 at 7:47 PM, Dustine Rene Bernasor
> <du...@thecyberguardian.com> wrote:
>> Hello,
>>
>> I am already using 1.2.1.
>>
>>
>> On 3/15/2013 10:43 AM, Mahadev Konar wrote:
>>> Dustine,
>>>    What version of Ambari are you running? There is a bug in 1.2.0 which
>>> causes this issue. If that's the case, you can upgrade to
>>> 1.2.1 (which is currently under vote).
>>>
>>>
>>> http://incubator.apache.org/ambari/1.2.1/installing-hadoop-using-ambari/content/ambari-chap1.html
>>>
>>> Has instructions!
>>>
>>> thanks
>>> mahadev
>>>
>>> On Thu, Mar 14, 2013 at 7:15 PM, Dustine Rene Bernasor
>>> <du...@thecyberguardian.com> wrote:
>>>> Here's the result
>>>>
>>>>    "href" : "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*",
>>>>     "HostRoles" : {
>>>>       "configs" : { },
>>>>
>>>>       "cluster_name" : "BigData",
>>>>       "desired_configs" : { },
>>>>       "desired_state" : "STARTED",
>>>>       "state" : "START_FAILED",
>>>>
>>>>       "component_name" : "NAMENODE",
>>>>       "host_name" : "Crawler51.localdomain.com"
>>>>     },
>>>>     "host" : {
>>>>       "href" : "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com"
>>>>     },
>>>>     "component" : [
>>>>       {
>>>>         "href" : "http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE",
>>>>         "ServiceComponentInfo" : {
>>>>           "cluster_name" : "BigData",
>>>>
>>>>           "component_name" : "NAMENODE",
>>>>           "service_name" : "HDFS"
>>>>         }
>>>>       }
>>>>     ]
>>>>
>>>>
>>>>
>>>> On 3/15/2013 1:11 AM, Mahadev Konar wrote:
>>>>> To get more information can you run one more api command?
>>>>>
>>>>> curl -u admin:admin http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*
>>>>>
>>>>> thanks
>>>>> mahadev
>>>>>
>>>>>
>>>>> On Thu, Mar 14, 2013 at 12:55 AM, Dustine Rene Bernasor
>>>>> <du...@thecyberguardian.com> wrote:
>>>>>> Oops. I didn't notice.
>>>>>>
>>>>>> Anyway, here's the result
>>>>>>
>>>>>> {
>>>>>>      "href" : "http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE?fields=*",
>>>>>>      "metrics" : {
>>>>>>        "boottime" : 0,
>>>>>>        "process" : {
>>>>>>          "proc_total" : 0,
>>>>>>          "proc_run" : 0
>>>>>>        },
>>>>>>        "ugi" : {
>>>>>>          "loginSuccess_num_ops" : 0,
>>>>>>          "loginFailure_num_ops" : 0,
>>>>>>          "loginSuccess_avg_time" : 0,
>>>>>>          "loginFailure_avg_time" : 0
>>>>>>        },
>>>>>>        "dfs" : {
>>>>>>          "namenode" : {
>>>>>>            "fsImageLoadTime" : 0,
>>>>>>            "FilesRenamed" : 0,
>>>>>>            "JournalTransactionsBatchedInSync" : 0,
>>>>>>            "SafemodeTime" : 0,
>>>>>>            "FilesDeleted" : 0,
>>>>>>            "DeleteFileOps" : 0,
>>>>>>            "FilesAppended" : 0
>>>>>>          }
>>>>>>        },
>>>>>>        "disk" : {
>>>>>>          "disk_total" : 0,
>>>>>>          "disk_free" : 0,
>>>>>>          "part_max_used" : 0
>>>>>>        },
>>>>>>        "cpu" : {
>>>>>>          "cpu_speed" : 0,
>>>>>>          "cpu_num" : 0,
>>>>>>          "cpu_wio" : 0,
>>>>>>          "cpu_idle" : 0,
>>>>>>          "cpu_nice" : 0,
>>>>>>          "cpu_aidle" : 0,
>>>>>>          "cpu_system" : 0,
>>>>>>          "cpu_user" : 0
>>>>>>        },
>>>>>>        "rpcdetailed" : {
>>>>>>          "delete_avg_time" : 0,
>>>>>>          "rename_avg_time" : 0,
>>>>>>          "register_num_ops" : 0,
>>>>>>          "versionRequest_num_ops" : 0,
>>>>>>          "blocksBeingWrittenReport_avg_time" : 0,
>>>>>>          "rename_num_ops" : 0,
>>>>>>          "register_avg_time" : 0,
>>>>>>          "mkdirs_avg_time" : 0,
>>>>>>          "setPermission_num_ops" : 0,
>>>>>>          "delete_num_ops" : 0,
>>>>>>          "versionRequest_avg_time" : 0,
>>>>>>          "setOwner_num_ops" : 0,
>>>>>>          "setSafeMode_avg_time" : 0,
>>>>>>          "setOwner_avg_time" : 0,
>>>>>>          "setSafeMode_num_ops" : 0,
>>>>>>          "blocksBeingWrittenReport_num_ops" : 0,
>>>>>>          "setReplication_num_ops" : 0,
>>>>>>          "setPermission_avg_time" : 0,
>>>>>>          "mkdirs_num_ops" : 0,
>>>>>>          "setReplication_avg_time" : 0
>>>>>>        },
>>>>>>        "load" : {
>>>>>>          "load_fifteen" : 0,
>>>>>>          "load_one" : 0,
>>>>>>          "load_five" : 0
>>>>>>        },
>>>>>>        "network" : {
>>>>>>          "pkts_out" : 0,
>>>>>>          "bytes_in" : 0,
>>>>>>          "bytes_out" : 0,
>>>>>>          "pkts_in" : 0
>>>>>>        },
>>>>>>        "memory" : {
>>>>>>          "mem_total" : 0,
>>>>>>          "swap_free" : 0,
>>>>>>          "mem_buffers" : 0,
>>>>>>          "mem_shared" : 0,
>>>>>>          "mem_cached" : 0,
>>>>>>          "mem_free" : 0,
>>>>>>          "swap_total" : 0
>>>>>>        }
>>>>>>      },
>>>>>>      "ServiceComponentInfo" : {
>>>>>>        "cluster_name" : "BigData",
>>>>>>        "desired_configs" : { },
>>>>>>        "state" : "STARTED",
>>>>>>        "component_name" : "NAMENODE",
>>>>>>        "service_name" : "HDFS"
>>>>>>      },
>>>>>>      "host_components" : [
>>>>>>        {
>>>>>>          "href" : "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE",
>>>>>>          "HostRoles" : {
>>>>>>            "cluster_name" : "BigData",
>>>>>>            "component_name" : "NAMENODE",
>>>>>>            "host_name" : "Crawler51.localdomain.com"
>>>>>>          }
>>>>>>        }
>>>>>>      ]
>>>>>>
>>>>>> }
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 3/14/2013 3:51 PM, Mahadev Konar wrote:
>>>>>>
>>>>>> Hi Dustine,
>>>>>>     I had a typo :). Sorry, can you run:
>>>>>>
>>>>>> curl -u admin:admin http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>>>>>>
>>>>>> thanks
>>>>>> mahadev
>>>>>>
>>>>>>
>>>>>> On Mar 14, 2013, at 12:46 AM, Dustine Rene Bernasor
>>>>>> <du...@thecyberguardian.com> wrote:
>>>>>>
>>>>>> Start/Stop button's still disabled.
>>>>>>
>>>>>> Here's the result of the API call
>>>>>>
>>>>>> <html>
>>>>>> <head>
>>>>>> <meta http-equiv="Content-Type"
>>>>>> content="text/html;charset=ISO-8859-1"/>
>>>>>> <title>Error 403 Bad credentials</title>
>>>>>> </head>
>>>>>> <body>
>>>>>> <h2>HTTP ERROR: 403</h2>
>>>>>> <p>Problem accessing
>>>>>> /api/v1/clusters/BigData/services/HDFS/components/NAMENODE. Reason:
>>>>>> <pre>    Bad credentials</pre></p>
>>>>>> <hr /><i><small>Powered by Jetty://</small></i>
>>>>>>
>>>>>>
>>>>>> </body>
>>>>>> </html>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 3/14/2013 3:29 PM, Mahadev Konar wrote:
>>>>>>
>>>>>> Yes. The Start/Stop buttons should reactivate after some time (it
>>>>>> usually takes seconds) if it is the 1.2.1 release.
>>>>>>
>>>>>> If not can you make an API call to see what the status of Namenode is:
>>>>>>
>>>>>> curl -u admin:amdin http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>>>>>>
>>>>>> (see https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
>>>>>> for more details on the APIs)
>>>>>>
>>>>>> mahadev
>>>>>>
>>>>>> On Mar 14, 2013, at 12:23 AM, Dustine Rene Bernasor
>>>>>> <du...@thecyberguardian.com> wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> Did you mean /var/run/hadoop/hdfs/namenode/formatted?
>>>>>>
>>>>>> I cannot restart namenode from the UI. HDFS icon keeps on blinking
>>>>>> but the Start and Stop buttons are disabled.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> Dustine
>>>>>>
>>>>>>
>>>>>> On 3/14/2013 3:17 PM, Mahadev Konar wrote:
>>>>>>
>>>>>> Hi Dustine,
>>>>>>     Are you installing on a cluster that was already installed via
>>>>>> Ambari? If yes, then remove the directory
>>>>>> /var/run/hadoop/hdfs/formatted and restart the NameNode from the UI,
>>>>>> and it should work.
>>>>>>
>>>>>>     If not, then it's a bug; please create a jira and attach logs for
>>>>>> the NameNode and the Ambari agent and server.
>>>>>>
>>>>>> thanks
>>>>>> mahadev
>>>>>>
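The host_components responses pasted above carry the useful signal in the flat "state" field (START_FAILED versus the desired STARTED). A quick way to pull it out of such a response, assuming the spaced `"key" : "value"` layout shown in the thread; a JSON-aware tool would be more robust:

```shell
# Extract the HostRoles "state" value from a host_components response on
# stdin. Relies on the '"state" : "VALUE"' spacing seen in the thread; use
# a real JSON parser for anything beyond a quick check.
hostrole_state() {
  grep -o '"state" : "[A-Za-z_]*"' | head -n 1 | sed 's/.*: "\(.*\)"/\1/'
}

# curl -u admin:admin "$url" | hostrole_state
```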


Re: NameNode is failing to start

Posted by Mahadev Konar <ma...@apache.org>.
Dustine,
 Can you please open a jira and attach ambari-agent logs to it
(including ambari-agent.out and ambari-agent.log on the host where
namenode is running) ?

thanks
mahadev

On Thu, Mar 14, 2013 at 7:47 PM, Dustine Rene Bernasor
<du...@thecyberguardian.com> wrote:
> Hello,
>
> I am already using 1.2.1.
>
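A practical note on the curl commands traded in this thread: in a shell, the trailing `?fields=*` should be quoted, otherwise the `*` may be glob-expanded before curl sees it. A small illustrative helper that assembles the host_components URL from its parts (names mirror the thread):

```shell
# Build the v1 host_components URL from its parts. Quote the result when
# passing it to curl so the shell does not glob-expand the '*'.
host_component_url() {
  server="$1"; cluster="$2"; host="$3"; component="$4"
  printf 'http://%s:8080/api/v1/clusters/%s/hosts/%s/host_components/%s?fields=*\n' \
    "$server" "$cluster" "$host" "$component"
}

# curl -u admin:admin "$(host_component_url 192.168.1.51 BigData Crawler51.localdomain.com NAMENODE)"
```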

Re: NameNode is failing to start

Posted by Dustine Rene Bernasor <du...@thecyberguardian.com>.
Hello,

I am already using 1.2.1.

On 3/15/2013 10:43 AM, Mahadev Konar wrote:
> Dustine,
>   What version of Ambari are you running? There is a bug in 1.2.0 which
> causes this issue. If that's the case, you can upgrade to
> 1.2.1 (which is currently under vote).
>
> http://incubator.apache.org/ambari/1.2.1/installing-hadoop-using-ambari/content/ambari-chap1.html
>
> Has instructions!
>
> thanks
> mahadev
>
> On Thu, Mar 14, 2013 at 7:15 PM, Dustine Rene Bernasor
> <du...@thecyberguardian.com> wrote:
>> Here's the result
>>
>>   "href" :
>> "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*",
>>    "HostRoles" : {
>>      "configs" : { },
>>
>>      "cluster_name" : "BigData",
>>      "desired_configs" : { },
>>      "desired_state" : "STARTED",
>>      "state" : "START_FAILED",
>>
>>      "component_name" : "NAMENODE",
>>      "host_name" : "Crawler51.localdomain.com"
>>    },
>>    "host" : {
>>      "href" :
>> "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com"
>>    },
>>    "component" : [
>>      {
>>        "href" :
>> "http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE",
>>        "ServiceComponentInfo" : {
>>          "cluster_name" : "BigData",
>>
>>          "component_name" : "NAMENODE",
>>          "service_name" : "HDFS"
>>        }
>>      }
>>    ]
>>
>>
>>
>> On 3/15/2013 1:11 AM, Mahadev Konar wrote:
>>> To get more information can you run one more api command?
>>>
>>> curl -u admin:admin
>>>
>>> http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*
>>>
>>> thanks
>>> mahadev
>>>
>>>
>>> On Thu, Mar 14, 2013 at 12:55 AM, Dustine Rene Bernasor
>>> <du...@thecyberguardian.com> wrote:
>>>> Ooops. I didn't notice.
>>>>
>>>> Anyway, here's the result
>>>>
>>>> {
>>>>     "href" :
>>>>
>>>> "http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE?fields=*",
>>>>     "metrics" : {
>>>>       "boottime" : 0,
>>>>       "process" : {
>>>>         "proc_total" : 0,
>>>>         "proc_run" : 0
>>>>       },
>>>>       "ugi" : {
>>>>         "loginSuccess_num_ops" : 0,
>>>>         "loginFailure_num_ops" : 0,
>>>>         "loginSuccess_avg_time" : 0,
>>>>         "loginFailure_avg_time" : 0
>>>>       },
>>>>       "dfs" : {
>>>>         "namenode" : {
>>>>           "fsImageLoadTime" : 0,
>>>>           "FilesRenamed" : 0,
>>>>           "JournalTransactionsBatchedInSync" : 0,
>>>>           "SafemodeTime" : 0,
>>>>           "FilesDeleted" : 0,
>>>>           "DeleteFileOps" : 0,
>>>>           "FilesAppended" : 0
>>>>         }
>>>>       },
>>>>       "disk" : {
>>>>         "disk_total" : 0,
>>>>         "disk_free" : 0,
>>>>         "part_max_used" : 0
>>>>       },
>>>>       "cpu" : {
>>>>         "cpu_speed" : 0,
>>>>         "cpu_num" : 0,
>>>>         "cpu_wio" : 0,
>>>>         "cpu_idle" : 0,
>>>>         "cpu_nice" : 0,
>>>>         "cpu_aidle" : 0,
>>>>         "cpu_system" : 0,
>>>>         "cpu_user" : 0
>>>>       },
>>>>       "rpcdetailed" : {
>>>>         "delete_avg_time" : 0,
>>>>         "rename_avg_time" : 0,
>>>>         "register_num_ops" : 0,
>>>>         "versionRequest_num_ops" : 0,
>>>>         "blocksBeingWrittenReport_avg_time" : 0,
>>>>         "rename_num_ops" : 0,
>>>>         "register_avg_time" : 0,
>>>>         "mkdirs_avg_time" : 0,
>>>>         "setPermission_num_ops" : 0,
>>>>         "delete_num_ops" : 0,
>>>>         "versionRequest_avg_time" : 0,
>>>>         "setOwner_num_ops" : 0,
>>>>         "setSafeMode_avg_time" : 0,
>>>>         "setOwner_avg_time" : 0,
>>>>         "setSafeMode_num_ops" : 0,
>>>>         "blocksBeingWrittenReport_num_ops" : 0,
>>>>         "setReplication_num_ops" : 0,
>>>>         "setPermission_avg_time" : 0,
>>>>         "mkdirs_num_ops" : 0,
>>>>         "setReplication_avg_time" : 0
>>>>       },
>>>>       "load" : {
>>>>         "load_fifteen" : 0,
>>>>         "load_one" : 0,
>>>>         "load_five" : 0
>>>>       },
>>>>       "network" : {
>>>>         "pkts_out" : 0,
>>>>         "bytes_in" : 0,
>>>>         "bytes_out" : 0,
>>>>         "pkts_in" : 0
>>>>       },
>>>>       "memory" : {
>>>>         "mem_total" : 0,
>>>>         "swap_free" : 0,
>>>>         "mem_buffers" : 0,
>>>>         "mem_shared" : 0,
>>>>         "mem_cached" : 0,
>>>>         "mem_free" : 0,
>>>>         "swap_total" : 0
>>>>       }
>>>>     },
>>>>     "ServiceComponentInfo" : {
>>>>       "cluster_name" : "BigData",
>>>>       "desired_configs" : { },
>>>>       "state" : "STARTED",
>>>>       "component_name" : "NAMENODE",
>>>>       "service_name" : "HDFS"
>>>>     },
>>>>     "host_components" : [
>>>>       {
>>>>         "href" :
>>>>
>>>> "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE",
>>>>         "HostRoles" : {
>>>>           "cluster_name" : "BigData",
>>>>           "component_name" : "NAMENODE",
>>>>           "host_name" : "Crawler51.localdomain.com"
>>>>         }
>>>>       }
>>>>     ]
>>>>
>>>> }
>>>>
>>>>
>>>>
>>>>
>>>> On 3/14/2013 3:51 PM, Mahadev Konar wrote:
>>>>
>>>> Hi Dustine,
>>>>    I had a typo :). Sorry, can you run:
>>>>
>>>> curl -u admin:admin
>>>>
>>>> http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>>>>
>>>> thanks
>>>> mahadev
>>>>
>>>>
>>>> On Mar 14, 2013, at 12:46 AM, Dustine Rene Bernasor
>>>> <du...@thecyberguardian.com> wrote:
>>>>
>>>> Start/Stop button's still disabled.
>>>>
>>>> Here's the result of the API call
>>>>
>>>> <html>
>>>> <head>
>>>> <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
>>>> <title>Error 403 Bad credentials</title>
>>>> </head>
>>>> <body>
>>>> <h2>HTTP ERROR: 403</h2>
>>>> <p>Problem accessing
>>>> /api/v1/clusters/BigData/services/HDFS/components/NAMENODE. Reason:
>>>> <pre>    Bad credentials</pre></p>
>>>> <hr /><i><small>Powered by Jetty://</small></i>
>>>>
>>>>
>>>> </body>
>>>> </html>
>>>>
>>>>
>>>>
>>>> On 3/14/2013 3:29 PM, Mahadev Konar wrote:
>>>>
>>>> Yes. The start stop button should re activate is some time (usually takes
>>>> seconds) if it is 1.2.1 release.
>>>>
>>>> If not can you make an API call to see what the status of Namenode is:
>>>>
>>>> curl -u admin:amdin
>>>>
>>>> http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>>>>
>>>> (see
>>>>
>>>> https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
>>>> for more details on API's)
>>>>
>>>> mahadev
>>>>
>>>> On Mar 14, 2013, at 12:23 AM, Dustine Rene Bernasor
>>>> <du...@thecyberguardian.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> Did you mean /var/run/hadoop/hdfs/namenode/formatted?
>>>>
>>>> I cannot restart namenode from the UI. HDFS icon keeps on blinking
>>>> but the Start and Stop buttons are disabled.
>>>>
>>>> Thanks.
>>>>
>>>> Dustine
>>>>
>>>>
>>>> On 3/14/2013 3:17 PM, Mahadev Konar wrote:
>>>>
>>>> Hi Dustine,
>>>>    Are you installing on a cluster that was already installed via
>>>> Ambari? If yes, then remove the directory
>>>> /var/run/hadoop/hdfs/formatted and restart the NameNode from the UI,
>>>> and it should work.
>>>>
>>>>    If not, then it's a bug; please create a JIRA and attach the logs
>>>> for the NameNode and the Ambari agent and server.
>>>>
>>>> thanks
>>>> mahadev
>>>>
>>>> On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
>>>> <du...@thecyberguardian.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> I was installing Ambari 1.2.1. When I reach step 9, after the services
>>>> are installed, NameNode cannot be started.
>>>>
>>>> The following exception appeared in the log:
>>>>
>>>> 2013-03-14 10:58:00,426 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>> 2013-03-14 10:58:00,427 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>> NameNode is not formatted.
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>>
>>>> 2013-03-14 10:58:00,428 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at
>>>> Crawler51.localdomain.com/192.168.3.51
>>>> ************************************************************/
>>>>
>>>> Thanks.
>>>>
>>>> Dustine
>>>>
>>>>
>>>>
>>>>
>>>>


Re: NameNode is failing to start

Posted by Mahadev Konar <ma...@apache.org>.
Dustine,
 What version of Ambari are you running? There is a bug in 1.2.0 that
causes this issue. If that's the case, you can upgrade to 1.2.1
(which is currently under vote).

http://incubator.apache.org/ambari/1.2.1/installing-hadoop-using-ambari/content/ambari-chap1.html

That page has the instructions.

thanks
mahadev

On Thu, Mar 14, 2013 at 7:15 PM, Dustine Rene Bernasor
<du...@thecyberguardian.com> wrote:
> Here's the result
>
>  "href" :
> "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*",
>   "HostRoles" : {
>     "configs" : { },
>
>     "cluster_name" : "BigData",
>     "desired_configs" : { },
>     "desired_state" : "STARTED",
>     "state" : "START_FAILED",
>
>     "component_name" : "NAMENODE",
>     "host_name" : "Crawler51.localdomain.com"
>   },
>   "host" : {
>     "href" :
> "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com"
>   },
>   "component" : [
>     {
>       "href" :
> "http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE",
>       "ServiceComponentInfo" : {
>         "cluster_name" : "BigData",
>
>         "component_name" : "NAMENODE",
>         "service_name" : "HDFS"
>       }
>     }
>   ]
>

Re: NameNode is failing to start

Posted by Dustine Rene Bernasor <du...@thecyberguardian.com>.
Here's the result

  "href" : 
"http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*",
   "HostRoles" : {
     "configs" : { },
     "cluster_name" : "BigData",
     "desired_configs" : { },
     "desired_state" : "STARTED",
     "state" : "START_FAILED",
     "component_name" : "NAMENODE",
     "host_name" : "Crawler51.localdomain.com"
   },
   "host" : {
     "href" : 
"http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com"
   },
   "component" : [
     {
       "href" : 
"http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE",
       "ServiceComponentInfo" : {
         "cluster_name" : "BigData",
         "component_name" : "NAMENODE",
         "service_name" : "HDFS"
       }
     }
   ]

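[Editorial note: the response above shows the telling pair: desired_state is STARTED while state is START_FAILED. For anyone scripting against this, a minimal Python sketch (not part of the thread; field names taken from the API v1 response above) that flags such a mismatch:]

```python
import json

def state_mismatch(host_component):
    """Return (desired, actual) when a host component is not in its
    desired state, or None when the two states agree."""
    roles = host_component["HostRoles"]
    desired, actual = roles["desired_state"], roles["state"]
    return None if desired == actual else (desired, actual)

# Abridged from the response quoted above.
response = json.loads("""
{
  "HostRoles" : {
    "cluster_name" : "BigData",
    "desired_state" : "STARTED",
    "state" : "START_FAILED",
    "component_name" : "NAMENODE",
    "host_name" : "Crawler51.localdomain.com"
  }
}
""")

print(state_mismatch(response))  # -> ('STARTED', 'START_FAILED')
```

[A mismatch like this is what keeps the UI Start/Stop buttons disabled; the v1 API also accepts a PUT of `{"HostRoles": {"state": "STARTED"}}` against the same host_components URL to request another start, though retrying from the UI is the usual route.]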

On 3/15/2013 1:11 AM, Mahadev Konar wrote:
> To get more information, can you run one more API command?
>
> curl -u admin:admin
> http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*
>
> thanks
> mahadev


Re: NameNode is failing to start

Posted by Mahadev Konar <ma...@hortonworks.com>.
To get more information, can you run one more API command?

curl -u admin:admin
http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE?fields=*

thanks
mahadev
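
[Editorial note: the curl call above is easy to script as well. A minimal Python sketch, purely illustrative; the helper name is hypothetical, and the server, cluster, and host names are the ones from this thread:]

```python
def host_component_url(server, cluster, host, component, fields="*"):
    """Build the API v1 host_components URL that the curl call above hits.
    fields="*" asks the API to expand every field in the response."""
    return ("http://%s/api/v1/clusters/%s/hosts/%s/host_components/%s?fields=%s"
            % (server, cluster, host, component, fields))

url = host_component_url("192.168.1.51:8080", "BigData",
                         "Crawler51.localdomain.com", "NAMENODE")
print(url)
# Issuing it is then one call, e.g. with the `requests` library (assumed
# available): requests.get(url, auth=("admin", "admin"))
```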


On Thu, Mar 14, 2013 at 12:55 AM, Dustine Rene Bernasor
<du...@thecyberguardian.com> wrote:
> Ooops. I didn't notice.
>
> Anyway, here's the result
>
> {
>   "href" :
> "http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE?fields=*",
>   "metrics" : {
>     "boottime" : 0,
>     "process" : {
>       "proc_total" : 0,
>       "proc_run" : 0
>     },
>     "ugi" : {
>       "loginSuccess_num_ops" : 0,
>       "loginFailure_num_ops" : 0,
>       "loginSuccess_avg_time" : 0,
>       "loginFailure_avg_time" : 0
>     },
>     "dfs" : {
>       "namenode" : {
>         "fsImageLoadTime" : 0,
>         "FilesRenamed" : 0,
>         "JournalTransactionsBatchedInSync" : 0,
>         "SafemodeTime" : 0,
>         "FilesDeleted" : 0,
>         "DeleteFileOps" : 0,
>         "FilesAppended" : 0
>       }
>     },
>     "disk" : {
>       "disk_total" : 0,
>       "disk_free" : 0,
>       "part_max_used" : 0
>     },
>     "cpu" : {
>       "cpu_speed" : 0,
>       "cpu_num" : 0,
>       "cpu_wio" : 0,
>       "cpu_idle" : 0,
>       "cpu_nice" : 0,
>       "cpu_aidle" : 0,
>       "cpu_system" : 0,
>       "cpu_user" : 0
>     },
>     "rpcdetailed" : {
>       "delete_avg_time" : 0,
>       "rename_avg_time" : 0,
>       "register_num_ops" : 0,
>       "versionRequest_num_ops" : 0,
>       "blocksBeingWrittenReport_avg_time" : 0,
>       "rename_num_ops" : 0,
>       "register_avg_time" : 0,
>       "mkdirs_avg_time" : 0,
>       "setPermission_num_ops" : 0,
>       "delete_num_ops" : 0,
>       "versionRequest_avg_time" : 0,
>       "setOwner_num_ops" : 0,
>       "setSafeMode_avg_time" : 0,
>       "setOwner_avg_time" : 0,
>       "setSafeMode_num_ops" : 0,
>       "blocksBeingWrittenReport_num_ops" : 0,
>       "setReplication_num_ops" : 0,
>       "setPermission_avg_time" : 0,
>       "mkdirs_num_ops" : 0,
>       "setReplication_avg_time" : 0
>     },
>     "load" : {
>       "load_fifteen" : 0,
>       "load_one" : 0,
>       "load_five" : 0
>     },
>     "network" : {
>       "pkts_out" : 0,
>       "bytes_in" : 0,
>       "bytes_out" : 0,
>       "pkts_in" : 0
>     },
>     "memory" : {
>       "mem_total" : 0,
>       "swap_free" : 0,
>       "mem_buffers" : 0,
>       "mem_shared" : 0,
>       "mem_cached" : 0,
>       "mem_free" : 0,
>       "swap_total" : 0
>     }
>   },
>   "ServiceComponentInfo" : {
>     "cluster_name" : "BigData",
>     "desired_configs" : { },
>     "state" : "STARTED",
>     "component_name" : "NAMENODE",
>     "service_name" : "HDFS"
>   },
>   "host_components" : [
>     {
>       "href" :
> "http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE",
>       "HostRoles" : {
>         "cluster_name" : "BigData",
>         "component_name" : "NAMENODE",
>         "host_name" : "Crawler51.localdomain.com"
>       }
>     }
>   ]
>
> }
>

Re: NameNode is failing to start

Posted by Dustine Rene Bernasor <du...@thecyberguardian.com>.
Ooops. I didn't notice.

Anyway, here's the result

{
   "href" : 
"http://192.168.1.51:8080/api/v1/clusters/BigData/services/HDFS/components/NAMENODE?fields=*",
   "metrics" : {
     "boottime" : 0,
     "process" : {
       "proc_total" : 0,
       "proc_run" : 0
     },
     "ugi" : {
       "loginSuccess_num_ops" : 0,
       "loginFailure_num_ops" : 0,
       "loginSuccess_avg_time" : 0,
       "loginFailure_avg_time" : 0
     },
     "dfs" : {
       "namenode" : {
         "fsImageLoadTime" : 0,
         "FilesRenamed" : 0,
         "JournalTransactionsBatchedInSync" : 0,
         "SafemodeTime" : 0,
         "FilesDeleted" : 0,
         "DeleteFileOps" : 0,
         "FilesAppended" : 0
       }
     },
     "disk" : {
       "disk_total" : 0,
       "disk_free" : 0,
       "part_max_used" : 0
     },
     "cpu" : {
       "cpu_speed" : 0,
       "cpu_num" : 0,
       "cpu_wio" : 0,
       "cpu_idle" : 0,
       "cpu_nice" : 0,
       "cpu_aidle" : 0,
       "cpu_system" : 0,
       "cpu_user" : 0
     },
     "rpcdetailed" : {
       "delete_avg_time" : 0,
       "rename_avg_time" : 0,
       "register_num_ops" : 0,
       "versionRequest_num_ops" : 0,
       "blocksBeingWrittenReport_avg_time" : 0,
       "rename_num_ops" : 0,
       "register_avg_time" : 0,
       "mkdirs_avg_time" : 0,
       "setPermission_num_ops" : 0,
       "delete_num_ops" : 0,
       "versionRequest_avg_time" : 0,
       "setOwner_num_ops" : 0,
       "setSafeMode_avg_time" : 0,
       "setOwner_avg_time" : 0,
       "setSafeMode_num_ops" : 0,
       "blocksBeingWrittenReport_num_ops" : 0,
       "setReplication_num_ops" : 0,
       "setPermission_avg_time" : 0,
       "mkdirs_num_ops" : 0,
       "setReplication_avg_time" : 0
     },
     "load" : {
       "load_fifteen" : 0,
       "load_one" : 0,
       "load_five" : 0
     },
     "network" : {
       "pkts_out" : 0,
       "bytes_in" : 0,
       "bytes_out" : 0,
       "pkts_in" : 0
     },
     "memory" : {
       "mem_total" : 0,
       "swap_free" : 0,
       "mem_buffers" : 0,
       "mem_shared" : 0,
       "mem_cached" : 0,
       "mem_free" : 0,
       "swap_total" : 0
     }
   },
   "ServiceComponentInfo" : {
     "cluster_name" : "BigData",
     "desired_configs" : { },
     "state" : "STARTED",
     "component_name" : "NAMENODE",
     "service_name" : "HDFS"
   },
   "host_components" : [
     {
       "href" : 
"http://192.168.1.51:8080/api/v1/clusters/BigData/hosts/Crawler51.localdomain.com/host_components/NAMENODE",
       "HostRoles" : {
         "cluster_name" : "BigData",
         "component_name" : "NAMENODE",
         "host_name" : "Crawler51.localdomain.com"
       }
     }
   ]
}
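
For later readers: the field Mahadev's API call is after is "state" under "ServiceComponentInfo" in the JSON above. A minimal sketch of pulling it out of a saved response; the namenode.json filename and the trimmed-down JSON are illustrative stand-ins for the full payload:

```shell
# Trimmed, illustrative copy of the relevant part of the API response above;
# the real payload is the full JSON shown in this message.
cat > namenode.json <<'EOF'
{"ServiceComponentInfo": {"cluster_name": "BigData", "state": "STARTED",
 "component_name": "NAMENODE", "service_name": "HDFS"}}
EOF
# Extract the component state with a small Python one-liner.
state=$(python3 -c "import json; print(json.load(open('namenode.json'))['ServiceComponentInfo']['state'])")
echo "NAMENODE state: $state"   # prints: NAMENODE state: STARTED
```

If the state reported here is STARTED while the UI still shows the service as stopped, that points at a UI/heartbeat issue rather than the NameNode itself.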




On 3/14/2013 3:51 PM, Mahadev Konar wrote:
> Hi Dustine,
>  I had a typo :). Sorry, can you run:
>
> curl -u admin:*admin* http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>
> thanks
> mahadev
>
>
> On Mar 14, 2013, at 12:46 AM, Dustine Rene Bernasor <dustine@thecyberguardian.com> wrote:
>
>> The Start/Stop buttons are still disabled.
>>
>> Here's the result of the API call
>>
>> <html>
>> <head>
>> <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
>> <title>Error 403 Bad credentials</title>
>> </head>
>> <body>
>> <h2>HTTP ERROR: 403</h2>
>> <p>Problem accessing 
>> /api/v1/clusters/BigData/services/HDFS/components/NAMENODE. Reason:
>> <pre>    Bad credentials</pre></p>
>> <hr /><i><small>Powered by Jetty://</small></i>
>>
>>
>> </body>
>> </html>
>>
>>
>>
>> On 3/14/2013 3:29 PM, Mahadev Konar wrote:
>>> Yes. The Start/Stop buttons should reactivate in some time (usually
>>> takes seconds) if it is the 1.2.1 release.
>>>
>>> If not can you make an API call to see what the status of Namenode is:
>>>
>>> curl -u admin:amdin 
>>> http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>>>
>>> (see 
>>> https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md 
>>> for more details on the APIs)
>>>
>>> mahadev
>>>
>>> On Mar 14, 2013, at 12:23 AM, Dustine Rene Bernasor <dustine@thecyberguardian.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> Did you mean /var/run/hadoop/hdfs/namenode/formatted?
>>>>
>>>> I cannot restart namenode from the UI. HDFS icon keeps on blinking
>>>> but the Start and Stop buttons are disabled.
>>>>
>>>> Thanks.
>>>>
>>>> Dustine
>>>>
>>>>
>>>> On 3/14/2013 3:17 PM, Mahadev Konar wrote:
>>>>> Hi Dustine,
>>>>>  Are you installing on a cluster that was already installed via
>>>>> Ambari? If yes, then remove the directory
>>>>> /var/run/hadoop/hdfs/formatted and restart the namenode from the UI
>>>>> and it should work.
>>>>>
>>>>>  If not, then it's a bug; please create a jira and attach logs for
>>>>> the NameNode and the Ambari agent and server.
>>>>>
>>>>> thanks
>>>>> mahadev
>>>>>
>>>>> On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
>>>>> <dustine@thecyberguardian.com> wrote:
>>>>>> Hello,
>>>>>>
>>>>>> I was installing Ambari 1.2.1. When I reached step 9, after the
>>>>>> services were installed, the NameNode could not be started.
>>>>>>
>>>>>> The following exception appeared in the log:
>>>>>>
>>>>>> 2013-03-14 10:58:00,426 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>> initialization failed.
>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>>>> 2013-03-14 10:58:00,427 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>> NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>>>>
>>>>>> 2013-03-14 10:58:00,428 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at
>>>>>> Crawler51.localdomain.com/192.168.3.51
>>>>>> ************************************************************/
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> Dustine
>>>>>>
>>>>>>
>>
>


Re: NameNode is failing to start

Posted by Mahadev Konar <ma...@hortonworks.com>.
Hi Dustine,
 I had a typo :). Sorry, can you run:

curl -u admin:admin http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*

thanks
mahadev


On Mar 14, 2013, at 12:46 AM, Dustine Rene Bernasor <du...@thecyberguardian.com> wrote:

> The Start/Stop buttons are still disabled.
> 
> Here's the result of the API call
> 
> <html>
> <head>
> <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
> <title>Error 403 Bad credentials</title>
> </head>
> <body>
> <h2>HTTP ERROR: 403</h2>
> <p>Problem accessing /api/v1/clusters/BigData/services/HDFS/components/NAMENODE. Reason:
> <pre>    Bad credentials</pre></p>
> <hr /><i><small>Powered by Jetty://</small></i>
> 
> 
> </body>
> </html>
> 
> 
> 
> On 3/14/2013 3:29 PM, Mahadev Konar wrote:
>> Yes. The Start/Stop buttons should reactivate in some time (usually takes seconds) if it is the 1.2.1 release.
>> 
>> If not can you make an API call to see what the status of Namenode is:
>> 
>> curl -u admin:amdin http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>> 
>> (see https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md for more details on the APIs)
>> 
>> mahadev
>> 
>> On Mar 14, 2013, at 12:23 AM, Dustine Rene Bernasor <du...@thecyberguardian.com> wrote:
>> 
>>> Hello,
>>> 
>>> Did you mean /var/run/hadoop/hdfs/namenode/formatted?
>>> 
>>> I cannot restart namenode from the UI. HDFS icon keeps on blinking
>>> but the Start and Stop buttons are disabled.
>>> 
>>> Thanks.
>>> 
>>> Dustine
>>> 
>>> 
>>> On 3/14/2013 3:17 PM, Mahadev Konar wrote:
>>>> Hi Dustine,
>>>>  Are you installing on a cluster that was already installed via
>>>> Ambari? If yes, then remove the directory
>>>> /var/run/hadoop/hdfs/formatted and restart the namenode from the UI
>>>> and it should work.
>>>> 
>>>>  If not, then it's a bug; please create a jira and attach logs for
>>>> the NameNode and the Ambari agent and server.
>>>> 
>>>> thanks
>>>> mahadev
>>>> 
>>>> On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
>>>> <du...@thecyberguardian.com> wrote:
>>>>> Hello,
>>>>> 
>>>>> I was installing Ambari 1.2.1. When I reached step 9, after the
>>>>> services were installed, the NameNode could not be started.
>>>>>
>>>>> The following exception appeared in the log:
>>>>> 
>>>>> 2013-03-14 10:58:00,426 ERROR
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>> initialization failed.
>>>>> java.io.IOException: NameNode is not formatted.
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>>> 2013-03-14 10:58:00,427 ERROR
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>> NameNode is not formatted.
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>>> 
>>>>> 2013-03-14 10:58:00,428 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>> /************************************************************
>>>>> SHUTDOWN_MSG: Shutting down NameNode at
>>>>> Crawler51.localdomain.com/192.168.3.51
>>>>> ************************************************************/
>>>>> 
>>>>> Thanks.
>>>>> 
>>>>> Dustine
>>>>> 
>>>>> 
> 
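
A side note for anyone hitting the same 403: `curl -u` simply base64-encodes the user:password pair into a Basic Authorization header, so you can reproduce locally exactly what is being sent and sanity-check the credentials string without touching the Ambari server. A sketch; admin:admin is the default pair discussed in this thread:

```shell
# Reconstruct the Authorization header that `curl -u admin:admin` sends.
# No server needed; this only shows what goes over the wire.
creds="admin:admin"
auth="Basic $(printf '%s' "$creds" | base64)"
echo "$auth"   # prints: Basic YWRtaW46YWRtaW4=
```

A mistyped password (like "amdin") produces a different header, which the server rejects with the "Bad credentials" page shown above.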


Re: NameNode is failing to start

Posted by Dustine Rene Bernasor <du...@thecyberguardian.com>.
The Start/Stop buttons are still disabled.

Here's the result of the API call

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 403 Bad credentials</title>
</head>
<body>
<h2>HTTP ERROR: 403</h2>
<p>Problem accessing 
/api/v1/clusters/BigData/services/HDFS/components/NAMENODE. Reason:
<pre>    Bad credentials</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>


</body>
</html>



On 3/14/2013 3:29 PM, Mahadev Konar wrote:
> Yes. The Start/Stop buttons should reactivate in some time (usually takes seconds) if it is the 1.2.1 release.
>
> If not can you make an API call to see what the status of Namenode is:
>
> curl -u admin:amdin http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*
>
> (see https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md for more details on the APIs)
>
> mahadev
>
> On Mar 14, 2013, at 12:23 AM, Dustine Rene Bernasor <du...@thecyberguardian.com> wrote:
>
>> Hello,
>>
>> Did you mean /var/run/hadoop/hdfs/namenode/formatted?
>>
>> I cannot restart namenode from the UI. HDFS icon keeps on blinking
>> but the Start and Stop buttons are disabled.
>>
>> Thanks.
>>
>> Dustine
>>
>>
>> On 3/14/2013 3:17 PM, Mahadev Konar wrote:
>>> Hi Dustine,
>>>   Are you installing on a cluster that was already installed via
>>> Ambari? If yes, then remove the directory
>>> /var/run/hadoop/hdfs/formatted and restart the namenode from the UI
>>> and it should work.
>>>
>>>   If not, then it's a bug; please create a jira and attach logs for
>>> the NameNode and the Ambari agent and server.
>>>
>>> thanks
>>> mahadev
>>>
>>> On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
>>> <du...@thecyberguardian.com> wrote:
>>>> Hello,
>>>>
>>>> I was installing Ambari 1.2.1. When I reached step 9, after the
>>>> services were installed, the NameNode could not be started.
>>>>
>>>> The following exception appeared in the log:
>>>>
>>>> 2013-03-14 10:58:00,426 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>> 2013-03-14 10:58:00,427 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>> NameNode is not formatted.
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>>          at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>>>
>>>> 2013-03-14 10:58:00,428 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at
>>>> Crawler51.localdomain.com/192.168.3.51
>>>> ************************************************************/
>>>>
>>>> Thanks.
>>>>
>>>> Dustine
>>>>
>>>>


Re: NameNode is failing to start

Posted by Mahadev Konar <ma...@hortonworks.com>.
Yes. The Start/Stop buttons should reactivate in some time (usually takes seconds) if it is the 1.2.1 release.

If not can you make an API call to see what the status of Namenode is:

curl -u admin:amdin http://<ambari-server>:8080/api/v1/clusters/<clustername>/services/HDFS/components/NAMENODE?fields=*

(see https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md for more details on the APIs)

mahadev

On Mar 14, 2013, at 12:23 AM, Dustine Rene Bernasor <du...@thecyberguardian.com> wrote:

> Hello,
> 
> Did you mean /var/run/hadoop/hdfs/namenode/formatted?
> 
> I cannot restart namenode from the UI. HDFS icon keeps on blinking
> but the Start and Stop buttons are disabled.
> 
> Thanks.
> 
> Dustine
> 
> 
> On 3/14/2013 3:17 PM, Mahadev Konar wrote:
>> Hi Dustine,
>>  Are you installing on a cluster that was already installed via
>> Ambari? If yes, then remove the directory
>> /var/run/hadoop/hdfs/formatted and restart the namenode from the UI
>> and it should work.
>> 
>>  If not, then it's a bug; please create a jira and attach logs for
>> the NameNode and the Ambari agent and server.
>> 
>> thanks
>> mahadev
>> 
>> On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
>> <du...@thecyberguardian.com> wrote:
>>> Hello,
>>> 
>>> I was installing Ambari 1.2.1. When I reached step 9, after the
>>> services were installed, the NameNode could not be started.
>>>
>>> The following exception appeared in the log:
>>> 
>>> 2013-03-14 10:58:00,426 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>> initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>> 2013-03-14 10:58:00,427 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>> 
>>> 2013-03-14 10:58:00,428 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at
>>> Crawler51.localdomain.com/192.168.3.51
>>> ************************************************************/
>>> 
>>> Thanks.
>>> 
>>> Dustine
>>> 
>>> 
> 


Re: NameNode is failing to start

Posted by Dustine Rene Bernasor <du...@thecyberguardian.com>.
Hello,

Did you mean /var/run/hadoop/hdfs/namenode/formatted?

I cannot restart namenode from the UI. HDFS icon keeps on blinking
but the Start and Stop buttons are disabled.

Thanks.

Dustine


On 3/14/2013 3:17 PM, Mahadev Konar wrote:
> Hi Dustine,
>   Are you installing on a cluster that was already installed via
> Ambari? If yes, then remove the directory
> /var/run/hadoop/hdfs/formatted and restart the namenode from the UI
> and it should work.
>
>   If not, then it's a bug; please create a jira and attach logs for
> the NameNode and the Ambari agent and server.
>
> thanks
> mahadev
>
> On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
> <du...@thecyberguardian.com> wrote:
>> Hello,
>>
>> I was installing Ambari 1.2.1. When I reached step 9, after the
>> services were installed, the NameNode could not be started.
>>
>> The following exception appeared in the log:
>>
>> 2013-03-14 10:58:00,426 ERROR
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>> initialization failed.
>> java.io.IOException: NameNode is not formatted.
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>> 2013-03-14 10:58:00,427 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> NameNode is not formatted.
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>>          at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>>
>> 2013-03-14 10:58:00,428 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at
>> Crawler51.localdomain.com/192.168.3.51
>> ************************************************************/
>>
>> Thanks.
>>
>> Dustine
>>
>>


Re: NameNode is failing to start

Posted by Mahadev Konar <ma...@hortonworks.com>.
Hi Dustine,
 Are you installing on a cluster that was already installed via
Ambari? If yes, then remove the directory
/var/run/hadoop/hdfs/formatted and restart the namenode from the UI
and it should work.

 If not, then it's a bug; please create a jira and attach logs for
the NameNode and the Ambari agent and server.

thanks
mahadev

On Thu, Mar 14, 2013 at 12:13 AM, Dustine Rene Bernasor
<du...@thecyberguardian.com> wrote:
> Hello,
>
> I was installing Ambari 1.2.1. When I reached step 9, after the
> services were installed, the NameNode could not be started.
>
> The following exception appeared in the log:
>
> 2013-03-14 10:58:00,426 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: NameNode is not formatted.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
> 2013-03-14 10:58:00,427 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> NameNode is not formatted.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:287)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:548)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1431)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1440)
>
> 2013-03-14 10:58:00,428 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at
> Crawler51.localdomain.com/192.168.3.51
> ************************************************************/
>
> Thanks.
>
> Dustine
>
>
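
Pulling the thread together: the recovery Mahadev describes comes down to removing the "formatted" marker and restarting the NameNode from the Ambari UI, which (per this thread) lets Ambari run the format it previously skipped. A dry-run sketch; the marker path is quoted from his reply, and DRY_RUN should only be flipped on a cluster where reformatting the NameNode is acceptable, such as a fresh install:

```shell
# Dry-run sketch of the fix discussed above: per this thread, Ambari treats
# this marker directory as "already formatted", so removing it and restarting
# the NameNode from the UI lets the format happen. DRY_RUN=1 keeps this harmless.
DRY_RUN=1
MARKER=/var/run/hadoop/hdfs/formatted   # marker path quoted from the reply above
if [ "$DRY_RUN" = "1" ]; then
    echo "would remove: $MARKER"
else
    rm -rf "$MARKER"   # then restart the NameNode from the Ambari UI
fi
```

Note the open question in the thread about whether the marker lives at /var/run/hadoop/hdfs/formatted or /var/run/hadoop/hdfs/namenode/formatted; check which one exists on the NameNode host before removing anything.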