Posted to common-user@hadoop.apache.org by ra...@accenture.com on 2012/06/04 13:07:24 UTC

SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Hello. I'm facing an issue when trying to configure my SecondaryNameNode on a different machine than my NameNode. When both are on the same machine everything works fine, but after moving the secondary to a new machine I get:

2012-05-28 09:57:36,832 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection refused
2012-05-28 09:57:36,832 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2012-05-28 09:57:36,834 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
        at java.net.Socket.connect(Socket.java:546)
        at java.net.Socket.connect(Socket.java:495)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)

Is there any configuration I'm missing? At this point my mapred-site.xml is very simple, just:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop00:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/hadoop/mapred/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/mapred/local</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allocation.file</name>
    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
  </property>
</configuration>
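[Editor's note: the setting that turned out to matter in this thread lives in hdfs-site.xml, not mapred-site.xml. A minimal sketch for the SecondaryNameNode host, using the Hadoop 1.x property names and the hostnames/ports mentioned later in the thread (hadoop00 as NameNode, hadoop01 as SNN; adjust for your cluster):]

```xml
<!-- hdfs-site.xml on the SecondaryNameNode host (sketch; hostnames are examples) -->
<configuration>
  <property>
    <!-- Address of the NameNode's HTTP server, which serves the fsimage/edits
         the SNN fetches during a checkpoint (the NN web UI port, 50070 by default) -->
    <name>dfs.http.address</name>
    <value>hadoop00:50070</value>
  </property>
  <property>
    <!-- Bind the SNN's own HTTP server to a concrete address instead of 0.0.0.0 -->
    <name>dfs.secondary.http.address</name>
    <value>hadoop01:50090</value>
  </property>
</configuration>
```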



________________________________
Subject to local law, communications with Accenture and its affiliates including telephone calls and emails (including content), may be monitored by our systems for the purposes of security and the assessment of internal compliance with Accenture policy.
______________________________________________________________________________________

www.accenture.com

Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by praveenesh kumar <pr...@gmail.com>.
I would say not to use 127.0.0.1 in distributed mode. Comment out the first
2 lines of your /etc/hosts.

Rather have your /etc/hosts file like this -

Suppose you are on hadoop00 -- there /etc/hosts would look like

192.168.0.10 hadoop00 localhost
192.168.0.11 hadoop01
192.168.0.12 hadoop02

On hadoop01 -

192.168.0.10 hadoop00
192.168.0.11 hadoop01 localhost
192.168.0.12 hadoop02

On hadoop02 -

192.168.0.10 hadoop00
192.168.0.11 hadoop01
192.168.0.12 hadoop02 localhost

In this way, localhost will be mapped to the actual IP and hostname, not to
127.0.0.1.

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:42 PM, <ra...@accenture.com> wrote:

> /etc/hosts
>
> 127.0.0.1               localhost.localdomain localhost
> ::1             localhost6.localdomain6 localhost6
> 192.168.0.10 hadoop00
> 192.168.0.11 hadoop01
> 192.168.0.12 hadoop02
>

RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by ra...@accenture.com.
/etc/hosts

127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.0.10 hadoop00
192.168.0.11 hadoop01
192.168.0.12 hadoop02



Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by shashwat shriparv <dw...@gmail.com>.
Did you configure dfs.namenode.secondary.http-address in hdfs-site.xml?
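[Editor's note: that is the Hadoop 2.x name for the property; in Hadoop 1.x, the version used in this thread, the equivalent is dfs.secondary.http.address. A minimal sketch, with hadoop01 (the SNN host from this thread) as an example value:]

```xml
<!-- Hadoop 2.x-style property name; in 1.x the equivalent is dfs.secondary.http.address -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>hadoop01:50090</value>
</property>
```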

On Mon, Jun 4, 2012 at 7:53 PM, <ra...@accenture.com> wrote:

> Right. Silly mistake... Now using 50070 and IT WORKS!!!
>
> Thx a lot Praveenesh. I will replicate this solution to my real cluster.
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: Monday, June 4, 2012 14:25
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> It's trying to connect to your NN on port 50030. I think it should be
> 50070. In your hdfs-site.xml, for dfs.http.address, I am assuming you
> have given hadoop00:50070, right?
>
> Regards,
> Praveenesh
>
> On Mon, Jun 4, 2012 at 5:50 PM, <ra...@accenture.com> wrote:
>
> > Now I see the SNN machine name in the logs. It still refuses to connect
> > to the NN, but now I get a different message:
> >
> > PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException:
> > http://hadoop00:50030/getimage?getimage=1
> >
> > Maybe something is missing in my NN configuration?
> >
> > 12/06/04 14:13:08 INFO namenode.SecondaryNameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting SecondaryNameNode
> > STARTUP_MSG:   host = hadoop01/192.168.0.11
> > STARTUP_MSG:   args = [-checkpoint, force]
> > STARTUP_MSG:   version = 1.0.3
> > STARTUP_MSG:   build =
> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> > 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > ************************************************************/
> > 12/06/04 14:13:09 INFO namenode.SecondaryNameNode: Starting web server
> as:
> > hadoop
> > 12/06/04 14:13:09 INFO mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 12/06/04 14:13:09 INFO http.HttpServer: Added global filtersafety
> > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > 12/06/04 14:13:10 INFO http.HttpServer: Port returned by
> > webServer.getConnectors()[0].getLocalPort() before open() is -1.
> > Opening the listener on 50090
> > 12/06/04 14:13:10 INFO http.HttpServer: listener.getLocalPort()
> > returned
> > 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> > 12/06/04 14:13:10 INFO http.HttpServer: Jetty bound to port 50090
> > 12/06/04 14:13:10 INFO mortbay.log: jetty-6.1.26
> > 12/06/04 14:13:10 INFO mortbay.log: Started
> > SelectChannelConnector@hadoop01
> > :50090
> > 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Web server init
> > done
> > 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary
> > Web-server up
> > at: hadoop01:50090
> > 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary image
> > servlet up at: hadoop01:50090
> > 12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Checkpoint Period
> > :3600 secs (60 min)
> > 12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Log Size Trigger
> >  :67108864 bytes (65536 KB)
> > 12/06/04 14:13:10 ERROR security.UserGroupInformation:
> > PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException:
> > http://hadoop00:50030/getimage?getimage=1
> > 12/06/04 14:13:10 ERROR namenode.SecondaryNameNode: checkpoint:
> > http://hadoop00:50030/getimage?getimage=1
> > 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> > ************************************************************/
> >
> > -----Original Message-----
> > From: Pin, Ramón
> > Sent: Monday, June 4, 2012 14:12
> > To: common-user@hadoop.apache.org
> > Subject: RE: SecondaryNameNode not connecting to NameNode :
> > PriviledgedActionException
> >
> > /etc/hosts
> >
> > 127.0.0.1               localhost.localdomain localhost
> > ::1             localhost6.localdomain6 localhost6
> > 192.168.0.10 hadoop00
> > 192.168.0.11 hadoop01
> > 192.168.0.12 hadoop02
> >
> > -----Original Message-----
> > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > Sent: lunes, 04 de junio de 2012 14:09
> > To: common-user@hadoop.apache.org
> > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > PriviledgedActionException
> >
> > Also can you share your /etc/hosts file of both the VMs
> >
> > Regards,
> > Praveenesh
> >
> > On Mon, Jun 4, 2012 at 5:35 PM, <ra...@accenture.com> wrote:
> >
> > > Right. No firewalls. This is my 'toy' environment running as virtual
> > > machines on my desktop computer. I'm playing with this here because
> > > have the same problem on my real cluster. Will try to explicitly
> > > configure starting IP for this SNN.
> > >
> > > -----Original Message-----
> > > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > > Sent: lunes, 04 de junio de 2012 14:02
> > > To: common-user@hadoop.apache.org
> > > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > > PriviledgedActionException
> > >
> > > Try giving value to dfs.secondary.http.address in hdfs-site.xml on
> > > your SNN.
> > > In your logs, its starting SNN webserver at 0.0.0.0:50090. Its
> > > better if we provide which IP it should start at.
> > > Also I am assuming you are not having any firewalls enable between
> > > these 2 machines right ?
> > >
> > > Regards,
> > > Praveenesh
> > >
> > > On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:
> > >
> > > > I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
> > > >
> > > > /************************************************************
> > > > STARTUP_MSG: Starting SecondaryNameNode
> > > > STARTUP_MSG:   host = hadoop01/192.168.0.11
> > > > STARTUP_MSG:   args = [-checkpoint, force]
> > > > STARTUP_MSG:   version = 1.0.3
> > > > STARTUP_MSG:   build =
> > > > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0
> > > > -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > > > ************************************************************/
> > > > 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web
> > > > server
> > > as:
> > > > hadoop
> > > > 12/06/04 13:34:24 INFO mortbay.log: Logging to
> > > > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > > > org.mortbay.log.Slf4jLog
> > > > 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> > > > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > > > 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> > > > webServer.getConnectors()[0].getLocalPort() before open() is -1.
> > > > Opening the listener on 50090
> > > > 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort()
> > > > returned
> > > > 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> > > > 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> > > > 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> > > > 12/06/04 13:34:25 INFO mortbay.log: Started
> > > > SelectChannelConnector@0.0.0.0:50090
> > > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init
> > > > done
> > > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary
> > > > Web-server up
> > > > at: 0.0.0.0:50090
> > > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image
> > > > servlet up at: 0.0.0.0:50090
> > > > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint
> > > > Period
> > > > :3600 secs (60 min)
> > > > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size
> > > > Trigger
> > > >  :67108864 bytes (65536 KB)
> > > > 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> > > > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > > > Connection refused
> > > > 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint:
> > > > Connection refused
> > > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> > > > /************************************************************
> > > > SHUTDOWN_MSG: Shutting down SecondaryNameNode at
> > > > hadoop01/192.168.0.11
> > > > ************************************************************/
> > > >
> > > > -----Original Message-----
> > > > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > > > Sent: lunes, 04 de junio de 2012 13:15
> > > > To: common-user@hadoop.apache.org
> > > > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > > > PriviledgedActionException
> > > >
> > > > I am not sure what the exact issue could be, but when configuring a
> > > > secondary NN, you need to tell your SNN where the actual NN resides.
> > > > Try adding dfs.http.address on your secondary namenode machine,
> > > > with the value <NN:port>, in hdfs-site.xml. The port should be the
> > > > one your NN URL opens on, i.e. your NN web UI HTTP port.
> > > >
> > > > Regards,
> > > > Praveenesh
> > > > On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:
> > > >
> > > > > Hello. I'm facing an issue when trying to configure my
> > > > > SecondaryNameNode on a different machine than my NameNode. When
> > > > > both are on the same machine everything works fine, but after
> > > > > moving the secondary to a new machine I get:
> > > > >
> > > > > 2012-05-28 09:57:36,832 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection refused
> > > > > 2012-05-28 09:57:36,832 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
> > > > > 2012-05-28 09:57:36,834 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.net.ConnectException: Connection refused
> > > > >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> > > > >        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
> > > > >        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
> > > > >        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
> > > > >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> > > > >        at java.net.Socket.connect(Socket.java:546)
> > > > >        at java.net.Socket.connect(Socket.java:495)
> > > > >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> > > > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> > > > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> > > > >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> > > > >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> > > > >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> > > > >        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
> > > > >        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
> > > > >        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
> > > > >
> > > > > Is there any configuration I'm missing? At this point my
> > > > > mapred-site.xml is very simple just:
> > > > >
> > > > > <?xml version="1.0"?>
> > > > > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > > > > <configuration>
> > > > >  <property>
> > > > >    <name>mapred.job.tracker</name>
> > > > >    <value>hadoop00:9001</value>
> > > > >  </property>
> > > > >  <property>
> > > > >    <name>mapred.system.dir</name>
> > > > >    <value>/home/hadoop/mapred/system</value>
> > > > >  </property>
> > > > >  <property>
> > > > >    <name>mapred.local.dir</name>
> > > > >    <value>/home/hadoop/mapred/local</value>
> > > > >  </property>
> > > > >  <property>
> > > > >    <name>mapred.jobtracker.taskScheduler</name>
> > > > >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> > > > >  </property>
> > > > >  <property>
> > > > >    <name>mapred.fairscheduler.allocation.file</name>
> > > > >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> > > > >  </property>
> > > > > </configuration>
> > > > >
> > > > >
> > > > >
> > > > > ________________________________
> > > > > Subject to local law, communications with Accenture and its
> > > > > affiliates, including telephone calls and emails (including
> > > > > content), may be monitored by our systems for the purposes of
> > > > > security and the assessment of internal compliance with Accenture
> > > > > policy.
> > > > >
> > > > > www.accenture.com
> > > > >
> > > >


-- 


∞
Shashwat Shriparv

RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by ra...@accenture.com.
Right. Silly mistake.... Now using 50070 and IT WORKS!!!

Thanks a lot, Praveenesh. I will replicate this solution to my real cluster.
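The working setup described in this reply boils down to an hdfs-site.xml fragment like the following on the SecondaryNameNode machine (hadoop00 and port 50070 are the NameNode host and web UI port used in this thread; substitute your own):

```xml
<property>
  <!-- Where the SNN fetches the fsimage/edits from: the NN's HTTP address -->
  <name>dfs.http.address</name>
  <value>hadoop00:50070</value>
</property>
```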

-----Original Message-----
From: praveenesh kumar [mailto:praveenesh@gmail.com]
Sent: lunes, 04 de junio de 2012 14:25
To: common-user@hadoop.apache.org
Subject: Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

It's trying to connect to your NN on port 50030; I think it should be 50070. In your hdfs-site.xml, for dfs.http.address, I am assuming you have given hadoop01:50070, right?

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:50 PM, <ra...@accenture.com> wrote:

> Now I see the SNN machine name in the logs. It still refuses to connect
> to the NN, but now I get a different message:
>
> PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException:
> http://hadoop00:50030/getimage?getimage=1
>
> Maybe something is missing in my NN configuration?
>
> 12/06/04 14:13:08 INFO namenode.SecondaryNameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting SecondaryNameNode
> STARTUP_MSG:   host = hadoop01/192.168.0.11
> STARTUP_MSG:   args = [-checkpoint, force]
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> ************************************************************/
> 12/06/04 14:13:09 INFO namenode.SecondaryNameNode: Starting web server as:
> hadoop
> 12/06/04 14:13:09 INFO mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 12/06/04 14:13:09 INFO http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/06/04 14:13:10 INFO http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 50090
> 12/06/04 14:13:10 INFO http.HttpServer: listener.getLocalPort()
> returned
> 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> 12/06/04 14:13:10 INFO http.HttpServer: Jetty bound to port 50090
> 12/06/04 14:13:10 INFO mortbay.log: jetty-6.1.26
> 12/06/04 14:13:10 INFO mortbay.log: Started
> SelectChannelConnector@hadoop01
> :50090
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Web server init
> done
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary
> Web-server up
> at: hadoop01:50090
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary image
> servlet up at: hadoop01:50090
> 12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Checkpoint Period
> :3600 secs (60 min)
> 12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Log Size Trigger
>  :67108864 bytes (65536 KB)
> 12/06/04 14:13:10 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException:
> http://hadoop00:50030/getimage?getimage=1
> 12/06/04 14:13:10 ERROR namenode.SecondaryNameNode: checkpoint:
> http://hadoop00:50030/getimage?getimage=1
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> ************************************************************/
>
> -----Original Message-----
> From: Pin, Ramón
> Sent: lunes, 04 de junio de 2012 14:12
> To: common-user@hadoop.apache.org
> Subject: RE: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> /etc/hosts
>
> 127.0.0.1               localhost.localdomain localhost
> ::1             localhost6.localdomain6 localhost6
> 192.168.0.10 hadoop00
> 192.168.0.11 hadoop01
> 192.168.0.12 hadoop02
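The /etc/hosts entries above can be sanity-checked mechanically. A minimal sketch (the parser and its helper name are illustrative, not from the thread; the sample lines are the ones posted above):

```python
def parse_hosts(text):
    """Parse /etc/hosts-style text into {hostname: ip}, ignoring comments."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name] = ip  # every alias on the line maps to the IP
    return mapping

hosts = parse_hosts("""
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.0.10 hadoop00
192.168.0.11 hadoop01
192.168.0.12 hadoop02
""")
print(hosts["hadoop00"])  # -> 192.168.0.10
```

If the NN and SNN boxes disagree on these mappings, the SNN can end up dialing the wrong address, which shows up as exactly this kind of "Connection refused".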
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: lunes, 04 de junio de 2012 14:09
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> Also, can you share the /etc/hosts files of both VMs?
>
> Regards,
> Praveenesh
>
> On Mon, Jun 4, 2012 at 5:35 PM, <ra...@accenture.com> wrote:
>
> > Right. No firewalls. This is my 'toy' environment running as virtual
> > machines on my desktop computer. I'm playing with this here because I
> > have the same problem on my real cluster. I will try to explicitly
> > configure the bind IP for this SNN.
> >
> > -----Original Message-----
> > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > Sent: lunes, 04 de junio de 2012 14:02
> > To: common-user@hadoop.apache.org
> > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > PriviledgedActionException
> >
> > Try giving a value to dfs.secondary.http.address in hdfs-site.xml on
> > your SNN.
> > In your logs, the SNN web server is starting at 0.0.0.0:50090; it is
> > better to specify which IP it should bind to.
> > Also, I am assuming you do not have any firewalls enabled between
> > these 2 machines, right?
> >
> > Regards,
> > Praveenesh
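The binding suggestion above would look roughly like this in the SNN's hdfs-site.xml (hadoop01 is the SNN host in this thread; 50090 is the stock SNN HTTP port, assumed here):

```xml
<property>
  <!-- Bind the SNN web server to a concrete host instead of 0.0.0.0 -->
  <name>dfs.secondary.http.address</name>
  <value>hadoop01:50090</value>
</property>
```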
> >
> > On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:
> >
> > > I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
> > >
> > > /************************************************************
> > > STARTUP_MSG: Starting SecondaryNameNode
> > > STARTUP_MSG:   host = hadoop01/192.168.0.11
> > > STARTUP_MSG:   args = [-checkpoint, force]
> > > STARTUP_MSG:   version = 1.0.3
> > > STARTUP_MSG:   build =
> > > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0
> > > -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > > ************************************************************/
> > > 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web
> > > server
> > as:
> > > hadoop
> > > 12/06/04 13:34:24 INFO mortbay.log: Logging to
> > > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > > org.mortbay.log.Slf4jLog
> > > 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> > > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > > 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> > > webServer.getConnectors()[0].getLocalPort() before open() is -1.
> > > Opening the listener on 50090
> > > 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort()
> > > returned
> > > 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> > > 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> > > 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> > > 12/06/04 13:34:25 INFO mortbay.log: Started
> > > SelectChannelConnector@0.0.0.0:50090
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init
> > > done
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary
> > > Web-server up
> > > at: 0.0.0.0:50090
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image
> > > servlet up at: 0.0.0.0:50090
> > > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint
> > > Period
> > > :3600 secs (60 min)
> > > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size
> > > Trigger
> > >  :67108864 bytes (65536 KB)
> > > 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> > > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > > Connection refused
> > > 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint:
> > > Connection refused
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> > > /************************************************************
> > > SHUTDOWN_MSG: Shutting down SecondaryNameNode at
> > > hadoop01/192.168.0.11
> > > ************************************************************/


Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by praveenesh kumar <pr...@gmail.com>.
It's trying to connect to your NN on port 50030; I think it should be 50070. In your hdfs-site.xml, for dfs.http.address, I am assuming you have given hadoop01:50070, right?

Regards,
Praveenesh
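The mix-up diagnosed here is easy to see next to the stock Hadoop 1.x web UI ports. A small sketch (the port table reflects 1.x defaults, and the helper function is illustrative, not part of Hadoop):

```python
# Default HTTP (web UI) ports for Hadoop 1.x daemons.
HADOOP_1X_HTTP_PORTS = {
    "namenode": 50070,            # dfs.http.address -- what the SNN must point at
    "secondarynamenode": 50090,   # dfs.secondary.http.address
    "jobtracker": 50030,          # JobTracker web UI -- the wrong target here
    "datanode": 50075,
    "tasktracker": 50060,
}

def checkpoint_image_url(nn_host, port=HADOOP_1X_HTTP_PORTS["namenode"]):
    """Build the URL the SecondaryNameNode fetches during a checkpoint."""
    return f"http://{nn_host}:{port}/getimage?getimage=1"

print(checkpoint_image_url("hadoop00"))  # -> http://hadoop00:50070/getimage?getimage=1
```

Pointing dfs.http.address at 50030 sends the getimage request to the JobTracker's web server, which has no such servlet, hence the FileNotFoundException in the earlier log.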

On Mon, Jun 4, 2012 at 5:50 PM, <ra...@accenture.com> wrote:

> Now I see SNN machine name on the logs. Still refuses to connect to NN but
> now got a diferent message:
>
> PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException:
> http://hadoop00:50030/getimage?getimage=1
>
> May be something is missing on my NN configuration?
>
> 12/06/04 14:13:08 INFO namenode.SecondaryNameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting SecondaryNameNode
> STARTUP_MSG:   host = hadoop01/192.168.0.11
> STARTUP_MSG:   args = [-checkpoint, force]
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> ************************************************************/
> 12/06/04 14:13:09 INFO namenode.SecondaryNameNode: Starting web server as:
> hadoop
> 12/06/04 14:13:09 INFO mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 12/06/04 14:13:09 INFO http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/06/04 14:13:10 INFO http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening
> the listener on 50090
> 12/06/04 14:13:10 INFO http.HttpServer: listener.getLocalPort() returned
> 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> 12/06/04 14:13:10 INFO http.HttpServer: Jetty bound to port 50090
> 12/06/04 14:13:10 INFO mortbay.log: jetty-6.1.26
> 12/06/04 14:13:10 INFO mortbay.log: Started SelectChannelConnector@hadoop01
> :50090
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Web server init done
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary Web-server up
> at: hadoop01:50090
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary image servlet
> up at: hadoop01:50090
> 12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Checkpoint Period
> :3600 secs (60 min)
> 12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Log Size Trigger
>  :67108864 bytes (65536 KB)
> 12/06/04 14:13:10 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException:
> http://hadoop00:50030/getimage?getimage=1
> 12/06/04 14:13:10 ERROR namenode.SecondaryNameNode: checkpoint:
> http://hadoop00:50030/getimage?getimage=1
> 12/06/04 14:13:10 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> ************************************************************/
>
> -----Original Message-----
> From: Pin, Ramón
> Sent: lunes, 04 de junio de 2012 14:12
> To: common-user@hadoop.apache.org
> Subject: RE: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> /etc/hosts
>
> 127.0.0.1               localhost.localdomain localhost
> ::1             localhost6.localdomain6 localhost6
> 192.168.0.10 hadoop00
> 192.168.0.11 hadoop01
> 192.168.0.12 hadoop02
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: lunes, 04 de junio de 2012 14:09
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> Also can you share your /etc/hosts file of both the VMs
>
> Regards,
> Praveenesh
>
> On Mon, Jun 4, 2012 at 5:35 PM, <ra...@accenture.com> wrote:
>
> > Right. No firewalls. This is my 'toy' environment running as virtual
> > machines on my desktop computer. I'm playing with this here because
> > have the same problem on my real cluster. Will try to explicitly
> > configure starting IP for this SNN.
> >
> > -----Original Message-----
> > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > Sent: lunes, 04 de junio de 2012 14:02
> > To: common-user@hadoop.apache.org
> > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > PriviledgedActionException
> >
> > Try giving value to dfs.secondary.http.address in hdfs-site.xml on
> > your SNN.
> > In your logs, its starting SNN webserver at 0.0.0.0:50090. Its better
> > if we provide which IP it should start at.
> > Also I am assuming you are not having any firewalls enable between
> > these 2 machines right ?
> >
> > Regards,
> > Praveenesh
> >
> > On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:
> >
> > > I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
> > >
> > > /************************************************************
> > > STARTUP_MSG: Starting SecondaryNameNode
> > > STARTUP_MSG:   host = hadoop01/192.168.0.11
> > > STARTUP_MSG:   args = [-checkpoint, force]
> > > STARTUP_MSG:   version = 1.0.3
> > > STARTUP_MSG:   build =
> > > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0
> > > -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > > ************************************************************/
> > > 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web
> > > server
> > as:
> > > hadoop
> > > 12/06/04 13:34:24 INFO mortbay.log: Logging to
> > > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > > org.mortbay.log.Slf4jLog
> > > 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> > > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > > 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> > > webServer.getConnectors()[0].getLocalPort() before open() is -1.
> > > Opening the listener on 50090
> > > 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort()
> > > returned
> > > 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> > > 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> > > 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> > > 12/06/04 13:34:25 INFO mortbay.log: Started
> > > SelectChannelConnector@0.0.0.0:50090
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init
> > > done
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary
> > > Web-server up
> > > at: 0.0.0.0:50090
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image
> > > servlet up at: 0.0.0.0:50090
> > > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period
> > > :3600 secs (60 min)
> > > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger
> > >  :67108864 bytes (65536 KB)
> > > 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> > > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > > Connection refused
> > > 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint:
> > > Connection refused
> > > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> > > /************************************************************
> > > SHUTDOWN_MSG: Shutting down SecondaryNameNode at
> > > hadoop01/192.168.0.11
> > > ************************************************************/
> > >
> > > -----Original Message-----
> > > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > > Sent: lunes, 04 de junio de 2012 13:15
> > > To: common-user@hadoop.apache.org
> > > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > > PriviledgedActionException
> > >
> > > I am not sure what could be the exact issue but when configuring
> > > secondary NN to NN, you need to tell your SNN where the actual NN
> > resides.
> > > Try adding - dfs.http.address on your secondary namenode machine
> > > having value as <NN:port> on hdfs-site.xml Port should be on which
> > > your NN url is opening - means your NN web browser http port.
> > >
> > > Regards,
> > > Praveenesh
> > > On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:
> > >
> > > > Hello. I'm facing a issue when trying to configure my
> > > > SecondaryNameNode on a different machine than my NameNode. When
> > > > both are on the same machine everything works fine but after
> > > > moving the
> > > secondary to a new machine I get:
> > > >
> > > > 2012-05-28 09:57:36,832 ERROR
> > > > org.apache.hadoop.security.UserGroupInformation:
> > > > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > > > Connection refused
> > > > 2012-05-28 09:57:36,832 ERROR
> > > > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > > > Exception in
> > > > doCheckpoint:
> > > > 2012-05-28 09:57:36,834 ERROR
> > > > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > > > java.net.ConnectException: Connection refused
> > > >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> > > >        at
> > > >
> > > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.j
> > > av
> > > a:327)
> > > >        at
> > > >
> > > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocke
> > > tI
> > > mpl.java:191)
> > > >        at
> > > >
> > > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:
> > > 180)
> > > >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> > > >        at java.net.Socket.connect(Socket.java:546)
> > > >        at java.net.Socket.connect(Socket.java:495)
> > > >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> > > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> > > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> > > >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> > > >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> > > >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> > > >        at
> > > >
> > > sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURL
> > > Co
> > > nnection.java:935)
> > > >        at
> > > >
> > > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConn
> > > ec
> > > tion.java:876)
> > > >        at
> > > >
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.
> > > > java:801)
> > > >
> > > > Is there any configuration I'm missing? At this point my
> > > > mapred-site.xml is very simple just:
> > > >
> > > > <?xml version="1.0"?>
> > > > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > > > <configuration>  <property>
> > > >    <name>mapred.job.tracker</name>
> > > >    <value>hadoop00:9001</value>
> > > >  </property>
> > > >  <property>
> > > >    <name>mapred.system.dir</name>
> > > >    <value>/home/hadoop/mapred/system</value>
> > > >  </property>
> > > >  <property>
> > > >    <name>mapred.local.dir</name>
> > > >    <value>/home/hadoop/mapred/local</value>
> > > >  </property>
> > > >  <property>
> > > >    <name>mapred.jobtracker.taskScheduler</name>
> > > >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> > > >  </property>
> > > >  <property>
> > > >    <name>mapred.fairscheduler.allocation.file</name>
> > > >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> > > >  </property>
> > > > </configuration>
> > > >
> > > >
> > > >
> > > > ________________________________
> > > > Subject to local law, communications with Accenture and its
> > > > affiliates including telephone calls and emails (including
> > > > content), may be monitored by our systems for the purposes of
> > > > security and the assessment of internal compliance with Accenture
> policy.
> > > >
> > > > ______________________________________________________________________________________
> > > >
> > > > www.accenture.com
> > > >
> > >
> >
>

RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by ra...@accenture.com.
Now I see the SNN machine name in the logs. It still refuses to connect to the NN, but now I get a different message:

PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: http://hadoop00:50030/getimage?getimage=1

Maybe something is missing in my NN configuration?

12/06/04 14:13:08 INFO namenode.SecondaryNameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting SecondaryNameNode
STARTUP_MSG:   host = hadoop01/192.168.0.11
STARTUP_MSG:   args = [-checkpoint, force]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
12/06/04 14:13:09 INFO namenode.SecondaryNameNode: Starting web server as: hadoop
12/06/04 14:13:09 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/06/04 14:13:09 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/06/04 14:13:10 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50090
12/06/04 14:13:10 INFO http.HttpServer: listener.getLocalPort() returned 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
12/06/04 14:13:10 INFO http.HttpServer: Jetty bound to port 50090
12/06/04 14:13:10 INFO mortbay.log: jetty-6.1.26
12/06/04 14:13:10 INFO mortbay.log: Started SelectChannelConnector@hadoop01:50090
12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Web server init done
12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary Web-server up at: hadoop01:50090
12/06/04 14:13:10 INFO namenode.SecondaryNameNode: Secondary image servlet up at: hadoop01:50090
12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Checkpoint Period   :3600 secs (60 min)
12/06/04 14:13:10 WARN namenode.SecondaryNameNode: Log Size Trigger    :67108864 bytes (65536 KB)
12/06/04 14:13:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: http://hadoop00:50030/getimage?getimage=1
12/06/04 14:13:10 ERROR namenode.SecondaryNameNode: checkpoint: http://hadoop00:50030/getimage?getimage=1
12/06/04 14:13:10 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
************************************************************/
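Looking closer at that URL: port 50030 is the JobTracker web UI port, not the NameNode's. If I understand the defaults correctly, the SNN fetches the image from the NN web UI port (50070 by default), so I probably need dfs.http.address on the SNN to point there instead. Something along these lines in the SNN's hdfs-site.xml (assuming my NN web UI really is on the default port):

```xml
<!-- hdfs-site.xml on the SNN (hadoop01).
     Assumes the NameNode web UI on hadoop00 listens on the default
     port 50070, not 50030 (which is the JobTracker web UI port). -->
<property>
  <name>dfs.http.address</name>
  <value>hadoop00:50070</value>
</property>
```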

-----Original Message-----
From: Pin, Ramón
Sent: lunes, 04 de junio de 2012 14:12
To: common-user@hadoop.apache.org
Subject: RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

/etc/hosts

127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.0.10 hadoop00
192.168.0.11 hadoop01
192.168.0.12 hadoop02

-----Original Message-----
From: praveenesh kumar [mailto:praveenesh@gmail.com]
Sent: lunes, 04 de junio de 2012 14:09
To: common-user@hadoop.apache.org
Subject: Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Also can you share your /etc/hosts file of both the VMs

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:35 PM, <ra...@accenture.com> wrote:

> Right. No firewalls. This is my 'toy' environment running as virtual
> machines on my desktop computer. I'm playing with this here because
> have the same problem on my real cluster. Will try to explicitly
> configure starting IP for this SNN.
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: lunes, 04 de junio de 2012 14:02
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> Try giving value to dfs.secondary.http.address in hdfs-site.xml on
> your SNN.
> In your logs, its starting SNN webserver at 0.0.0.0:50090. Its better
> if we provide which IP it should start at.
> Also I am assuming you are not having any firewalls enable between
> these 2 machines right ?
>
> Regards,
> Praveenesh
>
> On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:
>
> > I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
> >
> > /************************************************************
> > STARTUP_MSG: Starting SecondaryNameNode
> > STARTUP_MSG:   host = hadoop01/192.168.0.11
> > STARTUP_MSG:   args = [-checkpoint, force]
> > STARTUP_MSG:   version = 1.0.3
> > STARTUP_MSG:   build =
> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0
> > -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > ************************************************************/
> > 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web
> > server
> as:
> > hadoop
> > 12/06/04 13:34:24 INFO mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> > webServer.getConnectors()[0].getLocalPort() before open() is -1.
> > Opening the listener on 50090
> > 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort()
> > returned
> > 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> > 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> > 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> > 12/06/04 13:34:25 INFO mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:50090
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init
> > done
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary
> > Web-server up
> > at: 0.0.0.0:50090
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image
> > servlet up at: 0.0.0.0:50090
> > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period
> > :3600 secs (60 min)
> > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger
> >  :67108864 bytes (65536 KB)
> > 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > Connection refused
> > 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint:
> > Connection refused
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down SecondaryNameNode at
> > hadoop01/192.168.0.11
> > ************************************************************/
> >
> > -----Original Message-----
> > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > Sent: lunes, 04 de junio de 2012 13:15
> > To: common-user@hadoop.apache.org
> > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > PriviledgedActionException
> >
> > I am not sure what could be the exact issue but when configuring
> > secondary NN to NN, you need to tell your SNN where the actual NN
> resides.
> > Try adding - dfs.http.address on your secondary namenode machine
> > having value as <NN:port> on hdfs-site.xml Port should be on which
> > your NN url is opening - means your NN web browser http port.
> >
> > Regards,
> > Praveenesh
> > On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:
> >
> > > Hello. I'm facing a issue when trying to configure my
> > > SecondaryNameNode on a different machine than my NameNode. When
> > > both are on the same machine everything works fine but after
> > > moving the
> > secondary to a new machine I get:
> > >
> > > 2012-05-28 09:57:36,832 ERROR
> > > org.apache.hadoop.security.UserGroupInformation:
> > > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > > Connection refused
> > > 2012-05-28 09:57:36,832 ERROR
> > > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > > Exception in
> > > doCheckpoint:
> > > 2012-05-28 09:57:36,834 ERROR
> > > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > > java.net.ConnectException: Connection refused
> > >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> > >        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
> > >        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
> > >        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
> > >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> > >        at java.net.Socket.connect(Socket.java:546)
> > >        at java.net.Socket.connect(Socket.java:495)
> > >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> > >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> > >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> > >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> > >        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
> > >        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
> > >        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
> > >
> > > Is there any configuration I'm missing? At this point my
> > > mapred-site.xml is very simple just:
> > >
> > > <?xml version="1.0"?>
> > > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > > <configuration>  <property>
> > >    <name>mapred.job.tracker</name>
> > >    <value>hadoop00:9001</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.system.dir</name>
> > >    <value>/home/hadoop/mapred/system</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.local.dir</name>
> > >    <value>/home/hadoop/mapred/local</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.jobtracker.taskScheduler</name>
> > >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.fairscheduler.allocation.file</name>
> > >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> > >  </property>
> > > </configuration>
> > >
> > >
> > >
> >
>



Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by praveenesh kumar <pr...@gmail.com>.
Also can you share your /etc/hosts file of both the VMs

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:35 PM, <ra...@accenture.com> wrote:

> Right. No firewalls. This is my 'toy' environment running as virtual
> machines on my desktop computer. I'm playing with this here because have
> the same problem on my real cluster. Will try to explicitly configure
> starting IP for this SNN.
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: lunes, 04 de junio de 2012 14:02
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> Try giving value to dfs.secondary.http.address in hdfs-site.xml on your
> SNN.
> In your logs, its starting SNN webserver at 0.0.0.0:50090. Its better if
> we provide which IP it should start at.
> Also I am assuming you are not having any firewalls enable between these 2
> machines right ?
>
> Regards,
> Praveenesh
>
> On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:
>
> > I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
> >
> > /************************************************************
> > STARTUP_MSG: Starting SecondaryNameNode
> > STARTUP_MSG:   host = hadoop01/192.168.0.11
> > STARTUP_MSG:   args = [-checkpoint, force]
> > STARTUP_MSG:   version = 1.0.3
> > STARTUP_MSG:   build =
> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> > 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > ************************************************************/
> > 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web server
> as:
> > hadoop
> > 12/06/04 13:34:24 INFO mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> > webServer.getConnectors()[0].getLocalPort() before open() is -1.
> > Opening the listener on 50090
> > 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort()
> > returned
> > 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> > 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> > 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> > 12/06/04 13:34:25 INFO mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:50090
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init
> > done
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary
> > Web-server up
> > at: 0.0.0.0:50090
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image
> > servlet up at: 0.0.0.0:50090
> > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period
> > :3600 secs (60 min)
> > 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger
> >  :67108864 bytes (65536 KB)
> > 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > Connection refused
> > 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint:
> > Connection refused
> > 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> > ************************************************************/
> >
> > -----Original Message-----
> > From: praveenesh kumar [mailto:praveenesh@gmail.com]
> > Sent: lunes, 04 de junio de 2012 13:15
> > To: common-user@hadoop.apache.org
> > Subject: Re: SecondaryNameNode not connecting to NameNode :
> > PriviledgedActionException
> >
> > I am not sure what could be the exact issue but when configuring
> > secondary NN to NN, you need to tell your SNN where the actual NN
> resides.
> > Try adding - dfs.http.address on your secondary namenode machine
> > having value as <NN:port> on hdfs-site.xml Port should be on which
> > your NN url is opening - means your NN web browser http port.
> >
> > Regards,
> > Praveenesh
> > On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:
> >
> > > Hello. I'm facing a issue when trying to configure my
> > > SecondaryNameNode on a different machine than my NameNode. When both
> > > are on the same machine everything works fine but after moving the
> > secondary to a new machine I get:
> > >
> > > 2012-05-28 09:57:36,832 ERROR
> > > org.apache.hadoop.security.UserGroupInformation:
> > > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > > Connection refused
> > > 2012-05-28 09:57:36,832 ERROR
> > > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception
> > > in
> > > doCheckpoint:
> > > 2012-05-28 09:57:36,834 ERROR
> > > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > > java.net.ConnectException: Connection refused
> > >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> > >        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
> > >        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
> > >        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
> > >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> > >        at java.net.Socket.connect(Socket.java:546)
> > >        at java.net.Socket.connect(Socket.java:495)
> > >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> > >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> > >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> > >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> > >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> > >        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
> > >        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
> > >        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
> > >
> > > Is there any configuration I'm missing? At this point my
> > > mapred-site.xml is very simple just:
> > >
> > > <?xml version="1.0"?>
> > > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > > <configuration>  <property>
> > >    <name>mapred.job.tracker</name>
> > >    <value>hadoop00:9001</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.system.dir</name>
> > >    <value>/home/hadoop/mapred/system</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.local.dir</name>
> > >    <value>/home/hadoop/mapred/local</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.jobtracker.taskScheduler</name>
> > >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> > >  </property>
> > >  <property>
> > >    <name>mapred.fairscheduler.allocation.file</name>
> > >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> > >  </property>
> > > </configuration>
> > >
> > >
> > >
> >
>

RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by ra...@accenture.com.
Right. No firewalls. This is my 'toy' environment running as virtual machines on my desktop computer. I'm playing with this here because I have the same problem on my real cluster. I will try to explicitly configure the starting IP for this SNN.

-----Original Message-----
From: praveenesh kumar [mailto:praveenesh@gmail.com]
Sent: lunes, 04 de junio de 2012 14:02
To: common-user@hadoop.apache.org
Subject: Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Try setting a value for dfs.secondary.http.address in hdfs-site.xml on your SNN.
In your logs, it's starting the SNN web server at 0.0.0.0:50090; it's better to specify which IP it should start on.
Also, I am assuming you don't have any firewalls enabled between these 2 machines, right?

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:

> I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
>
> /************************************************************
> STARTUP_MSG: Starting SecondaryNameNode
> STARTUP_MSG:   host = hadoop01/192.168.0.11
> STARTUP_MSG:   args = [-checkpoint, force]
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> ************************************************************/
> 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web server as:
> hadoop
> 12/06/04 13:34:24 INFO mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 50090
> 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort()
> returned
> 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> 12/06/04 13:34:25 INFO mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50090
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init
> done
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary
> Web-server up
> at: 0.0.0.0:50090
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image
> servlet up at: 0.0.0.0:50090
> 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period
> :3600 secs (60 min)
> 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger
>  :67108864 bytes (65536 KB)
> 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> Connection refused
> 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint:
> Connection refused
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> ************************************************************/
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: lunes, 04 de junio de 2012 13:15
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> I am not sure what could be the exact issue but when configuring
> secondary NN to NN, you need to tell your SNN where the actual NN resides.
> Try adding - dfs.http.address on your secondary namenode machine
> having value as <NN:port> on hdfs-site.xml Port should be on which
> your NN url is opening - means your NN web browser http port.
>
> Regards,
> Praveenesh
> On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:
>
> > Hello. I'm facing a issue when trying to configure my
> > SecondaryNameNode on a different machine than my NameNode. When both
> > are on the same machine everything works fine but after moving the
> secondary to a new machine I get:
> >
> > 2012-05-28 09:57:36,832 ERROR
> > org.apache.hadoop.security.UserGroupInformation:
> > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > Connection refused
> > 2012-05-28 09:57:36,832 ERROR
> > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception
> > in
> > doCheckpoint:
> > 2012-05-28 09:57:36,834 ERROR
> > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > java.net.ConnectException: Connection refused
> >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> >        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
> >        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
> >        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
> >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> >        at java.net.Socket.connect(Socket.java:546)
> >        at java.net.Socket.connect(Socket.java:495)
> >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> >        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
> >        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
> >        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
> >
> > Is there any configuration I'm missing? At this point my
> > mapred-site.xml is very simple just:
> >
> > <?xml version="1.0"?>
> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > <configuration>  <property>
> >    <name>mapred.job.tracker</name>
> >    <value>hadoop00:9001</value>
> >  </property>
> >  <property>
> >    <name>mapred.system.dir</name>
> >    <value>/home/hadoop/mapred/system</value>
> >  </property>
> >  <property>
> >    <name>mapred.local.dir</name>
> >    <value>/home/hadoop/mapred/local</value>
> >  </property>
> >  <property>
> >    <name>mapred.jobtracker.taskScheduler</name>
> >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> >  </property>
> >  <property>
> >    <name>mapred.fairscheduler.allocation.file</name>
> >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> >  </property>
> > </configuration>
> >
> >
> >
>



Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by praveenesh kumar <pr...@gmail.com>.
Try setting a value for dfs.secondary.http.address in hdfs-site.xml on your SNN.
In your logs, it's starting the SNN web server at 0.0.0.0:50090; it's better to
specify which IP it should start on.
Also, I am assuming you don't have any firewalls enabled between these 2
machines, right?
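Something along these lines in hdfs-site.xml on the SNN (host name and port here are just example values for your hadoop01 box and the default secondary web port):

```xml
<!-- hdfs-site.xml on the SNN; example values assuming the SNN host is
     hadoop01 and the default secondary web port 50090 -->
<property>
  <name>dfs.secondary.http.address</name>
  <value>hadoop01:50090</value>
</property>
```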

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:05 PM, <ra...@accenture.com> wrote:

> I configured dfs.http.address on SNN's hdfs-site.xml but still gets:
>
> /************************************************************
> STARTUP_MSG: Starting SecondaryNameNode
> STARTUP_MSG:   host = hadoop01/192.168.0.11
> STARTUP_MSG:   args = [-checkpoint, force]
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> ************************************************************/
> 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web server as:
> hadoop
> 12/06/04 13:34:24 INFO mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/06/04 13:34:24 INFO http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening
> the listener on 50090
> 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort() returned
> 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> 12/06/04 13:34:25 INFO mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50090
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init done
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary Web-server up
> at: 0.0.0.0:50090
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image servlet
> up at: 0.0.0.0:50090
> 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period
> :3600 secs (60 min)
> 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger
>  :67108864 bytes (65536 KB)
> 12/06/04 13:34:25 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> Connection refused
> 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint: Connection
> refused
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> ************************************************************/
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveenesh@gmail.com]
> Sent: Monday, 04 June 2012 13:15
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> I am not sure what the exact issue could be, but when configuring a
> secondary NN against a NN, you need to tell your SNN where the actual NN
> resides. Try adding dfs.http.address to hdfs-site.xml on your secondary
> namenode machine, with the value <NN:port>. The port should be the one
> your NN URL opens on, i.e. your NN web UI HTTP port.
>
> Regards,
> Praveenesh
> On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:
>
> > Hello. I'm facing an issue when trying to configure my
> > SecondaryNameNode on a different machine than my NameNode. When both
> > are on the same machine everything works fine, but after moving the
> > secondary to a new machine I get:
> >
> > 2012-05-28 09:57:36,832 ERROR
> > org.apache.hadoop.security.UserGroupInformation:
> > PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> > Connection refused
> > 2012-05-28 09:57:36,832 ERROR
> > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in
> > doCheckpoint:
> > 2012-05-28 09:57:36,834 ERROR
> > org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> > java.net.ConnectException: Connection refused
> >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> >        at
> >
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
> >        at
> >
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
> >        at
> >
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
> >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> >        at java.net.Socket.connect(Socket.java:546)
> >        at java.net.Socket.connect(Socket.java:495)
> >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> >        at
> >
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
> >        at
> >
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
> >        at
> > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.
> > java:801)
> >
> > Is there any configuration I'm missing? At this point my
> > mapred-site.xml is very simple just:
> >
> > <?xml version="1.0"?>
> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > <configuration>
> >  <property>
> >    <name>mapred.job.tracker</name>
> >    <value>hadoop00:9001</value>
> >  </property>
> >  <property>
> >    <name>mapred.system.dir</name>
> >    <value>/home/hadoop/mapred/system</value>
> >  </property>
> >  <property>
> >    <name>mapred.local.dir</name>
> >    <value>/home/hadoop/mapred/local</value>
> >  </property>
> >  <property>
> >    <name>mapred.jobtracker.taskScheduler</name>
> >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> >  </property>
> >  <property>
> >    <name>mapred.fairscheduler.allocation.file</name>
> >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> >  </property>
> > </configuration>
> >
> >
> >
> >
>
>
>

RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by ra...@accenture.com.
I configured dfs.http.address in the SNN's hdfs-site.xml but still get:

/************************************************************
STARTUP_MSG: Starting SecondaryNameNode
STARTUP_MSG:   host = hadoop01/192.168.0.11
STARTUP_MSG:   args = [-checkpoint, force]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web server as: hadoop
12/06/04 13:34:24 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/06/04 13:34:24 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50090
12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort() returned 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
12/06/04 13:34:25 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50090
12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init done
12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary Web-server up at: 0.0.0.0:50090
12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image servlet up at: 0.0.0.0:50090
12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period   :3600 secs (60 min)
12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger    :67108864 bytes (65536 KB)
12/06/04 13:34:25 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection refused
12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint: Connection refused
12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
************************************************************/

-----Original Message-----
From: praveenesh kumar [mailto:praveenesh@gmail.com]
Sent: Monday, 04 June 2012 13:15
To: common-user@hadoop.apache.org
Subject: Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

I am not sure what the exact issue could be, but when configuring a secondary NN against a NN, you need to tell your SNN where the actual NN resides.
Try adding dfs.http.address to hdfs-site.xml on your secondary namenode machine, with the value <NN:port>. The port should be the one your NN URL opens on, i.e. your NN web UI HTTP port.

Regards,
Praveenesh
On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:

> Hello. I'm facing an issue when trying to configure my
> SecondaryNameNode on a different machine than my NameNode. When both
> are on the same machine everything works fine, but after moving the secondary to a new machine I get:
>
> 2012-05-28 09:57:36,832 ERROR
> org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.net.ConnectException:
> Connection refused
> 2012-05-28 09:57:36,832 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in
> doCheckpoint:
> 2012-05-28 09:57:36,834 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> java.net.ConnectException: Connection refused
>        at java.net.PlainSocketImpl.socketConnect(Native Method)
>        at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
>        at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
>        at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
>        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
>        at java.net.Socket.connect(Socket.java:546)
>        at java.net.Socket.connect(Socket.java:495)
>        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
>        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
>        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
>        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
>        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
>        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
>        at
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
>        at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
>        at
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.
> java:801)
>
> Is there any configuration I'm missing? At this point my
> mapred-site.xml is very simple just:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>hadoop00:9001</value>
>  </property>
>  <property>
>    <name>mapred.system.dir</name>
>    <value>/home/hadoop/mapred/system</value>
>  </property>
>  <property>
>    <name>mapred.local.dir</name>
>    <value>/home/hadoop/mapred/local</value>
>  </property>
>  <property>
>    <name>mapred.jobtracker.taskScheduler</name>
>    <value>org.apache.hadoop.mapred.FairScheduler</value>
>  </property>
>  <property>
>    <name>mapred.fairscheduler.allocation.file</name>
>    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
>  </property>
> </configuration>
>
>
>
>



Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by praveenesh kumar <pr...@gmail.com>.
I am not sure what the exact issue could be, but when configuring a
secondary NN against a NN, you need to tell your SNN where the actual NN
resides. Try adding dfs.http.address to hdfs-site.xml on your secondary
namenode machine, with the value <NN:port>.
The port should be the one your NN URL opens on, i.e. your NN web UI
HTTP port.
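For example, in hdfs-site.xml on the SNN machine (hadoop00 as the NN host is
an assumption based on your mapred.job.tracker value; 50070 is the default NN
web UI port in Hadoop 1.x, so use whatever port your NN UI actually answers
on):

 <property>
   <name>dfs.http.address</name>
   <value>hadoop00:50070</value>
 </property>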

Regards,
Praveenesh
On Mon, Jun 4, 2012 at 4:37 PM, <ra...@accenture.com> wrote:

> Hello. I'm facing an issue when trying to configure my SecondaryNameNode on
> a different machine than my NameNode. When both are on the same machine
> everything works fine, but after moving the secondary to a new machine I get:
>
> 2012-05-28 09:57:36,832 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop cause:java.net.ConnectException: Connection refused
> 2012-05-28 09:57:36,832 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in
> doCheckpoint:
> 2012-05-28 09:57:36,834 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> java.net.ConnectException: Connection refused
>        at java.net.PlainSocketImpl.socketConnect(Native Method)
>        at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
>        at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
>        at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
>        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
>        at java.net.Socket.connect(Socket.java:546)
>        at java.net.Socket.connect(Socket.java:495)
>        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
>        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
>        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
>        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
>        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
>        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
>        at
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
>        at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
>        at
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
>
> Is there any configuration I'm missing? At this point my mapred-site.xml
> is very simple just:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>hadoop00:9001</value>
>  </property>
>  <property>
>    <name>mapred.system.dir</name>
>    <value>/home/hadoop/mapred/system</value>
>  </property>
>  <property>
>    <name>mapred.local.dir</name>
>    <value>/home/hadoop/mapred/local</value>
>  </property>
>  <property>
>    <name>mapred.jobtracker.taskScheduler</name>
>    <value>org.apache.hadoop.mapred.FairScheduler</value>
>  </property>
>  <property>
>    <name>mapred.fairscheduler.allocation.file</name>
>    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
>  </property>
> </configuration>
>
>
>
>

RE: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by ra...@accenture.com.
Now it's pointing correctly, Rajive. That was the problem. Thanks for your help.
________________________________________
From: rajive [rajive_c@yahoo.com]
Sent: Wednesday, 06 June 2012 14:01
To: common-user@hadoop.apache.org; Pin, Ramón
Subject: Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

what is dfs.https.address set to?



----- Original Message -----
> From: "ramon.pin@accenture.com" <ra...@accenture.com>
> To: core-user@hadoop.apache.org
> Cc:
> Sent: Monday, June 4, 2012 4:07 AM
> Subject: SecondaryNameNode not connecting to NameNode : PriviledgedActionException
>
> Hello. I'm facing an issue when trying to configure my SecondaryNameNode on a
> different machine than my NameNode. When both are on the same machine everything
> works fine, but after moving the secondary to a new machine I get:
>
> 2012-05-28 09:57:36,832 ERROR org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection
> refused
> 2012-05-28 09:57:36,832 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in
> doCheckpoint:
> 2012-05-28 09:57:36,834 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> java.net.ConnectException: Connection refused
>         at java.net.PlainSocketImpl.socketConnect(Native Method)
>         at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
>         at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
>         at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
>         at java.net.Socket.connect(Socket.java:546)
>         at java.net.Socket.connect(Socket.java:495)
>         at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:321)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:338)
>         at
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
>         at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
>         at
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
>
> Is there any configuration I'm missing? At this point my mapred-site.xml is
> very simple just:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>hadoop00:9001</value>
>   </property>
>   <property>
>     <name>mapred.system.dir</name>
>     <value>/home/hadoop/mapred/system</value>
>   </property>
>   <property>
>     <name>mapred.local.dir</name>
>     <value>/home/hadoop/mapred/local</value>
>   </property>
>   <property>
>     <name>mapred.jobtracker.taskScheduler</name>
>     <value>org.apache.hadoop.mapred.FairScheduler</value>
>   </property>
>   <property>
>     <name>mapred.fairscheduler.allocation.file</name>
>     <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
>   </property>
> </configuration>
>
>
>
>




Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

Posted by rajive <ra...@yahoo.com>.
what is dfs.https.address set to?
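If that property still points at a host or port where no NameNode is
listening, the SNN's checkpoint request will be refused. A sketch of what it
might look like (hadoop00 as the NN host is an assumption; 50470 is the
default NN HTTPS port in Hadoop 1.x):

 <property>
   <name>dfs.https.address</name>
   <value>hadoop00:50470</value>
 </property>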



----- Original Message -----
> From: "ramon.pin@accenture.com" <ra...@accenture.com>
> To: core-user@hadoop.apache.org
> Cc: 
> Sent: Monday, June 4, 2012 4:07 AM
> Subject: SecondaryNameNode not connecting to NameNode : PriviledgedActionException
> 
> Hello. I'm facing an issue when trying to configure my SecondaryNameNode on a
> different machine than my NameNode. When both are on the same machine everything
> works fine, but after moving the secondary to a new machine I get:
> 
> 2012-05-28 09:57:36,832 ERROR org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection 
> refused
> 2012-05-28 09:57:36,832 ERROR 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in 
> doCheckpoint:
> 2012-05-28 09:57:36,834 ERROR 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: 
> java.net.ConnectException: Connection refused
>         at java.net.PlainSocketImpl.socketConnect(Native Method)
>         at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
>         at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
>         at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
>         at java.net.Socket.connect(Socket.java:546)
>         at java.net.Socket.connect(Socket.java:495)
>         at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:321)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:338)
>         at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
>         at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
>         at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
> 
> Is there any configuration I'm missing? At this point my mapred-site.xml is 
> very simple just:
> 
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>hadoop00:9001</value>
>   </property>
>   <property>
>     <name>mapred.system.dir</name>
>     <value>/home/hadoop/mapred/system</value>
>   </property>
>   <property>
>     <name>mapred.local.dir</name>
>     <value>/home/hadoop/mapred/local</value>
>   </property>
>   <property>
>     <name>mapred.jobtracker.taskScheduler</name>
>     <value>org.apache.hadoop.mapred.FairScheduler</value>
>   </property>
>   <property>
>     <name>mapred.fairscheduler.allocation.file</name>
>     <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
>   </property>
> </configuration>
> 
> 
> 
>