Posted to dev@hbase.apache.org by "Jim Kellerman (JIRA)" <ji...@apache.org> on 2008/11/12 19:45:46 UTC

[jira] Commented: (HBASE-602) HBase Crash when network card has a IPv6 address

    [ https://issues.apache.org/jira/browse/HBASE-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12646995#action_12646995 ] 

Jim Kellerman commented on HBASE-602:
-------------------------------------

There are two places where the hlog directory name is built:
- HRegionServer.setupHLog
- master.ProcessServerShutdown constructor

If the host address is an IPv6 address, the directory name that gets built contains ':' characters, which are not legal in a Hadoop file name, and the region server fails with an error like the following:

{code}
2008-11-12 12:48:29,759 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer
: Unhandled exception. Aborting...
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: log_0:0:0:0:0:0:0:1_1226512108481_58009
        at org.apache.hadoop.fs.Path.initialize(Path.java:140)
        at org.apache.hadoop.fs.Path.<init>(Path.java:126)
        at org.apache.hadoop.fs.Path.<init>(Path.java:50)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.setupHLog(HRegionServer.java:545)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:518)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:290)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: log_0:0:0:0:0:0:0:1_1226512108481_58009
        at java.net.URI.checkPath(URI.java:1787)
        at java.net.URI.<init>(URI.java:735)
        at org.apache.hadoop.fs.Path.initialize(Path.java:137)
        ... 6 more
{code}
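
A minimal sketch of the failure and one way around it (illustrative only; sanitizeAddress is a hypothetical helper, not the fix for this issue). Hadoop's Path treats the text before the first ':' (when it appears before any '/') as a URI scheme, which is why the IPv6 literal in the log directory name is rejected with the exception above; replacing the colons before building the name avoids it:

{code}
import org.apache.hadoop.fs.Path;

public class LogDirNameSketch {
  // Hypothetical helper: make an address safe to embed in a Path component
  // by replacing the ':' characters an IPv6 literal contains.
  static String sanitizeAddress(String hostAddress) {
    return hostAddress.replace(':', '.');  // "0:0:0:0:0:0:0:1" -> "0.0.0.0.0.0.0.1"
  }

  public static void main(String[] args) {
    String addr = "0:0:0:0:0:0:0:1";       // IPv6 loopback, as in the stack trace above

    try {
      // Fails: the ':' before any '/' makes Path treat "log_0" as a URI scheme.
      new Path("log_" + addr + "_1226512108481_58009");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }

    // Works: no ':' left in the directory name.
    System.out.println(new Path("log_" + sanitizeAddress(addr) + "_1226512108481_58009"));
  }
}
{code}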


> HBase Crash when network card has a IPv6 address
> ------------------------------------------------
>
>                 Key: HBASE-602
>                 URL: https://issues.apache.org/jira/browse/HBASE-602
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 0.1.1
>         Environment: Linux, java jdk 1.5
> Network Card Address:
> eth0      Link encap:Ethernet  HWaddr 00:1E:C9:6B:2F:71  
>           inet addr:192.168.10.98  Bcast:192.168.10.255  Mask:255.255.255.0
>           inet6 addr: fe80::21e:c9ff:fe6b:2f71/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:2061472 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:1936088 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:100 
>           RX bytes:493367388 (470.5 MB)  TX bytes:160961988 (153.5 MB)
>           Base address:0xfe00 Memory:fdfc0000-fdfe0000
>            Reporter: Zhou Wei
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I've run into a problem starting up HBase.
> I set up HBase on HDFS.
> My server's network card has both an IPv4 address and an IPv6 address.
> When I first started HBase with the default configuration file,
> I found that the region server could not
> register with the master, and the logs were full of 127.0.0.1.
> So I figured the "default" interface would not work and added the following:
> <property>
>   <name>dfs.datanode.dns.interface</name>
>   <value>eth0</value>
>   <description>The name of the Network Interface from which a data node should 
>   report its IP address.
>   </description>
>  </property>
> However, once this is done the HBase master crashes,
> and I see IPv6 addresses in the log.
> So I dug into the source code
> and found that HBase fails to handle IPv6 addresses.
> The details are as follows:
> In class org.apache.hadoop.hbase.HRegionServer,
> the method getThisIP() invokes a method from a class in the Hadoop-core package.
> The class is org.apache.hadoop.net.DNS
> and the method is getDefaultIP(String strInterface).
> That method invokes another method in the same class, getIPs(String strInterface),
> and the first IP address returned is used no matter whether it is IPv4 or IPv6.
> I have fixed it by modifying org.apache.hadoop.net.DNS.getIPs(String
> strInterface)
> so that it only returns IPv4 addresses.
> It is working for me now,
> but whenever Hadoop is upgraded I have to modify it again.
> To avoid the problem,
> I modified a method in class org.apache.hadoop.net.DNS.
> The following is the modified code of that method; it no longer returns IPv6
> addresses.
> /**
>  * Returns all the IPs associated with the provided interface, if any, in
>  * textual form.
>  *
>  * @param strInterface
>  *            The name of the network interface to query (e.g. eth0)
>  * @return A string vector of all the IPs associated with the provided
>  *         interface
>  * @throws UnknownHostException
>  *             If an UnknownHostException is encountered in querying the
>  *             default interface
>  */
> public static String[] getIPs(String strInterface)
>     throws UnknownHostException {
>   try {
>     NetworkInterface netIF = NetworkInterface.getByName(strInterface);
>     if (netIF == null) {
>       // Unknown interface: fall back to the local host address, as before.
>       return new String[] { InetAddress.getLocalHost().getHostAddress() };
>     } else {
>       Vector<String> ips = new Vector<String>();
>       Enumeration<InetAddress> e = netIF.getInetAddresses();
>       while (e.hasMoreElements()) {
>         String addr = e.nextElement().getHostAddress();
>         // Heuristic: a dotted-quad IPv4 literal is at most 15 characters,
>         // so only addresses of that length or shorter are kept.
>         if (addr.length() <= 15) {
>           ips.add(addr);
>         }
>         // original line: ips.add(((InetAddress) e.nextElement()).getHostAddress());
>       }
>       return ips.toArray(new String[] {});
>     }
>   } catch (SocketException e) {
>     return new String[] { InetAddress.getLocalHost().getHostAddress() };
>   }
> }
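> An alternative to the 15-character length check would be to test the address
> type directly with java.net.Inet4Address. A rough, untested sketch of such a
> filter (ipv4Only is just a hypothetical helper name, not part of the patch):
>   // Keeps only the IPv4 addresses of an interface, selected by type rather
>   // than by the textual length of the address.
>   static Vector<String> ipv4Only(NetworkInterface netIF) {
>     Vector<String> ips = new Vector<String>();
>     Enumeration<InetAddress> e = netIF.getInetAddresses();
>     while (e.hasMoreElements()) {
>       InetAddress ia = e.nextElement();
>       if (ia instanceof Inet4Address) {
>         ips.add(ia.getHostAddress());
>       }
>     }
>     return ips;
>   }
> getIPs could then build its result from ipv4Only(netIF) and keep the existing
> fall-back to InetAddress.getLocalHost().getHostAddress().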

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.