Posted to common-issues@hadoop.apache.org by "dong (JIRA)" <ji...@apache.org> on 2013/04/16 14:09:16 UTC

[jira] [Created] (HADOOP-9478) The get operation of deprecatedKeyMap of org.apache.hadoop.conf.Configuration should be synchronized.

dong created HADOOP-9478:
----------------------------

             Summary: The get operation of deprecatedKeyMap of org.apache.hadoop.conf.Configuration should be synchronized. 
                 Key: HADOOP-9478
                 URL: https://issues.apache.org/jira/browse/HADOOP-9478
             Project: Hadoop Common
          Issue Type: Bug
          Components: conf
    Affects Versions: 2.0.0-alpha
         Environment: OS:
CentOS release 6.3 (Final)

JDK:
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)

Hadoop:
hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0

Security:
Kerberos
            Reporter: dong


When we launch a client application that uses Kerberos security, the FileSystem cannot be created because of the exception 'java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil'.

Checking the exception stack trace, it appears to be caused by an unsynchronized get operation on the deprecatedKeyMap used by org.apache.hadoop.conf.Configuration.

So I wrote a simple test case:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HTest {
    public static void main(String[] args) throws Exception {
        // Load the cluster configuration, including the HA/Kerberos settings.
        Configuration conf = new Configuration();
        conf.addResource("core-site.xml");
        conf.addResource("hdfs-site.xml");
        // FileSystem.get() builds the DFSClient; with security enabled this
        // also spawns the TGT-renewer thread that races on the same map.
        FileSystem fileSystem = FileSystem.get(conf);
        System.out.println(fileSystem);
        System.exit(0);
    }
}

Then I launched this test case many times, and the following exception was thrown:

Exception in thread "TGT Renewer for XXX" java.lang.ExceptionInInitializerError
     at org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
     at org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
     at org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
     at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
     at java.util.HashMap.getEntry(HashMap.java:345)
     at java.util.HashMap.containsKey(HashMap.java:335)
     at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
     at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
     at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
     at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
     at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
     at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
     at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
     ... 4 more
Exception in thread "main" java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
     at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
     at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
     at HTest.main(HTest.java:11)
Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
     at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:442)
     ... 11 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil
     at org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:231)
     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:159)
     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:148)
     at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:452)
     at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:434)
     at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:496)
     at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
     ... 16 more


If a HashMap is used in a multi-threaded environment, not only must the put operations be synchronized; the get operations (e.g. containsKey) must be synchronized too. An unsynchronized reader can observe the table mid-resize, which is what the ArrayIndexOutOfBoundsException inside HashMap.getEntry above shows.
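As a sketch of the proposed fix, the deprecated-key lookup could be backed by a ConcurrentHashMap so that containsKey/get are safe to call concurrently with writes, without any external locking. The class, field, and method names below are illustrative only, not the actual Configuration internals:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a thread-safe stand-in for Configuration's
// deprecatedKeyMap. ConcurrentHashMap makes both reads and writes safe
// under concurrency; a plain HashMap guarantees neither.
public class DeprecatedKeyMapSketch {
    private static final Map<String, String> deprecatedKeyMap =
            new ConcurrentHashMap<String, String>();

    static {
        // Example mapping of an old key to its replacement.
        deprecatedKeyMap.put("fs.default.name", "fs.defaultFS");
    }

    // Safe to call from any thread while another thread is putting:
    // no synchronized block needed.
    public static boolean isDeprecated(String key) {
        return deprecatedKeyMap.containsKey(key);
    }

    public static void main(String[] args) {
        System.out.println(isDeprecated("fs.default.name")); // true
        System.out.println(isDeprecated("fs.defaultFS"));    // false
    }
}
```

An alternative with the same effect would be to wrap every read and write of the existing HashMap in a synchronized block, but ConcurrentHashMap avoids contention on the hot read path.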

A simple workaround is to trigger the initialization of SecurityUtil before creating the FileSystem, but I think the proper fix is to synchronize the get operations on deprecatedKeyMap.
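The workaround can be sketched as follows: force the class's static initializer to run on the main thread, before any background thread (here, the TGT renewer) can race on it. A stand-in class plays the role of SecurityUtil below; in a real client you would call Class.forName("org.apache.hadoop.security.SecurityUtil") before FileSystem.get(conf):

```java
// Hypothetical sketch of the workaround: eagerly initialize a class so
// its static initializer cannot race with later threads.
public class EagerInitSketch {
    static class StandInSecurityUtil {
        static volatile boolean initialized = false;
        static {
            // In the real SecurityUtil this is where the static initializer
            // reads Configuration (and touches the shared deprecatedKeyMap).
            initialized = true;
        }
    }

    public static void main(String[] args) throws Exception {
        // Class.forName(name) runs the static block eagerly, on this thread,
        // before anything like a TGT-renewer thread has been started.
        Class.forName("EagerInitSketch$StandInSecurityUtil");
        System.out.println(StandInSecurityUtil.initialized); // true
    }
}
```

This only papers over the race, of course; any other class whose static initializer reads Configuration concurrently would still be exposed.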

Thanks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira