Posted to common-issues@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2013/11/01 02:52:19 UTC

[jira] [Commented] (HADOOP-9478) Fix race conditions during the initialization of Configuration related to deprecatedKeyMap

    [ https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810945#comment-13810945 ] 

Andrew Wang commented on HADOOP-9478:
-------------------------------------

Hey Colin, thanks for taking this on. I like the overall idea (roughly sketched after the comments below); it's a pity we can't use a built-in Java class for this, but needs must when synchronizing across two maps. Some review comments:

* testAndSetAccessed: should this instead be named getAndSetAccessed?
* DeprecationContext#containsKey is never used
* I prefer using {{Preconditions}} checks over throwing a raw {{IllegalArgumentException}}; they produce a nicer message
* I'd rather not expose that new {{addDeprecations(DeprecationDelta[] deltas)}} method publicly; users prefer manipulating strings. It seems like a previous contributor was already trying to move toward simplifying this API to just the {{String}} variants by deprecating the {{String[]}} versions.
* The above would also help shrink the diff
* Can we just do {{deprecationContext.get()}} in {{handleDeprecation(String)}} rather than passing it down in {{handleDeprecation}} etc?
* loadResource, could you move the global deprecation get down to where it's used for the first time?
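
To make the copy-on-write idea concrete, here's roughly the shape I have in mind (a minimal sketch, not the patch itself: the names are illustrative, and it carries a single map where the real DeprecationContext tracks two):

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

import com.google.common.base.Preconditions;

// Copy-on-write pattern: readers take one lock-free get() on an immutable
// snapshot; writers copy the snapshot, apply their deltas, and atomically
// swap in the new one. No reader ever sees a HashMap mid-resize.
class DeprecationContextSketch {
  private final Map<String, String> deprecatedKeyMap;

  private DeprecationContextSketch(Map<String, String> map) {
    // Never mutated after construction, so reads need no locking.
    this.deprecatedKeyMap = Collections.unmodifiableMap(map);
  }

  private static final AtomicReference<DeprecationContextSketch> CONTEXT =
      new AtomicReference<DeprecationContextSketch>(
          new DeprecationContextSketch(new HashMap<String, String>()));

  // Writer path: copy, merge, compare-and-swap; retry if we lost a race.
  static void addDeprecations(Map<String, String> deltas) {
    Preconditions.checkNotNull(deltas, "deltas must not be null");
    DeprecationContextSketch prev;
    DeprecationContextSketch next;
    do {
      prev = CONTEXT.get();
      Map<String, String> merged =
          new HashMap<String, String>(prev.deprecatedKeyMap);
      merged.putAll(deltas);
      next = new DeprecationContextSketch(merged);
    } while (!CONTEXT.compareAndSet(prev, next));
  }

  // Reader path: a single volatile read up front, then only immutable state.
  static String handleDeprecation(String key) {
    return CONTEXT.get().deprecatedKeyMap.get(key);
  }
}
{code}

Done this way, {{handleDeprecation(String)}} naturally needs just the one {{deprecationContext.get()}} up front, which is the simplification suggested above.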

> Fix race conditions during the initialization of Configuration related to deprecatedKeyMap
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.0.0-alpha
>         Environment: OS:
> CentOS release 6.3 (Final)
> JDK:
> java version "1.6.0_27"
> Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
> Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
> Hadoop:
> hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
> Security:
> Kerberos
>            Reporter: Dongyong Wang
>            Assignee: Colin Patrick McCabe
>         Attachments: HADOOP-9478.001.patch, HADOOP-9478.002.patch, HADOOP-9478.003.patch, hadoop-9478-1.patch, hadoop-9478-2.patch
>
>
> When we launch a client application that uses Kerberos security, the FileSystem can't be created because of the exception 'java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil'.
> I checked the exception stack trace; it appears to be caused by the unsafe get operation on the deprecatedKeyMap used by org.apache.hadoop.conf.Configuration.
> So I wrote a simple test case:
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
> public class HTest {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.addResource("core-site.xml");
>         conf.addResource("hdfs-site.xml");
>         FileSystem fileSystem = FileSystem.get(conf);
>         System.out.println(fileSystem);
>         System.exit(0);
>     }
> }
> Then I launched this test case many times; the following exception was thrown:
> Exception in thread "TGT Renewer for XXX" java.lang.ExceptionInInitializerError
>      at org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
>      at org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
>      at org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
>      at java.util.HashMap.getEntry(HashMap.java:345)
>      at java.util.HashMap.containsKey(HashMap.java:335)
>      at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
>      at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
>      at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
>      at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
>      at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
>      at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
>      at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
>      ... 4 more
> Exception in thread "main" java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>      at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
>      at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
>      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
>      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
>      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
>      at HTest.main(HTest.java:11)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>      at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:442)
>      ... 11 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil
>      at org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:231)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:159)
>      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:148)
>      at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:452)
>      at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:434)
>      at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:496)
>      at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:88)
>      ... 16 more
> If a HashMap is used in a multi-threaded environment, it is not enough to synchronize only the put operation; the get operations (e.g. containsKey) must be synchronized too.
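> I can demonstrate the hazard with a plain HashMap outside of Hadoop (illustrative sketch; on JDK 6 this may eventually throw the same ArrayIndexOutOfBoundsException as above, spin in an infinite loop, or silently misbehave, because unsynchronized reads race against table resizes):
> {code}
> import java.util.HashMap;
> import java.util.Map;
>
> // One thread grows a plain HashMap while another calls containsKey().
> // A read that races a resize can fail inside HashMap.getEntry, which is
> // exactly the frame at the top of the stack trace above.
> public class HashMapRaceDemo {
>     private static final Map<String, String> map = new HashMap<String, String>();
>
>     public static void main(String[] args) throws Exception {
>         Thread writer = new Thread(new Runnable() {
>             public void run() {
>                 for (int i = 0; i < 1000000; i++) {
>                     map.put("key" + i, "v"); // triggers periodic resizes
>                 }
>             }
>         });
>         writer.start();
>         while (writer.isAlive()) {
>             map.containsKey("key0"); // unsynchronized read during resize
>         }
>         writer.join();
>     }
> }
> {code}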
> The simple workaround is to trigger the initialization of SecurityUtil before creating the FileSystem, but I think the gets on deprecatedKeyMap should be synchronized as well.
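> For example, a minimal sketch of that direction (illustrative only; guarding every get with the same monitor the puts use would work just as well, at the cost of contention):
> {code}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> // A thread-safe map makes the read path safe without external locking:
> // ConcurrentHashMap tolerates concurrent readers and writers by design.
> class SafeDeprecationMap {
>     private static final Map<String, String> deprecatedKeyMap =
>             new ConcurrentHashMap<String, String>();
>
>     static void addDeprecation(String oldKey, String newKey) {
>         deprecatedKeyMap.put(oldKey, newKey);
>     }
>
>     static String getDeprecated(String key) {
>         return deprecatedKeyMap.get(key); // safe lock-free read
>     }
> }
> {code}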
> Thanks. 


