Posted to common-issues@hadoop.apache.org by "zhihai xu (JIRA)" <ji...@apache.org> on 2016/06/08 02:27:21 UTC

[jira] [Updated] (HADOOP-13247) The CACHE entry in FileSystem is not removed if exception happened in close

     [ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhihai xu updated HADOOP-13247:
-------------------------------
    Attachment: HADOOP-13247.000.patch

> The CACHE entry in FileSystem is not removed if exception happened in close
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-13247
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13247
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.8.0
>            Reporter: zhihai xu
>            Assignee: zhihai xu
>         Attachments: HADOOP-13247.000.patch
>
>
> The CACHE entry in FileSystem is not removed if an exception happens in close(). This causes a "Filesystem closed" IOException if the same FileSystem instance is used later.
> The following is the stack trace for the exception thrown from close():
> {code}
> 2016-06-07 18:21:18,201 ERROR hive.ql.exec.DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.reflect.UndeclaredThrowableException
>         at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:756)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4022)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:172)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1679)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1422)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1205)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1052)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1047)
>         at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:158)
>         at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:76)
>         at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:219)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:231)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.UndeclaredThrowableException
>         at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
>         at org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1383)
>         at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2006)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:900)
>         at org.apache.hadoop.hive.metastore.Warehouse.closeFs(Warehouse.java:122)
>         at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:497)
>         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.createTempTable(SessionHiveMetaStoreClient.java:345)
>         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:93)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:652)
>         at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
>         at com.sun.proxy.$Proxy8.createTable(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:1909)
>         at com.sun.proxy.$Proxy8.createTable(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:750)
>         ... 21 more
> Caused by: java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:151)
>         ... 48 more
> {code}
> After this, the stale DistributedFileSystem instance remains in the FileSystem CACHE even though its DFSClient has been closed.
> As a result, any later operation through that cached DistributedFileSystem fails with a "Filesystem closed" IOException:
> {code}
> 2016-06-07 18:21:19,024 WARN org.apache.hive.service.cli.thrift.ThriftCLIService: Error opening session:
> org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: java.io.IOException: Filesystem closed
>         at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:288)
>         at org.apache.hive.service.cli.CLIService.openSession(CLIService.java:178)
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:428)
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:319)
>         at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
>         at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>         at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:511)
>         at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:138)
>         at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:280)
>         ... 12 more
> Caused by: java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1986)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
>         at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:587)
>         at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:545)
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:497)
>         ... 14 more
> {code}
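The failure mode described above can be illustrated outside Hadoop. The stack trace shows that close() runs processDeleteOnExit() before the cache entry is removed, so an exception there leaves the already-closed instance in the CACHE; one natural fix is to perform the eviction in a finally block. The following is a minimal, self-contained Java sketch of that idea; CachingFs, closeBuggy, and closeFixed are hypothetical names for illustration, not Hadoop's actual API:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for FileSystem's static CACHE keyed by URI/user.
class CachingFs {
    static final Map<String, CachingFs> CACHE = new ConcurrentHashMap<>();
    final String key;
    boolean clientClosed = false;

    CachingFs(String key) { this.key = key; }

    static CachingFs get(String key) {
        return CACHE.computeIfAbsent(key, CachingFs::new);
    }

    // Buggy close: eviction runs only after processDeleteOnExit(),
    // so an exception there leaves the closed instance in the cache.
    void closeBuggy() throws IOException {
        clientClosed = true;       // underlying client is closed first
        processDeleteOnExit();     // may throw, as in the stack trace above
        CACHE.remove(key);         // never reached on exception
    }

    // Fixed close: evict the entry even if processDeleteOnExit() fails.
    void closeFixed() throws IOException {
        try {
            clientClosed = true;
            processDeleteOnExit();
        } finally {
            CACHE.remove(key);     // always runs
        }
    }

    // Simulates the exists()/getFileInfo() failure seen during close.
    void processDeleteOnExit() throws IOException {
        throw new IOException("Filesystem closed");
    }
}

public class CacheEvictionDemo {
    public static void main(String[] args) {
        CachingFs fs = CachingFs.get("hdfs://nn1");

        try { fs.closeBuggy(); } catch (IOException ignored) { }
        // Bug: the closed instance is still cached; later callers get it
        // back from get() and hit "Filesystem closed".
        System.out.println("after buggy close, cached="
                + CachingFs.CACHE.containsKey("hdfs://nn1"));

        try { fs.closeFixed(); } catch (IOException ignored) { }
        System.out.println("after fixed close, cached="
                + CachingFs.CACHE.containsKey("hdfs://nn1"));
    }
}
```

With the buggy close the entry survives the failed close (cached=true), so a subsequent get() hands out a dead instance; with the finally-based eviction the entry is gone (cached=false) and the next get() constructs a fresh one.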



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org