Posted to issues@hawq.apache.org by "Lirong Jian (JIRA)" <ji...@apache.org> on 2015/10/09 04:31:27 UTC

[jira] [Commented] (HAWQ-27) filespace create in same directory cause problems

    [ https://issues.apache.org/jira/browse/HAWQ-27?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14949787#comment-14949787 ] 

Lirong Jian commented on HAWQ-27:
---------------------------------

In this case, the two filespaces share the same HDFS path, so when one of them is deleted, the underlying directory is deleted as well, and the other filespace can no longer be accessed.

This issue was introduced by a workaround for system initialization when the default filespace directory already exists. The code snippet is as follows:

filespace.c: CreateFileSpace
{code}
	bool		existed;

	/* Error out only if the path exists AND is non-empty; an existing
	 * empty directory is tolerated so that initialization can proceed.
	 * This is what allows two filespaces to end up on the same path. */
	if (HdfsPathExistAndNonEmpty(encoded, &existed))
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("%s: File exists and non empty", encoded)));
{code}

To address this issue, we should add logic to check whether another filespace already shares the same HDFS path.

> filespace create in same directory cause problems
> -------------------------------------------------
>
>                 Key: HAWQ-27
>                 URL: https://issues.apache.org/jira/browse/HAWQ-27
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Dong Li
>            Assignee: Lirong Jian
>
> --
> -- if your hdfs port is 9000, use localhost:9000 to run the test
> --
> create FILESPACE fs1 ON hdfs ('localhost:8020/fs');
> create FILESPACE fs2 ON hdfs ('localhost:8020/fs');
> create tablespace tsinfs1 filespace fs1;
> create table a (i int) tablespace tsinfs1;
> insert into a VALUES (1);
> drop filespace fs2;
> select * from a;
> ERROR:  Append-Only Storage Read could not open segment file 'hdfs://localhost:8020/testfs/17201/17198/17203/1' for relation 'a'  (seg0 localhost:40000 pid=25656)
> DETAIL:
> File does not exist: /testfs/17201/17198/17203/1
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:68)
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:58)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1895)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1836)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1816)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1788)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:543)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> The directory fs was removed, so the table's data no longer exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)