Posted to issues@hawq.apache.org by "Dong Li (JIRA)" <ji...@apache.org> on 2015/10/09 04:33:26 UTC

[jira] [Updated] (HAWQ-27) filespace created in same directory causes problems

     [ https://issues.apache.org/jira/browse/HAWQ-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dong Li updated HAWQ-27:
------------------------
    Summary: filespace created in same directory causes problems  (was: filespace create in same directory cause problems)

> filespace created in same directory causes problems
> ---------------------------------------------------
>
>                 Key: HAWQ-27
>                 URL: https://issues.apache.org/jira/browse/HAWQ-27
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Dong Li
>            Assignee: Lirong Jian
>
> --
> -- If your HDFS NameNode listens on port 9000 instead of 8020, use localhost:9000 below.
> --
> CREATE FILESPACE fs1 ON hdfs ('localhost:8020/fs');
> CREATE FILESPACE fs2 ON hdfs ('localhost:8020/fs');
> CREATE TABLESPACE tsinfs1 FILESPACE fs1;
> CREATE TABLE a (i int) TABLESPACE tsinfs1;
> INSERT INTO a VALUES (1);
> DROP FILESPACE fs2;
> SELECT * FROM a;
> ERROR:  Append-Only Storage Read could not open segment file 'hdfs://localhost:8020/testfs/17201/17198/17203/1' for relation 'a'  (seg0 localhost:40000 pid=25656)
> DETAIL:
> File does not exist: /testfs/17201/17198/17203/1
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:68)
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:58)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1895)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1836)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1816)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1788)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:543)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> Dropping filespace fs2 removed the shared HDFS directory fs, which fs1 also points to, so the data files backing table a no longer exist.
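> A possible sanity check before dropping a filespace is to look for locations registered more than once. This is a sketch only, assuming HAWQ exposes the Greenplum-style pg_filespace_entry catalog with an fselocation column; verify the catalog name and column against your release:
>
> ```sql
> -- Hypothetical check: list HDFS locations claimed by more than one
> -- filespace entry. Dropping any one of them would delete data that
> -- the other filespaces still reference.
> SELECT fselocation, count(*) AS registrations
> FROM pg_filespace_entry
> GROUP BY fselocation
> HAVING count(*) > 1;
> ```
>
> A longer-term fix would be for CREATE FILESPACE to reject a location that is already in use, rather than relying on operators to run such a check.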



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)