Posted to issues@hawq.apache.org by "Lei Chang (JIRA)" <ji...@apache.org> on 2016/04/10 12:03:25 UTC

[jira] [Closed] (HAWQ-484) Fix bug in gpcopy and pgdump.

     [ https://issues.apache.org/jira/browse/HAWQ-484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei Chang closed HAWQ-484.
--------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.0

> Fix bug in gpcopy and pgdump.
> -----------------------------
>
>                 Key: HAWQ-484
>                 URL: https://issues.apache.org/jira/browse/HAWQ-484
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Core
>            Reporter: Hubert Zhang
>            Assignee: Lei Chang
>             Fix For: 2.0.0
>
>
> When copying a parquet table, there is an error:
> Error from segment 5: ERROR:  file open error in file 'hdfs://localhost:8020/hawq_default/16385/71546/71596/71596' for relation 'copy_regression_nocol': No such file or directory
> DETAIL:  File does not exist: /hawq_default/16385/71546/71596/71596
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1728)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1671)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1651)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1625)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:497)
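>
> A minimal sketch of the kind of statement that hits this error (the table name and output path below are illustrative, not the actual regression-suite objects):
>
>     -- Hypothetical parquet-format table; HAWQ stores its data files on HDFS.
>     CREATE TABLE copy_parquet_demo (id int, val text)
>         WITH (appendonly = true, orientation = parquet);
>     INSERT INTO copy_parquet_demo VALUES (1, 'a'), (2, 'b');
>
>     -- COPY ... TO reads the relation's data files under the HDFS path for the
>     -- relation; the reported bug surfaces here as the "No such file or
>     -- directory" segment error shown above.
>     COPY copy_parquet_demo TO '/tmp/copy_parquet_demo.out';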



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)