Posted to issues@flink.apache.org by "luoyuxia (Jira)" <ji...@apache.org> on 2022/04/14 05:30:00 UTC

[jira] [Created] (FLINK-27244) Support subdirectories with Hive tables

luoyuxia created FLINK-27244:
--------------------------------

             Summary: Support subdirectories with Hive tables
                 Key: FLINK-27244
                 URL: https://issues.apache.org/jira/browse/FLINK-27244
             Project: Flink
          Issue Type: Sub-task
          Components: Connectors / Hive
            Reporter: luoyuxia


Hive supports reading directories recursively by setting the property 'mapred.input.dir.recursive=true', and Spark also supports [such behavior|https://stackoverflow.com/questions/42026043/how-to-recursively-read-hadoop-files-from-directory-using-spark].
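For reference, a minimal sketch of how this is enabled on the Hive side (these are the standard Hive/Hadoop property names; whether both are required can vary by Hive version):
{code:java}
-- Hive session settings that allow reads to recurse into
-- subdirectories of a table/partition location
SET mapred.input.dir.recursive=true;
SET hive.mapred.supports.subdirectories=true;
{code}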

In the normal case, reading a recursive directory layout does not come up, but it can happen in the following case:

I have a partitioned table `fact_tz` with day/hour partitions:
{code:java}
CREATE TABLE fact_tz(x int) PARTITIONED BY (ds STRING, hr STRING) {code}
Then I want to create an external table `fact_daily` referring to `fact_tz`, but with a coarser-grained partition by day.
{code:java}
CREATE EXTERNAL TABLE fact_daily(x int) PARTITIONED BY (ds STRING) LOCATION 'fact_tz_location';

ALTER TABLE fact_daily ADD PARTITION (ds='1') LOCATION 'fact_tz_location/ds=1';{code}
But it throws the exception "Not a file: fact_tz_location/ds=1" when trying to query the table `fact_daily`, because the partition location is the first level of the original partitioning and is actually a directory containing the hr=* subdirectories rather than data files.
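To make the layout concrete, a sketch of the directory structure and the query that triggers the error (the file names are illustrative, not taken from a real run):
{code:java}
-- Layout under fact_tz_location written by the original fine-grained table:
--   fact_tz_location/ds=1/hr=00/part-00000
--   fact_tz_location/ds=1/hr=01/part-00000
--
-- The coarse-grained partition points at fact_tz_location/ds=1, which only
-- contains the hr=* subdirectories, so a non-recursive file listing fails:
SELECT * FROM fact_daily WHERE ds='1';
{code}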



--
This message was sent by Atlassian Jira
(v8.20.1#820001)