Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/04/26 02:47:00 UTC
[jira] [Commented] (SPARK-21661) SparkSQL can't merge load table from Hadoop
[ https://issues.apache.org/jira/browse/SPARK-21661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16453394#comment-16453394 ]
Hyukjin Kwon commented on SPARK-21661:
--------------------------------------
Another note: we now also have the {{spark.hadoopRDD.ignoreEmptySplits}} configuration for HadoopRDD-related operations.
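
As a sketch, that configuration is typically supplied at application launch (it is read from the Spark application configuration; it was added after the 2.2.0 release this issue affects, and whether it fully avoids the empty output files below is not verified here):

{noformat}
# Sketch: skip empty Hadoop input splits when reading the source table
# (assumes a Spark version where this configuration exists)
spark-submit --conf spark.hadoopRDD.ignoreEmptySplits=true ...
{noformat}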
> SparkSQL can't merge load table from Hadoop
> -------------------------------------------
>
> Key: SPARK-21661
> URL: https://issues.apache.org/jira/browse/SPARK-21661
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: Dapeng Sun
> Assignee: Li Yuanjian
> Priority: Major
>
> Here is the original text of external table on HDFS:
> {noformat}
> Permission Owner Group Size Last Modified Replication Block Size Name
> -rw-r--r-- root supergroup 0 B 8/6/2017, 11:43:03 PM 3 256 MB income_band_001.dat
> -rw-r--r-- root supergroup 0 B 8/6/2017, 11:39:31 PM 3 256 MB income_band_002.dat
> ...
> -rw-r--r-- root supergroup 327 B 8/6/2017, 11:44:47 PM 3 256 MB income_band_530.dat
> {noformat}
> After the SparkSQL load, every input file produces an output file, even when the input file is 0 B. When Hive performs the load, the data files are merged according to the size of the original files.
> Reproduce:
> {noformat}
> CREATE EXTERNAL TABLE t1 (a int, b string) STORED AS TEXTFILE LOCATION "hdfs://xxx:9000/data/t1";
> CREATE TABLE t2 STORED AS PARQUET AS SELECT * FROM t1;
> {noformat}
> The resulting table t2 has many small files that contain no data.
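
As a possible workaround on later Spark versions (the coalesce/repartition SQL hints arrived in Spark 2.4, after the affected 2.2.0 release), the write side of the CTAS can be collapsed into a small, fixed number of output files; this is a sketch, not part of the original report, and the target of 8 files is an arbitrary illustration:

{noformat}
-- Sketch: collapse the CTAS output into at most 8 files
-- (COALESCE hint assumed available, i.e. Spark 2.4+)
CREATE TABLE t2 STORED AS PARQUET AS SELECT /*+ COALESCE(8) */ * FROM t1;
{noformat}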
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org