Posted to issues@spark.apache.org by "Julien Baley (JIRA)" <ji...@apache.org> on 2016/02/18 20:24:18 UTC

[jira] [Comment Edited] (SPARK-13046) Partitioning looks broken in 1.6

    [ https://issues.apache.org/jira/browse/SPARK-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152718#comment-15152718 ] 

Julien Baley edited comment on SPARK-13046 at 2/18/16 7:24 PM:
---------------------------------------------------------------

Sorry it took me so long to come back to you.

We're using Hive (and Java), and I'm calling `hiveContext.createExternalTable("table_name", "s3://bucket/some_path/", "parquet");`, i.e. I believe I'm passing the correct path, and Spark is perhaps inferring something incorrectly along the way?

I've changed my call to:
`hiveContext.createExternalTable("table_name", "parquet", ImmutableMap.of("path", "s3://bucket/some_path/", "basePath", "s3://bucket/some_path/"));`
Is that what you meant, [~yhuai]?
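(In case that's closer to what you meant by setting basePath: this is what I understand the read-side equivalent to look like in Java. A sketch only, assuming the same hiveContext and the bucket path from above.)

{code}
import org.apache.spark.sql.DataFrame;

// Sketch only: read the Parquet data through the DataFrameReader, with basePath
// pointing at the partition root so that discovery treats date_received=... and
// fingerprint=... as partition columns.
DataFrame df = hiveContext.read()
    .format("parquet")
    .option("basePath", "s3://bucket/some_path/")
    .load("s3://bucket/some_path/");

df.printSchema();  // the partition columns should appear here if discovery worked
{code}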

When I try to query the table created this way, it gets me:
org.apache.spark.SparkException: Failed to merge incompatible data types StringType and StructType(StructField(name,StringType,true), StructField(version,StringType,true))
so I assume something is still going wrong underneath.
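
To narrow down where the StringType vs StructType conflict comes from, I suppose one could load each partition directory on its own and compare schemas; a rough Java sketch (the partition paths are just the ones listed in the issue description):

{code}
// Sketch only: read two partition directories separately and print their schemas,
// to see which column is a plain string in one set of files and a
// struct(name, version) in another.
hiveContext.read().parquet("s3://bucket/some_path/date_received=2016-01-13/fingerprint=2f6a09d370b4021d/").printSchema();
hiveContext.read().parquet("s3://bucket/some_path/date_received=2016-01-14/fingerprint=2f6a09d370b4021d/").printSchema();
{code}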


was (Author: julien.baley):
Sorry it took me so long to come back to you.

We're using Hive (and Java), and I'm calling `hiveContext.createExternalTable("table_name", "s3://bucket/some_path/", "parquet");`, i.e. I believe I'm passing the correct path, and Spark is perhaps inferring something incorrectly along the way?

I don't think I have a way to set basePath from there? [~yhuai], do you mean calling `sqlContext.read.option(key, value)`? Is there a way I can access the SQLContext from my HiveContext?

> Partitioning looks broken in 1.6
> --------------------------------
>
>                 Key: SPARK-13046
>                 URL: https://issues.apache.org/jira/browse/SPARK-13046
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Julien Baley
>
> Hello,
> I have a list of files in s3:
> {code}
> s3://bucket/some_path/date_received=2016-01-13/fingerprint=2f6a09d370b4021d/{_SUCCESS,metadata,some parquet files}
> s3://bucket/some_path/date_received=2016-01-14/fingerprint=2f6a09d370b4021d/{_SUCCESS,metadata,some parquet files}
> s3://bucket/some_path/date_received=2016-01-15/fingerprint=2f6a09d370b4021d/{_SUCCESS,metadata,some parquet files}
> {code}
> Up to 1.5.2, this all worked well: passing s3://bucket/some_path/ (the same prefix for all three lines) would correctly identify two key/value pairs, one for `date_received` and one for `fingerprint`.
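> For reference, the read itself looks roughly like this (Java; a sketch assuming an existing SQLContext named sqlContext, not our exact code):
> {code}
> // On 1.5.2, loading the common prefix picked up both partition columns,
> // so date_received and fingerprint showed up in the schema.
> sqlContext.read().parquet("s3://bucket/some_path/").printSchema();
> {code}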
> From 1.6.0, I get the following exception:
> {code}
> assertion failed: Conflicting directory structures detected. Suspicious paths
> s3://bucket/some_path/date_received=2016-01-13
> s3://bucket/some_path/date_received=2016-01-14
> s3://bucket/some_path/date_received=2016-01-15
> {code}
> That is to say, the partitioning code now fails to identify date_received=2016-01-13 as a key/value pair.
> I can see that there has been some activity on spark/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala recently, so that seems related (especially the commits https://github.com/apache/spark/commit/7b5d9051cf91c099458d092a6705545899134b3b and https://github.com/apache/spark/commit/de289bf279e14e47859b5fbcd70e97b9d0759f14).
> If I read the tests added in those commits correctly:
> -they don't seem to actually test the return value, only that the call doesn't crash;
> -they only test cases where the s3 path contains a single key/value pair (a test with two pairs would otherwise have caught the bug; see the sketch just below).
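> Here's a rough illustration, using only the public DataFrame API rather than the internal PartitioningUtils suite, of a check that exercises two key/value levels (paths and values are placeholders, not our real data):
> {code}
> import java.util.Arrays;
> import org.apache.spark.sql.DataFrame;
> import org.apache.spark.sql.RowFactory;
> import org.apache.spark.sql.types.DataTypes;
> import org.apache.spark.sql.types.StructType;
>
> // Write a tiny DataFrame partitioned by two columns, read it back from the
> // root, and check that both partition columns are rediscovered.
> StructType schema = DataTypes.createStructType(Arrays.asList(
>     DataTypes.createStructField("value", DataTypes.StringType, false),
>     DataTypes.createStructField("date_received", DataTypes.StringType, false),
>     DataTypes.createStructField("fingerprint", DataTypes.StringType, false)));
>
> DataFrame input = sqlContext.createDataFrame(
>     Arrays.asList(
>         RowFactory.create("a", "2016-01-13", "2f6a09d370b4021d"),
>         RowFactory.create("b", "2016-01-14", "2f6a09d370b4021d")),
>     schema);
>
> input.write().partitionBy("date_received", "fingerprint").parquet("/tmp/two_level_partitions");
>
> DataFrame back = sqlContext.read().parquet("/tmp/two_level_partitions");
> back.printSchema();                                   // both partition columns should be listed
> back.select("date_received", "fingerprint").show();   // and their values should round-trip
> {code}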
> This is problematic for us, as we're trying to migrate all of our Spark services to 1.6.0 and this bug is a real blocker. I know it's possible to force a 'union', but I'd rather not do that if the bug can be fixed.
> Any questions, please shoot.


