Posted to issues@spark.apache.org by "Dr Mich Talebzadeh (JIRA)" <ji...@apache.org> on 2016/08/13 13:32:20 UTC
[jira] [Created] (SPARK-17047) Spark 2 cannot create ORC table when CLUSTERED.
Dr Mich Talebzadeh created SPARK-17047:
------------------------------------------
Summary: Spark 2 cannot create ORC table when CLUSTERED.
Key: SPARK-17047
URL: https://issues.apache.org/jira/browse/SPARK-17047
Project: Spark
Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Dr Mich Talebzadeh
Creating an ORC table with a CLUSTERED BY clause no longer works in Spark 2. The following DDL, which was accepted before, now fails:
CREATE TABLE test.dummy2
(
ID INT
, CLUSTERED INT
, SCATTERED INT
, RANDOMISED INT
, RANDOM_STRING VARCHAR(50)
, SMALL_VC VARCHAR(10)
, PADDING VARCHAR(10)
)
CLUSTERED BY (ID) INTO 256 BUCKETS
STORED AS ORC
TBLPROPERTIES ( "orc.compress"="SNAPPY",
"orc.create.index"="true",
"orc.bloom.filter.columns"="ID",
"orc.bloom.filter.fpp"="0.05",
"orc.stripe.size"="268435456",
"orc.row.index.stride"="10000" )
scala> HiveContext.sql(sqltext)   // sqltext holds the CREATE TABLE statement above
org.apache.spark.sql.catalyst.parser.ParseException:
Operation not allowed: CREATE TABLE ... CLUSTERED BY(line 2, pos 0)
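A possible workaround (a sketch, not part of the original report): in Spark 2 bucketing is supported through the DataFrameWriter API rather than the CREATE TABLE ... CLUSTERED BY DDL. Assuming a SparkSession `spark` and a DataFrame `df` holding the rows for test.dummy2, something like the following writes a bucketed ORC table; the extra ORC TBLPROPERTIES from the DDL (bloom filters, stripe size, index stride) have no direct writer equivalent here and would still need to be set through Hive.

```scala
// Sketch of a Spark 2 workaround for the rejected CLUSTERED BY DDL.
// Assumes `spark` (SparkSession) and `df` (DataFrame with the columns
// from the CREATE TABLE above) already exist; names are illustrative.
df.write
  .format("orc")
  .option("compression", "snappy")  // replaces "orc.compress"="SNAPPY"
  .bucketBy(256, "ID")              // replaces CLUSTERED BY (ID) INTO 256 BUCKETS
  .sortBy("ID")
  .saveAsTable("test.dummy2")
```

Note that bucketBy only works together with saveAsTable (not save), so the table must be created through the metastore.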
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org