Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2017/03/30 01:20:41 UTC
[jira] [Resolved] (SPARK-15427) Spark SQL doesn't support field case sensitive when load data use Phoenix
[ https://issues.apache.org/jira/browse/SPARK-15427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-15427.
----------------------------------
Resolution: Not A Problem
{{SELECT * FROM $table WHERE 1=0}} now seems changeable via a dialect thanks to SPARK-17614. I am resolving this. Please reopen if I misunderstood.
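For context, the probe query above is what Spark issues to discover a JDBC table's schema, and since SPARK-17614 a custom {{JdbcDialect}} can override {{getSchemaQuery}} to change it. A minimal, self-contained sketch of the default probe (the {{SchemaQuery}} class name here is hypothetical and for illustration only; the real hook is {{org.apache.spark.sql.jdbc.JdbcDialect.getSchemaQuery}}):

```java
public class SchemaQuery {
    // Default schema-probe query Spark builds for a JDBC source; a custom
    // JdbcDialect can override getSchemaQuery to return something else.
    static String getSchemaQuery(String table) {
        return "SELECT * FROM " + table + " WHERE 1=0";
    }

    public static void main(String[] args) {
        // Double-quoting identifiers inside dbtable keeps Phoenix from
        // upper-casing them when this probe runs.
        System.out.println(getSchemaQuery("(SELECT \"value\",\"name\" FROM \"user\")"));
    }
}
```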
I am also resolving this because the related code path appears to have changed substantially since this was reported.
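For readers hitting this today: Phoenix, like the SQL standard, folds unquoted identifiers to upper case, while double-quoted identifiers keep their case. A self-contained sketch of that normalization rule, assuming standard SQL behavior (the {{normalize}} helper is hypothetical, not a Phoenix API):

```java
public class IdentifierNormalization {
    // Hypothetical helper illustrating SQL-standard identifier folding
    // as Phoenix applies it; not a real Phoenix API.
    static String normalize(String identifier) {
        if (identifier.length() >= 2
                && identifier.startsWith("\"")
                && identifier.endsWith("\"")) {
            // Quoted: strip the quotes and preserve case.
            return identifier.substring(1, identifier.length() - 1);
        }
        // Unquoted: fold to upper case, per the SQL standard.
        return identifier.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("value"));     // VALUE -- why the lookup fails
        System.out.println(normalize("\"value\"")); // value -- quoting preserves case
    }
}
```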
> Spark SQL doesn't support field case sensitive when load data use Phoenix
> -------------------------------------------------------------------------
>
> Key: SPARK-15427
> URL: https://issues.apache.org/jira/browse/SPARK-15427
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, SQL
> Affects Versions: 1.5.0
> Reporter: deng
> Labels: easyfix, features, newbie
>
> I use Spark SQL to load data from Apache Phoenix:
> SQLContext sqlContext = new SQLContext(sc);
> Map<String, String> options = new HashMap<>();
> options.put("driver", driver);
> options.put("url", PhoenixUtil.p.getProperty("phoenixURL"));
> options.put("dbtable", "(select \"value\",\"name\" from \"user\")");
> DataFrame jdbcDF = sqlContext.load("jdbc", options);
> It always throws exception, like "can't find field VALUE".
> I tracked the code and found spark will use:
> val rs = conn.prepareStatement(s"SELECT * FROM $table WHERE 1=0").executeQuery()
> to get the fields. But the field names have already been upper-cased (e.g. "value" becomes VALUE), so it always throws "can't find field VALUE".
> It doesn't handle the case where data is loaded from a source whose field names are case sensitive.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org