Posted to dev@phoenix.apache.org by "Josh Mahonin (JIRA)" <ji...@apache.org> on 2015/11/03 18:54:27 UTC
[jira] [Updated] (PHOENIX-2288) Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame
[ https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Mahonin updated PHOENIX-2288:
----------------------------------
Attachment: PHOENIX-2288.patch
GitHub PR + Spark unit test
> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame
> -------------------------------------------------------------------------------------
>
> Key: PHOENIX-2288
> URL: https://issues.apache.org/jira/browse/PHOENIX-2288
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.5.2
> Reporter: Josh Mahonin
> Attachments: PHOENIX-2288.patch
>
>
> When loading a Spark DataFrame from a Phoenix table with a 'DECIMAL' type, the underlying precision and scale aren't carried forward to Spark.
> The Spark Catalyst schema converter should load these from the underlying column. They appear to be exposed in the ResultSetMetaData, but it would be cleaner if there were a way to expose them through ColumnInfo. A sketch of the ResultSetMetaData approach follows the quoted description below.
> I'm not sure if Pig has the same issues or not, but I suspect it may.
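A minimal sketch of the ResultSetMetaData route, assuming the fix lives in the Catalyst
schema conversion. The helper name below is illustrative, not the actual phoenix-spark
code, and the fallback precision/scale is an assumption for when the metadata reports
no precision:

    import java.sql.ResultSetMetaData
    import org.apache.spark.sql.types.{DataType, DecimalType}

    // Hypothetical helper: derive a parameterized Spark DecimalType for
    // column `col` from JDBC metadata instead of dropping precision/scale.
    def phoenixDecimalToCatalyst(md: ResultSetMetaData, col: Int): DataType = {
      val precision = md.getPrecision(col) // 0 if no precision is reported
      val scale = md.getScale(col)
      if (precision > 0) DecimalType(precision, scale)
      else DecimalType(38, 18) // assumed fallback when metadata is absent
    }

Threading the precision and scale through ColumnInfo instead would avoid needing a live
ResultSetMetaData at schema-inference time, which is the cleaner route the description
mentions.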
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)