Posted to dev@phoenix.apache.org by "Josh Mahonin (JIRA)" <ji...@apache.org> on 2015/09/23 20:07:04 UTC

[jira] [Created] (PHOENIX-2288) Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame

Josh Mahonin created PHOENIX-2288:
-------------------------------------

             Summary: Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame
                 Key: PHOENIX-2288
                 URL: https://issues.apache.org/jira/browse/PHOENIX-2288
             Project: Phoenix
          Issue Type: Bug
    Affects Versions: 4.5.2
            Reporter: Josh Mahonin


When loading a Spark DataFrame from a Phoenix table with a 'DECIMAL' type, the underlying precision and scale aren't carried forward to Spark.
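
For illustration, a minimal reproduction might look like the following sketch. The table name, column, and ZooKeeper quorum are hypothetical, not from this report; it assumes the Spark 1.x DataFrame API and the phoenix-spark DataSource:

{code:scala}
// Hypothetical repro, e.g. in spark-shell with the phoenix-spark jar on the
// classpath, against a table created as:
//   CREATE TABLE TEST_TABLE (ID BIGINT NOT NULL PRIMARY KEY, AMT DECIMAL(10, 2));
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read
  .format("org.apache.phoenix.spark")
  .option("table", "TEST_TABLE")
  .option("zkUrl", "localhost:2181")
  .load()

// Expected: AMT shows up as decimal(10,2).
// Observed: the column's precision and scale are lost in the Spark schema.
df.printSchema()
{code}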

The Spark Catalyst schema converter should load these from the underlying column. They appear to be exposed in the ResultSetMetaData, but it would be cleaner if there were a way to expose them through ColumnInfo.
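
One possible shape for the fix, assuming the converter can reach the JDBC metadata (the helper name decimalTypeFor and its wiring are hypothetical, not existing Phoenix code). Spark's own JDBC data source takes a similar approach when resolving table schemas:

{code:scala}
// Sketch only: read precision/scale from java.sql.ResultSetMetaData and
// build a matching Catalyst DecimalType.
import java.sql.ResultSetMetaData
import org.apache.spark.sql.types.{DataType, DecimalType}

def decimalTypeFor(meta: ResultSetMetaData, column: Int): DataType = {
  val precision = meta.getPrecision(column)
  val scale = meta.getScale(column)
  // Some drivers report 0 when precision is unknown; fall back to the
  // unconstrained DecimalType (Spark 1.x) in that case.
  if (precision > 0) DecimalType(precision, scale) else DecimalType.Unlimited
}
{code}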

I'm not sure whether Pig has the same issue, but I suspect it may.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)