Posted to reviews@spark.apache.org by 351zyf <gi...@git.apache.org> on 2018/10/30 07:46:36 UTC
[GitHub] spark pull request #22888: SPARK-25881
GitHub user 351zyf opened a pull request:
https://github.com/apache/spark/pull/22888
SPARK-25881
Add parameter coerce_float
https://issues.apache.org/jira/browse/SPARK-25881
## What changes were proposed in this pull request?
When using PySpark's DataFrame.toPandas(), a decimal column in the Spark DataFrame turns into an object column in the pandas DataFrame:
>>> for i in df_spark.dtypes:
...     print(i)
...
('dt', 'string')
('cost_sum', 'decimal(38,3)')
('req_sum', 'bigint')
('pv_sum', 'bigint')
('click_sum', 'bigint')
>>> df_pd = df_spark.toPandas()
>>> df_pd.dtypes
dt object
cost_sum object
req_sum int64
pv_sum int64
click_sum int64
dtype: object
The parameter coerce_float of pd.DataFrame.from_records converts decimal.Decimal values to floating point:
>>> import pandas as pd
>>> arr = df_spark.collect()
>>> df2_pd = pd.DataFrame.from_records(arr, columns=df_spark.columns, coerce_float=True)
>>> df2_pd.dtypes
dt object
cost_sum float64
req_sum int64
pv_sum int64
click_sum int64
dtype: object
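Roughly, the idea is to thread this parameter through toPandas. A minimal standalone sketch of that idea, assuming the non-Arrow path builds the frame with pd.DataFrame.from_records (the helper name to_pandas is hypothetical; the real method also handles Arrow and dtype correction, omitted here):

import pandas as pd

def to_pandas(df, coerce_float=False):
    # df is a pyspark.sql.DataFrame. With coerce_float=True,
    # from_records downcasts decimal.Decimal values to float64
    # instead of leaving the column as object.
    return pd.DataFrame.from_records(df.collect(), columns=df.columns,
                                     coerce_float=coerce_float)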
## How was this patch tested?
Manually tested; see the example above.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/351zyf/spark SPARK-25881
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/22888.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #22888
----
commit edc2a6173c89315afddefbd0c29cfd98f80049f8
Author: zhangyefei <zh...@...>
Date: 2018-10-30T07:22:41Z
add parameter coerce_float
----
[GitHub] spark issue #22888: SPARK-25881
Posted by 351zyf <gi...@git.apache.org>.
Github user 351zyf commented on the issue:
https://github.com/apache/spark/pull/22888
> Then you can convert the type to double or float in the Spark DataFrame. This is easy to work around in either the pandas DataFrame or the Spark DataFrame. I don't think we should add this flag.
>
> BTW, the same feature would need to be added for when Arrow optimization is enabled as well.
Or could we correct this conversion in the function dataframe._to_corrected_pandas_type? Converting the decimal type manually every time doesn't sound good.
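For what it's worth, a sketch of what that might look like; the DecimalType branch is hypothetical and would trade the Decimal's exactness for a numeric dtype:

import numpy as np
from pyspark.sql.types import DecimalType

def _to_corrected_pandas_type(dt):
    # Hypothetical extension: map DecimalType to float64 so toPandas
    # can downcast the column (loses precision beyond float64).
    if isinstance(dt, DecimalType):
        return np.float64
    return None  # existing mappings (ByteType, ShortType, ...) omitted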
[GitHub] spark issue #22888: SPARK-25881
Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/22888
I would close this, @351zyf.
[GitHub] spark issue #22888: SPARK-25881
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22888
Can one of the admins verify this patch?
[GitHub] spark issue #22888: SPARK-25881
Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/22888
I think you can just manually convert it in the pandas DataFrame, no?
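For example, with the cost_sum column from the PR description:

>>> df_pd = df_spark.toPandas()
>>> df_pd['cost_sum'] = df_pd['cost_sum'].astype('float64')
>>> df_pd['cost_sum'].dtype
dtype('float64')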
[GitHub] spark issue #22888: SPARK-25881
Posted by 351zyf <gi...@git.apache.org>.
Github user 351zyf commented on the issue:
https://github.com/apache/spark/pull/22888
OK
[GitHub] spark issue #22888: SPARK-25881
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22888
Can one of the admins verify this patch?
[GitHub] spark pull request #22888: SPARK-25881
Posted by 351zyf <gi...@git.apache.org>.
Github user 351zyf closed the pull request at:
https://github.com/apache/spark/pull/22888
[GitHub] spark issue #22888: SPARK-25881
Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/22888
Then you can convert the type to double or float in the Spark DataFrame. This is easy to work around in either the pandas DataFrame or the Spark DataFrame. I don't think we should add this flag.
BTW, the same feature would need to be added for when Arrow optimization is enabled as well.
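For example, casting before the conversion (column name from the PR description):

>>> from pyspark.sql.functions import col
>>> df_pd = df_spark.withColumn('cost_sum', col('cost_sum').cast('double')).toPandas()
>>> df_pd['cost_sum'].dtype
dtype('float64')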
[GitHub] spark issue #22888: SPARK-25881
Posted by 351zyf <gi...@git.apache.org>.
Github user 351zyf commented on the issue:
https://github.com/apache/spark/pull/22888
And this also has no effect on timestamp values; tested.
[GitHub] spark issue #22888: SPARK-25881
Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/22888
You're introducing a flag to convert. I think enabling the flag is virtually the same as calling a function to convert.
[GitHub] spark issue #22888: SPARK-25881
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/22888
Can one of the admins verify this patch?
[GitHub] spark issue #22888: SPARK-25881
Posted by 351zyf <gi...@git.apache.org>.
Github user 351zyf commented on the issue:
https://github.com/apache/spark/pull/22888
> I think you can just manually convert it in the pandas DataFrame, no?
If I'm using the function toPandas, I don't think converting decimal to object is right. Aren't decimal values usually meant for calculation? I mean, numbers.