Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/09/06 04:49:25 UTC

[GitHub] [spark] itholic commented on a change in pull request #33907: [SPARK-36610][PYTHON] Add `thousands` argument to `ps.read_csv`.

itholic commented on a change in pull request #33907:
URL: https://github.com/apache/spark/pull/33907#discussion_r702568290



##########
File path: python/pyspark/pandas/namespace.py
##########
@@ -407,6 +410,20 @@ def read_csv(
         index_spark_column_names = []
         index_names = []
 
+    data_spark_columns = [scol_for(sdf, col) for col in column_labels.values()]
+    if thousands is not None:

Review comment:
      I think pandas simply replaces the string specified by the `thousands` parameter with an empty string whenever the column can be cast to a numeric type, regardless of any locale convention for digit grouping??
   
   For example,
   
   ```csv
   name;age;job;money
   Jorge;30;Developer;10000,,00,0,0,0,0
   Bob;32;Developer;-1234,424,142424,0
   ```
   
   ```python
   >>> pd.read_csv(path, sep=";", thousands=",")
       name  age        job           money
   0  Jorge   30  Developer     10000000000
   1    Bob   32  Developer -12344241424240
   ```
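    To make the behavior above reproducible, here is a minimal self-contained sketch (using `io.StringIO` in place of a file path) of what the pandas output suggests: every occurrence of the `thousands` character is stripped before numeric conversion, with no validation of the grouping.
    
    ```python
    import io
    
    import pandas as pd
    
    # Same data as above; the "," placement follows no locale convention.
    csv_data = """name;age;job;money
    Jorge;30;Developer;10000,,00,0,0,0,0
    Bob;32;Developer;-1234,424,142424,0
    """
    
    # pandas removes every "," from the money field, then casts to int.
    df = pd.read_csv(io.StringIO(csv_data), sep=";", thousands=",")
    print(df["money"].tolist())  # [10000000000, -12344241424240]
    ```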




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org