Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/25 15:12:46 UTC

[GitHub] asmello edited a comment on issue #23882: [SPARK-26979][PySpark][WIP] Add missing column name support for some SQL functions

URL: https://github.com/apache/spark/pull/23882#issuecomment-467047983
 
 
   > PySpark side can also be easily done by `lower(df.col)`
   
   I actually consider this an anti-pattern. Defining an explicit dependency on the `df` dataframe has several drawbacks:
   
   * You might rename the dataframe variable, and then this breaks once for every column reference;
   * You rely on the variable name being very short for this to be convenient to write;
   * If your column name has spaces or other unsupported characters, you have to access it by `df["foo bar"]`, which is just as bad as `col("foo bar")`;
   * This will never be as clean and readable as simply passing the column name as a string.
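
   To make the comparison concrete, here is a minimal sketch (not the actual PySpark source; `Column`, `_to_column`, and `lower` below are illustrative stand-ins) of the dispatch pattern that lets a SQL function wrapper accept either a Column object or a plain string column name:

   ```python
   class Column:
       """Minimal stand-in for pyspark.sql.Column, for illustration only."""
       def __init__(self, expr):
           self.expr = expr

   def _to_column(col):
       # If the caller passed a string, resolve it to a Column;
       # otherwise pass the existing Column through unchanged.
       return col if isinstance(col, Column) else Column(col)

   def lower(col):
       # With this check in place, both styles work:
       #   lower(df.name)   -> Column-based (couples the call to `df`)
       #   lower("name")    -> string-based (no dataframe reference needed)
       c = _to_column(col)
       return Column(f"lower({c.expr})")
   ```

   The string form avoids every issue listed above: no dataframe variable to rename, nothing to keep short, and names with spaces need no special accessor.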

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org